Drivers
-
Off-topic electronics question: does "digital circuit" mean that it is clock-based, i.e. that the circuit board runs at a certain clock frequency? Does a washing machine's digital circuit board run at a certain clock rate just like a PC motherboard? Is that true?
Digital circuits are not by definition clocked, if by that you mean that there is a central clock setting the speed of "everything". Circuits may be asynchronous, going into a new state whenever there is a change on the inputs to the circuit. Think of a simple adder: you change the value on one of its inputs, and as soon as the voltages have stabilized, the sum of the two inputs is available at the output. This happens as quickly as the transistors are able to do all the necessary switching on and off - the adder doesn't sit down waiting for some 'Now Thou Shalt Add' clock pulse.

You can put together smaller circuits into larger ones, with the smaller circuits interchanging signals at arbitrary times. Think of character-based RS-232 ("COM port"): the line is completely idle between transmissions. When the sender wants to transfer a byte, it alerts the receiver with a 'start bit' (not carrying any data), at any time, independent of any clock ticks. This gives the receiver some time to activate its circuits. After the start bit follow 8 data bits and a 'stop bit', which gives the receiver time to handle the received byte before the next one comes in with another start bit. The bits have a width (i.e. duration) given by the line speed, but they are not aligned with any external clock ticks.

Modules within a larger circuit, such as a complete CPU, may communicate partly or fully by similar asynchronous signals. In a modern CPU with caches, pipelining, lookahead of various kinds, ... not everything starts immediately at the tick. Some circuits may have to wait, e.g. until a value is delivered to them from cache or from a register: that will happen somewhat later within the clock cycle. For a store, the address calculation must report 'Ready for data!' before the register value can be put out. Sometimes you may encounter circuits where the clock period is subdivided into smaller time units by a 'clock multiplier' (PCs usually have a multiplier that creates the main clock by multiplying up pulses from a lower-frequency crystal; the process can be repeated for even smaller units), but if you look inside a CPU, you should be prepared for a lot of signal lines not going by the central clock.

The great advantage of un-clocked logic is that it can work as fast as the transistors are able to switch: no circuit halts waiting for a clock tick telling it to go on - it goes on immediately. The disadvantage is that unless you keep a very close eye on the switching speed of the transistors, you may run into synchronization problems.
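To make the RS-232 framing above concrete, here is a minimal C sketch of how one byte is wrapped in a start bit, 8 data bits (LSB first) and a stop bit. It only models the bit sequence, not the electrical timing; the sender and receiver share nothing but the agreed bit duration.

```c
#include <stdint.h>
#include <stdio.h>

/* Character-based asynchronous (RS-232 style) framing: the line idles high;
 * a low start bit announces a byte at an arbitrary moment, 8 data bits
 * follow (LSB first), and a high stop bit ends the frame. There is no
 * common clock between sender and receiver, only an agreed bit duration. */
#define FRAME_BITS 10   /* 1 start + 8 data + 1 stop */

static void frame_byte(uint8_t data, int bits[FRAME_BITS])
{
    bits[0] = 0;                          /* start bit (space)      */
    for (int i = 0; i < 8; i++)
        bits[1 + i] = (data >> i) & 1;    /* data bits, LSB first   */
    bits[9] = 1;                          /* stop bit (mark)        */
}

int main(void)
{
    int bits[FRAME_BITS];
    frame_byte('A', bits);                /* 'A' = 0x41             */
    for (int i = 0; i < FRAME_BITS; i++)
        printf("%d", bits[i]);
    printf("\n");                         /* prints 0100000101      */
    return 0;
}
```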
-
Does a clocked digital circuit board have "cells" resembling CPU registers, something that acts as persistent memory between flashes/waves of current? I'm just trying to visualize how a board you plug into a motherboard works. In the short blackout between cycles, the information has to be kept somewhere.
-
Register-like mechanisms are present in almost all kinds of circuit boards, especially when dealing with interfaces to external units. I don't think 'cells' is a common term; they are more commonly called registers, occasionally 'latches' (for registers catching input values). Registers on peripheral interfaces are used for preserving internal values as well, not just values input or output. Quite often, they are addressed in similar ways.

With memory-mapped I/O (which is quite common in modern machines; the x86/x64 also has a separate port I/O space alongside it), one address is a straight variable, another address sets or returns control bits or data values in the physical interface circuitry, and a third address is a reference to the internal status word of the driver. So which is a 'register', which is a 'variable', and which one is actually latching an input or output value? The borders are blurry. Whether you call it a register, a variable, a latch or something else: it is there to preserve a value, usually for an extended period of time (when seen in relation to clock cycles).

When some interface card is assigned 8 word locations (i.e. memory addresses) in the I/O space of a memory-mapped machine, don't be surprised to see them referred to as 'registers', although you see them as if they were variable locations. When you address the variables / registers in your program, the address (or the lower part of it) is sent to the interface logic, which may interpret it in any way it finds useful. Maybe writing to it will save the data in a buffer on the interface. Maybe it will go to line control circuitry to set physical control lines coming out of the interface. There is no absolute standard; it is all up to the interface.
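As a rough illustration of how such memory-mapped 'registers' look from software, here is a C sketch with a made-up device: the base address, register offsets and status/control bits are all invented for the example, but the pattern (volatile accesses to fixed addresses that the interface logic interprets as it sees fit) is the common one. It compiles as-is, but of course only does something useful on hardware that actually has such a device.

```c
#include <stdint.h>

/* Hypothetical peripheral: a few word-sized "registers" at fixed offsets
 * from a made-up base address. A real device's data sheet defines the
 * actual layout. 'volatile' tells the compiler every access really must
 * hit the hardware, not a cached copy. */
#define DEV_BASE   0x40001000u
#define DEV_DATA   (*(volatile uint32_t *)(DEV_BASE + 0x0)) /* data in/out */
#define DEV_STATUS (*(volatile uint32_t *)(DEV_BASE + 0x4)) /* status bits */
#define DEV_CTRL   (*(volatile uint32_t *)(DEV_BASE + 0x8)) /* control bits */

#define STATUS_TX_READY (1u << 0)   /* hypothetical status bit  */
#define CTRL_ENABLE     (1u << 0)   /* hypothetical control bit */

/* Write one byte out through the device, waiting until it is ready. */
void dev_putc(uint8_t c)
{
    DEV_CTRL |= CTRL_ENABLE;        /* looks like a variable write, but it
                                       sets physical control lines          */
    while ((DEV_STATUS & STATUS_TX_READY) == 0)
        ;                           /* poll the latched status word         */
    DEV_DATA = c;                   /* this store lands in the device,
                                       not in RAM                           */
}
```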
-
I’m learning a lot from your posts, tronderen. Thank you for taking the time to write down thorough explanations. I bet there are other newbies around who find them useful too.
-
I have a question about PC motherboards. What is the purpose of all the chips on a motherboard? Some of them help various slots to function, but I’m wondering if there is a hierarchy among them. Is there a top motherboard chip that governs the communication between the CPU and everything else found on the motherboard (slots, drives, ports etc.)? How does the Operating System perform I/O operations? Does it talk directly to the Hard Drive, or is the communication mediated by the motherboard? If I want to make my own OS, how do I talk to the Hard Drive from Assembly?
-
What happens if a driver developer sends a command to a sound board (just a random pick) which the board doesn’t recognize / doesn’t know how to handle? Could that cause a crash of the sound board and require a restart? If the data on the sound board gets corrupted, could that make the entire OS unstable?
-
I believe that way back there was a graphics card, for either the Apple II or the IBM PC, before VGA, that if used incorrectly would destroy the CRT it was hooked to.
There were video cards a long time ago - that is, 40 to 50 years - where you did not select the scan rate from a table of predefined horizontal and vertical frequencies, but set the frequencies by absolute values. If you set one of them to zero (usually by forgetting to set it, or because setting it failed for some reason and you didn't handle it immediately), the electron beam wouldn't move at all but remain steady in one position. If you set one but not the other, the electron beam would continuously redraw a single scan line or column. The phosphor on the screen surface is not prepared for being continually excited at full strength, and it would "burn out", losing its capacity to shine. So you would have a black spot, or a black horizontal or vertical line across the screen. This damage was permanent and could not be repaired.

This is so long ago that I never worked with, or even saw, a video card that allowed me to do this damage to my screen. I do not know what that standard (if it was a standard) was called. My first video cards were 640x480 VGA, and the marketing heavily stressed that you could not destroy your screen whatever you sent to it. So the memory of these 'dangerous' video cards was still vivid in the 1980s (but I do believe that VGA's predecessor, EGA, was also 'safe').

Related to this "burn out" danger was the "burn in": everyone who had a PC in the 1990s remembers the great selection of screen savers back then. After a few minutes of idleness, a screen saver would kick in, taking control of the screen, usually with some dynamically changing, generated graphics. Some of them were great - I particularly remember the plumbing one, where every now and then a teapot appeared in the joints: we could sit for minutes waiting to see if any teapot would pop up :-)

These screen savers did save your screen: if you left your PC and screen turned on when leaving your office (or, for your home PC, when going to bed) with the desktop background shining, the bright picture elements in icons, the status line or whatever would slowly "consume" the phosphor in those spots. After some weeks or months, when you slid a white background window over those areas, you might see a shadow of your fixed-location icons and other elements in what should have been a pure white surface. The image of the desktop was "burnt into" the screen. Note that this has nothing to do with your display card, driver or other software: the signal to the screen is that of a pure white surface; the screen is incapable of displaying it as pure white any more.
-
There are almost as many answers to this question as there are motherboards :-) Or at least as there are CPU chip generations. Over the years, things have changed dramatically.

In the very old days, I/O signals could come directly from, or go directly to, the CPU pins. Extra components on the board were essentially for adapting the physical signals - RS-232 signals could go up to +/- 25V, which won't make the CPU chip happy. Gradually, we got support chips that e.g. took care of the entire RS-232 protocol, with timings, bit rate, start/stop bits etc. handled by a separate chip. Similarly for the 'LPT' (Line PrinTer) parallel port; it was handled by a separate chip.

The CPU usually had a single interrupt line - or possibly two, one non-maskable and one maskable. Soon you would have several interrupt sources, and another MB chip was added: an interrupt controller, with several input lines and internal logic for multiplexing and prioritizing them. Another chip might be a clock circuit. DMA used to be a separate chip. For the 286 CPU, floating point math required a supporting chip (the 287). Other CPUs had the memory management (paging, segment handling etc.) done by a separate chip: adding the MMU chip to the MC68000 (1979) gave it virtual memory capability comparable to the 386 (1985). Not until the 68030 (1987) was the MMU logic moved onto the main CPU chip.

There were some widespread "standard" support chips for basic things like clocking, interrupts and other basic functions. These were referred to as the chipset for the CPU. We still have that, but nowadays technology allows us to put all the old support functions and then some (quite a few!) onto a single support chip, of the same size and complexity as the CPU itself. Even though it is a single chip, we still call it a 'chipset'. Also, a number of functions essential to the CPU, such as clocking, memory management and cache (which started out as separate chips), were moved onto the CPU chip rather than being included in 'the chipset chip'.

You can view the chipset as an extension of the CPU. You may call it the 'top level' MB chip, if you like. In principle, it could have been merged into the main CPU, but for a couple of reasons, it is not: first, it acts as a (de)multiplexer for numerous I/O devices, each requiring a number of lines / pins. The CPU already has a huge number of pins (rarely under a thousand on modern CPUs; it can be up to two thousand). The CPU speaks to the chipset over an (extremely) fast connection, where it can send/receive the traffic for all the devices behind the chipset over a small number of very fast lines.
-
Calin Negru wrote:
How does the Operating System perform I/O operations? Does he talk directly to the Hard Drive or is the communication mediated by the motherboard?
You most definitely go via the motherboard! The OS talks to the chipset, which talks to some I/O bus - for a disk, that is typically SATA, USB or PCI-e. These present an abstracted view of the disk, making all disks look the same, except for obvious differences such as capacity. At the other (i.e. the disk) end is another piece of logic, mapping the abstract disk onto a concrete, specific one (i.e. mapping a logical address to a surface number, track number and sector number), and usually handling a disk cache. In modern disks, these tasks are so complex that they certainly require an embedded processor.

USB and PCI-e are both general protocols for various devices, not just disks. Sending a read request and receiving the data, or sending a write request and receiving an OK confirmation, is very much like sending a message on the internet: the software prepares a message header and message body according to specified standards and passes it to the electronics. The response (status, possibly with the data read) is received much like an internet message. Writing a disk block to a USB disk (regardless of the disk type - spinning, USB stick, flash disk, ...) or writing a document to a USB printer is done in similar ways, although the header fields may vary (but standards such as USB define message layouts for a lot of device classes, so that all devices of a certain class use similar fields).

All protocols are defined as a set of layers: the physical layer is always electronics - things like signal levels, bit rates etc. The bit stream is split into blocks, 'frames', with well-defined markers, maybe length fields and maybe a checksum for each frame (that varies with the protocol); that is the link layer. There may be a processor doing this work, but externally it appears as if it is hardware. Then, at the network layer, the data field in the frame is filled in with an address at the top, and usually a number of other management fields. For this, some sort of programmed logic (an embedded processor, or dedicated interface logic) is doing the job - but we are still outside the CPU/chipset. The chipset feeds the information to the interface logic, but doesn't address the USB or PCI-e frame as such, or the packet created (within the link frame) by the network layer. Both USB and PCI-e define such layered protocols.
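As a sketch of the "prepare a header and body, hand it to the electronics" step, here is a simplified C structure for a read request. The field layout, the opcode values and the submit_to_controller helper are invented for illustration; real buses (SATA, NVMe, USB mass storage, ...) each define their own exact formats, but the overall shape is similar: an opcode, a logical block address, a length, and a buffer for the data.

```c
#include <stdint.h>
#include <string.h>

/* Invented command layout: a real driver would fill in the exact fields
 * that the bus/device class standard specifies. */
struct disk_request {
    uint8_t  opcode;        /* e.g. 0x01 = read, 0x02 = write (made up)   */
    uint64_t lba;           /* logical block address: no track/head/sector,
                               the drive's own controller maps that        */
    uint32_t block_count;   /* how many blocks to transfer                 */
    void    *buffer;        /* where the data should end up                */
};

/* Hypothetical hand-off point; in a real driver this would place the
 * request in a queue/ring that the host controller hardware consumes. */
void submit_to_controller(const struct disk_request *req)
{
    (void)req;              /* placeholder for the interface logic         */
}

void read_blocks(uint64_t lba, uint32_t count, void *buf)
{
    struct disk_request req;
    memset(&req, 0, sizeof req);
    req.opcode      = 0x01; /* made-up "read" opcode                       */
    req.lba         = lba;
    req.block_count = count;
    req.buffer      = buf;
    submit_to_controller(&req);  /* the chipset/bus carries it from here   */
}
```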
-
I’ve had a good time reading that
-
What does driver development look like? What language do they use? Do they use machine code? Physically, the exchange between the motherboard and an extension board is raw bits. Do they convert that to human-readable numbers when they write drivers? Is a machine instruction a 32-bit word/variable? And yet another question, about processor machine instructions: the processor's ALU deals with true numbers, not machine instructions. The type of operation (addition, subtraction etc.) might be a machine instruction, but everything else is just numbers. There are other areas of the processor that are mostly machine-instruction oriented. Is that how it works?
-
In principle, a driver is very much like any other method / function / routine or whatever you call it. It may be written in any language - for all practical purposes, any compiled language; for performance/size reasons, interpreted languages are obviously not suited. There is one requirement for bottom-level drivers: the language must, one way or the other, allow you to access interface registers, hardware status indicators etc.

If you program in assembler (machine code), you have all facilities right at hand. In the old days, all drivers were written in assembler, without the ease of programming provided by medium/high level languages for data structuring, flow control etc. So from the 1970s, medium level languages came into use, providing data and flow structures, plus mechanisms for accessing hardware - e.g. by allowing 'inline assembly': most commonly, a special marker at the start of a source code line told the compiler that this line is not a high level statement but an assembly instruction. Usually, variable names in the high level code are available as defined symbols for the assembler instructions, but you must know how to address them (e.g. on the stack, as a static location etc.) in machine code.

The transition to medium/high level languages started in the late 1970s / early 1980s. Yet for many architectures / OSes, with all the old drivers written in assembler, it was often difficult to introduce medium/high level languages for new drivers. Maybe there wasn't even a suitable language available for the given architecture/OS, of which there were plenty in those days. So for established environments, assembler prevailed for many years. I guess that some drivers are written in assembler even today.

If the language doesn't provide inline assembly or an equivalent, you may write a tiny function in assembler to be called from a high level language. Maybe the function body is a single instruction, but the 'red tape' for handling the call procedure makes up a dozen instructions. So this is not a very efficient solution, but maybe the only one available. Some compilers provide 'intrinsics': those are function-looking statements in the high level language, but the compiler knows them and does not generate a function call, just a single machine instruction (or possibly a small handful) right in the instruction flow generated from the surrounding code. E.g. in the MS C++ compiler for ARM, you can generate the vector/array instructions of the CPU by 'calling' an intrinsic with the name of the instruction.
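Here is a small C sketch of the two mechanisms mentioned: GCC-style inline assembly (reading a byte from an x86 I/O port) and a compiler intrinsic (__builtin_popcount in GCC/Clang, which typically compiles down to a single instruction when the target supports it). Other compilers and CPUs use different syntax, so take this as one concrete example rather than the only way.

```c
#include <stdint.h>

/* 1) Inline assembly: read one byte from an x86 I/O port. The compiler
 *    places the single 'inb' instruction directly in the generated code,
 *    with the high-level variables wired to the right registers. */
static inline uint8_t port_read8(uint16_t port)
{
    uint8_t value;
    __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}

/* 2) An intrinsic: looks like a function call, but the compiler knows it
 *    and emits the machine instruction(s) inline, with no call overhead.
 *    Here: count the set bits in a word. */
static inline int count_set_bits(uint32_t x)
{
    return __builtin_popcount(x);
}
```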
-
Calin Negru wrote:
Is a machine instruction a 32 bit word/variable?
This is something I have been fighting since my student days! :-) What resides in a computer isn't "really" numbers. Or characters. Or instructions. Saying that an alphabetic character "really is stored as a number inside the machine" is plain BS! RAM, registers and whatever else hold bit patterns, period. Not even zeroes and ones, in any numeric sense. It is charged/uncharged. High/low voltage. High/low current. On/off. Not numbers.

When a stream of bits comes out of the machine, we may have a convention for presenting e.g. a given sequence of bits as the character 'A'. That is a matter of presentation. Alternatively, we may present it as the decimal number 65. This is no more a "true" presentation than 'A'. Or a dark grey dot in a monochrome raster image. If we have agreed upon the semantics of a given byte as an 'A', claiming anything else is simply wrong. The only valid alternative is to treat the byte as an uninterpreted bit string. And that is not as a sequence of numeric 0 and 1, which is an (incorrect) interpretation.

A CPU may interpret a bit sequence as an instruction. Presumably, this is also the semantics intended by the compiler generating the bit sequence. The semantics is that of, say, the ALU adding two registers - the operation itself, not a description of it. You may (rightfully) say: "But I cannot do that operation when I read the code." So for readability reasons, we make an (incorrect) presentation, grouping the bits by 4 and showing them as hexadecimal digits. We may go further, interpreting a number of bits as an index into a string table where we find the name of the operation. This doesn't change the bit sequence into a printable string; it remains a bit pattern, intended for the CPU's interpretation as a set of operations.

So it is all bit patterns. If we feed the bit patterns to a printer, we assume that the printer will interpret them as characters; hopefully that is correct. If we feed bit patterns to the CPU, we assume that it will interpret them as instructions. Usually, we keep those bit patterns that we intend to be interpreted as instructions by a CPU separate from those bit patterns we intend to be interpreted as characters, integers or real numbers, sound or images. That is mostly a matter of orderliness. And we cannot always keep a watertight bulkhead between those bit patterns intended for text or data and those intended as instructions.
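A small C program illustrates the point: the same 32-bit pattern, presented under three different conventions. Nothing in memory changes between the printouts; only our agreed interpretation does. (The character output assumes a little-endian CPU such as x86.)

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint32_t bits = 0x41424344u;          /* just a bit pattern            */

    /* ... read as an unsigned integer */
    printf("as unsigned: %u\n", (unsigned)bits);   /* 1094861636           */

    /* ... read as four ASCII characters (byte order depends on the CPU) */
    char text[5] = {0};
    memcpy(text, &bits, 4);
    printf("as chars:    %s\n", text);             /* "DCBA" on x86        */

    /* ... read as an IEEE-754 float built from the very same bits */
    float f;
    memcpy(&f, &bits, sizeof f);
    printf("as float:    %g\n", f);                /* about 12.14          */

    return 0;
}
```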
-
Thank you guys. I’m going to stop bugging you with my questions, at least for now.
-
Calin Negru wrote:
I’m going to stop bugging you with my questions, at least for now.
Don't worry! It is nice having someone ask questions, so that the responder is forced to straighten out things in his head in a way that makes them understandable. As long as you can handle somewhat lengthy answers, it is OK with me! :-)

When you get around to asking questions about networking, there is a risk that I might provide even longer and a lot more emotional answers. I am spending time nowadays straightening out why the Internet Protocol has p**ed me off for 30+ years! When I do that kind of thing, I often do it in the form of a lecturer or presenter who tries to explain ideas or principles, and must answer questions and objections from the audience. So I must get both the ideas and principles right, and the objections and 'smart' questions. That is really stimulating - trying to understand the good arguments for why IP, say, was created the way it was.

(It has been said that when Albert Einstein, as a university professor, got into some discussion with graduate students, and of course won it, he sometimes told the defeated student: OK, now you take my position to defend, and I will take yours! ... and again, Einstein won the discussion. If it isn't true, it sure is a good lie!)
-
> This is something I have been fighting

I know it's an important problem. If you don't understand that, it's like having a car with doors that don't close properly.