Drivers

Calin Negru

I have a question about PC motherboards. What is the purpose of all the chips on a motherboard? Some of them help various slots to function, but I'm wondering if there is a hierarchy among them. Is there a top motherboard chip that governs the communication between the CPU and everything else found on the motherboard (slots, drives, ports, etc.)? How does the Operating System perform I/O operations? Does it talk directly to the hard drive, or is the communication mediated by the motherboard? If I want to make my own OS, how do I talk to the hard drive from assembly?

trønderen
#13

There are almost as many answers to this question as there are motherboards :-) Or at least as there are CPU chip generations. Over the years, things have changed dramatically.

In the very old days, I/O signals could come directly from, or go directly to, the CPU pins. Extra components on the board were essentially for adapting the physical signals - RS-232 signals could go up to +/- 25 V, which won't make the CPU chip happy. Gradually, we got support chips that e.g. took care of the entire RS-232 protocol, with timing, bit rate, start/stop bits, etc. handled by a separate chip. The same went for the 'LPT' (Line PrinTer) parallel port; it was handled by a separate chip.

The CPU usually had a single interrupt line - or possibly two, one non-maskable and one maskable. Soon you would have several interrupt sources, and another motherboard chip was added: an interrupt controller, with several input lines and internal logic for multiplexing and prioritizing them. Another chip might be a clock circuit. DMA used to be a separate chip. For the 286 CPU, floating point math required a supporting chip (the 287). Other chips had the memory management (paging, segment handling, etc.) done by a separate chip: adding an external MMU chip to the MC68000 (1979) gave it virtual memory capability comparable to the 386 (1985). Not until the 68030 (1987) was the MMU logic moved onto the main CPU chip.

There were some widespread "standard" support chips for basic things like clocking, interrupts and other basic functions. These were referred to as the chipset for the CPU. We still have that, but nowadays technology allows us to put all the old support functions and then some (quite a few!) onto a single support chip, of the same size and complexity as the CPU itself. Even though it is a single chip, we still call it a 'chipset'. Also, a number of functions essential to the CPU, such as clocking, memory management and cache (which started out as separate chips), were moved onto the CPU chip rather than being included in 'the chipset chip'.

You can view the chipset as an extension of the CPU. You may call it the 'top level' MB chip, if you like. In principle, it could have been merged into the main CPU, but for a couple of reasons, it is not: first, it acts as a (de)multiplexer for numerous I/O devices, each requiring a number of lines/pins. The CPU already has a huge number of pins (rarely under a thousand on modern CPUs; it can be up to two thousand). The CPU speaks to the chipset over an (extremely) fast connection, where it can send/receive data for all those devices.
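To make the interrupt-controller part concrete, here is a minimal sketch - assuming a freestanding 32-bit x86 environment running at ring 0 - of how a hobby OS might program the classic 8259A interrupt controller pair that early PC chipsets carried (and that modern chipsets still emulate). The outb helper uses GCC-style inline assembly; ports 0x20/0x21 and 0xA0/0xA1 are the standard PC values.

    /* Sketch: remap the two legacy 8259A interrupt controllers so that
     * IRQs 0-15 land on interrupt vectors 0x20-0x2F instead of
     * colliding with the CPU's exception vectors. */

    static inline void outb(unsigned short port, unsigned char val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    void pic_remap(void)
    {
        outb(0x20, 0x11);  /* ICW1 to master: start init, ICW4 needed   */
        outb(0xA0, 0x11);  /* ICW1 to slave                             */
        outb(0x21, 0x20);  /* ICW2: master IRQ0-7  -> vectors 0x20-0x27 */
        outb(0xA1, 0x28);  /* ICW2: slave  IRQ8-15 -> vectors 0x28-0x2F */
        outb(0x21, 0x04);  /* ICW3: a slave is cascaded on master line 2 */
        outb(0xA1, 0x02);  /* ICW3: slave's cascade identity is 2        */
        outb(0x21, 0x01);  /* ICW4: 8086 mode, master */
        outb(0xA1, 0x01);  /* ICW4: 8086 mode, slave  */
        outb(0x21, 0x00);  /* unmask all lines on the master */
        outb(0xA1, 0x00);  /* unmask all lines on the slave  */
    }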


trønderen
#14

Calin Negru wrote:

How does the Operating System perform I/O operations? Does it talk directly to the hard drive, or is the communication mediated by the motherboard?

You most definitely go via the motherboard! The OS talks to the chipset, which talks to some I/O bus - for a disk, it is typically SATA, USB or PCI-e. These present an abstracted view of the disk, making all disks look the same, except for obvious differences such as capacity. At the other (i.e. the disk) end is another piece of logic, mapping the abstract disk onto a concrete, specific one (i.e. mapping a logical address to a surface number, track number and sector number), and usually handling a disk cache. In modern disks, these tasks are so complex that they certainly require an embedded processor.

USB and PCI-e are both general protocols for various devices, not just disks. Sending a read request and receiving the data, or sending a write request and receiving an OK confirmation, is very much like sending a message on the internet: the software prepares a message header and message body according to specified standards, and passes it to the electronics. The response (status, possibly with data read) is received similarly to an internet message. Writing a disk block to a USB disk (regardless of the disk type - spinning, USB stick, flash disk, ...) or writing a document to a USB printer is done in similar ways, although the header fields may vary (but standards such as USB define message layouts for a lot of device classes, so that all devices of a certain class use similar fields).

All protocols are defined as a set of layers: the physical layer is always electronics - things like signal levels, bit rates, etc. The bit stream is split into blocks, 'frames', with well-defined markers, maybe length fields and maybe a checksum for each frame (that varies with the protocol); that is the link layer. There may be a processor doing this work, but externally, it appears as if it is hardware. Then, at the network layer, the data field in the frame is filled in with an address at the top, and usually a number of other management fields. For this, some sort of programmed logic (an embedded processor, or dedicated interface logic) is doing the job - but we are still outside the CPU/chipset. The chipset feeds the information to the interface logic, but doesn't address the USB or PCI-e frame as such, or the packet created (within the link frame) by the network layer. Both USB and PCI-e define such layered protocols.
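As a concrete sketch of such a request/response exchange - and of the original question about talking to a hard drive from one's own OS - the following reads one sector through the legacy ATA PIO register interface, the simplest disk path available to a hobby OS before it has a full SATA/AHCI driver. This is a minimal sketch assuming the standard legacy primary-controller ports 0x1F0-0x1F7 and a drive supporting 28-bit LBA; real drivers use interrupts or DMA rather than busy-polling, and they check the error bits.

    static inline unsigned char inb(unsigned short port)
    {
        unsigned char v;
        __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    static inline unsigned short inw(unsigned short port)
    {
        unsigned short v;
        __asm__ volatile ("inw %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    static inline void outb(unsigned short port, unsigned char val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    /* Read one 512-byte sector at the given logical block address. */
    void ata_read_sector(unsigned int lba, unsigned short buf[256])
    {
        outb(0x1F6, 0xE0 | ((lba >> 24) & 0x0F)); /* drive 0, LBA mode, LBA 24-27 */
        outb(0x1F2, 1);                  /* sector count: 1       */
        outb(0x1F3, lba & 0xFF);         /* LBA bits 0-7          */
        outb(0x1F4, (lba >> 8) & 0xFF);  /* LBA bits 8-15         */
        outb(0x1F5, (lba >> 16) & 0xFF); /* LBA bits 16-23        */
        outb(0x1F7, 0x20);               /* command: READ SECTORS */

        /* Poll status until BSY (bit 7) clears and DRQ (bit 3) sets. */
        while ((inb(0x1F7) & 0x88) != 0x08)
            ;
        for (int i = 0; i < 256; i++)    /* 256 words = 512 bytes */
            buf[i] = inw(0x1F0);         /* data register         */
    }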


Calin Negru
#15

I've had a good time reading that.


Calin Negru
#16

What does driver development look like? What language do they use? Do they use machine code? Physically, the exchange between the motherboard and an extension board is raw bits. Do they convert that to human-readable numbers when they write drivers? Is a machine instruction a 32-bit word/variable? And yet another question, about processor machine instructions: the processor's ALU deals with true numbers, not machine instructions. The type of operation (addition, subtraction, etc.) might be a machine instruction, but everything else is just numbers. There are other areas of the processor that are mostly machine-instruction oriented. Is that how it works?


Lost User
#17

If you go to https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html, you will find the complete details of the Intel architecture, including the assembler instruction set.


trønderen
#18

In principle, a driver is very much like any other method / function / routine or whatever you call it. It may be written in any language - for all practical purposes, any compiled language; for performance/size reasons, interpreted languages are obviously not suited. There is one requirement for bottom-level drivers: the language must, one way or the other, allow you to access interface registers, hardware status indicators, etc.

If you program in assembler (machine code), you have all facilities right at hand. In the old days, all drivers were written in assembler, without the ease of programming provided by medium/high level languages for data structures, flow control, etc. So from the 1970s, medium-level languages came into use, providing data and flow structures, plus mechanisms for accessing hardware - e.g. by allowing 'inline assembly': most commonly, a special marker at the start of a source code line told the compiler that this line is not a high-level statement but an assembly instruction. Usually, variable names in the high-level code are available as defined symbols for the assembler instructions, but you must know how to address them (e.g. on the stack, as a static location, etc.) in machine code.

The transition to medium/high level languages started in the late 1970s / early 1980s. Yet, for many architectures/OSes, with all the old drivers written in assembler, it was often difficult to introduce medium/high level languages for new drivers. Maybe there wasn't even a suitable language available for the given architecture/OS, of which there were plenty in those days. So for established environments, assembler prevailed for many years. I guess that some drivers are written in assembler even today.

If the language doesn't provide inline assembler or equivalent, you may write a tiny function in assembler to be called from a high-level language. Maybe the function body is a single instruction, but the 'red tape' for handling the call procedure makes up a dozen instructions. So this is not a very efficient solution, but maybe the only one available. Some compilers provide 'intrinsics': these are function-looking statements in the high-level language, but the compiler knows them and does not generate a function call, rather a single machine instruction (or possibly a small handful) right in the instruction flow generated from the surrounding code. E.g. in the MS C++ compiler for ARM, you can generate the vector/array instructions of the CPU by 'calling' an intrinsic with the name of the instruction.
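The inb/outb helpers in the sketches above are examples of the inline-assembly route. For the intrinsics route, a hedged illustration in C using the x86 SSE intrinsics from <xmmintrin.h> (chosen here for familiarity; the ARM NEON intrinsics the post mentions look much the same): each 'call' compiles to a single vector instruction, not a function call.

    #include <xmmintrin.h>

    /* Add four pairs of floats with one vector instruction. */
    void add4(const float *a, const float *b, float *out)
    {
        __m128 va = _mm_loadu_ps(a);    /* one MOVUPS load           */
        __m128 vb = _mm_loadu_ps(b);
        __m128 vs = _mm_add_ps(va, vb); /* one ADDPS: 4 adds at once */
        _mm_storeu_ps(out, vs);         /* one MOVUPS store          */
    }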


trønderen
#19

Calin Negru wrote:

Is a machine instruction a 32-bit word/variable?

This is something I have been fighting since my student days! :-) What resides in a computer isn't "really" numbers. Or characters. Or instructions. Saying that an alphabetic character "really is stored as a number inside the machine" is plain BS! RAM, registers and whatever else hold bit patterns, period. Not even zeroes and ones, in any numeric sense. It is charged/uncharged. High/low voltage. High/low current. On/off. Not numbers.

When a stream of bits comes out of the machine, we may have a convention for presenting e.g. a given sequence of bits as the character 'A'. That is a matter of presentation. Alternatively, we may present it as the decimal number 65. This is no more a "true" presentation than 'A'. Or a dark grey dot in a monochrome raster image. If we have agreed upon the semantics of a given byte as an 'A', claiming anything else is simply wrong. The only valid alternative is to treat the byte as an uninterpreted bit string. And that is not a sequence of numeric 0s and 1s, which is an (incorrect) interpretation.

A CPU may interpret a bit sequence as an instruction. Presumably, this is also the semantics intended by the compiler generating the bit sequence. The semantics is that of, say, the ALU adding two registers - the operation itself, not a description of it. You may (rightfully) say: "But I cannot do that operation when I read the code." So for readability reasons, we make an (incorrect) presentation, grouping bits by 4 and showing them as hexadecimal digits. We may go further, interpreting a number of bits as an index into a string table where we find the name of the operation. This doesn't change the bit sequence into a printable string; it remains a bit pattern, intended for the CPU's interpretation as a set of operations.

So it all is bit patterns. If we feed the bit patterns to a printer, we assume that the printer will interpret them as characters; hopefully that is correct. If we feed bit patterns to the CPU, we assume that it will interpret them as instructions. Usually, we keep those bit patterns that we intend to be interpreted as instructions by a CPU separate from those bit patterns we intend to be interpreted as characters, integer or real numbers, sound or images. That is mostly a matter of orderliness. And we cannot always keep a watertight bulkhead between those bit patterns intended for text and those intended for instructions.
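A short, self-contained C program can make the point tangible - a sketch assuming ASCII and a little-endian machine such as x86: the same four bytes come out as text, as a decimal number or as hexadecimal, depending purely on how we ask for them to be presented. (Fed to a 32-bit x86 CPU as code, the first three bytes would even decode as the instructions INC ECX / INC EDX / INC EBX.)

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned char bits[4] = { 0x41, 0x42, 0x43, 0x00 };
        unsigned int n;
        memcpy(&n, bits, sizeof n);               /* reinterpret, don't convert */

        printf("as text   : %s\n", (char *)bits); /* ABC        */
        printf("as decimal: %u\n", n);            /* 4407873    */
        printf("as hex    : 0x%08X\n", n);        /* 0x00434241 */
        return 0;
    }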


Calin Negru
#20

Thank you guys. I'm going to stop bugging you with my questions, at least for now.


trønderen
#21

Calin Negru wrote:

I'm going to stop bugging you with my questions, at least for now.

Don't worry! It is nice having someone ask questions, so that the responder is forced to straighten out things in his head in a way that makes them understandable. As long as you can handle somewhat lengthy answers, it is OK with me! :-)

When you get around to asking questions about networking, there is a risk that I might provide even longer and a lot more emotional answers. I am spending time nowadays straightening out why the Internet Protocol has p**ed me off for 30+ years! When I do that kind of thing, I often do it in the form of a lecturer or presenter who tries to explain ideas or principles, and must answer questions and objections from the audience. So I must get both the ideas and principles right, and the objections and 'smart' questions. That is really stimulating - trying to understand the good arguments for why IP, say, was created the way it was.

(It has been said that Albert Einstein, when he as a university professor got into some discussion with graduate students, and of course won it, sometimes told the defeated student: OK, now you take my position to defend, and I will take yours! ... and again, Einstein won the discussion. If it isn't true, it sure is a good lie!)

Religious freedom is the freedom to say that two plus two make five.


Calin Negru
#22

> This is something I have been fighting

I know it's an important problem. If you don't understand that, it's like having a car whose doors don't close properly.
