Code Project / The Lounge

A basic model for how a CPU works

24 Posts 10 Posters
Calin Negru (#1)

    From what I understand there are several types of components in a processor. In some components the data is persistent in time: it survives one or several CPU pulses. My guess is the registers fall in this category. Another category is the transistors; the data in the transistors is flushed when the oscillator briefly cuts the power off. In terms of a classical 8-bit processor (modern processors have all the bells and whistles, which makes them difficult to understand), it takes one current pulse to process one line of assembly code. When the transistor web is flooded with current, math takes place and the result ends up in the registers. When the next flood takes place, the current flows through the transistors in a pattern dictated by the next line of ASM code, picking up data saved in registers during previous floods. How accurate is this? I'm bringing an 8-bit processor into the discussion not because I'm sure how it works, but because it should be simple compared to the other ones.

  In reply to Calin Negru:

      PIEBALDconsult (#2)

      I recommend reading "Code" by Charles Petzold.

  In reply to Calin Negru:

      Jeremy Falcon (#3)

        I wish I could help answer, but the best I can do is verify that one CPU cycle does equate to one machine instruction / ASM mnemonic. How registers store values across cycles is beyond me, but an interesting idea to think about.

        Jeremy Falcon

  In reply to Calin Negru:

      obermd (#4)

          Way, way off when talking about modern CPUs.

  In reply to Jeremy Falcon:

      obermd (#5)

            Nope. With modern CPUs it's actually possible for an instruction to be "ignored" because the execution preprocessor, which runs in parallel to the actual instruction execution unit, realizes that the instruction is an effective NOP. An example of this would be PUSH AX, POP AX, which older processors would dutifully execute and newer processors would simply cut out of the execution stream. Also, the vast majority of processor instructions, even on a RISC machine, take more than one clock cycle.
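
      In software terms, the idea looks something like this. A toy peephole pass (my own analogy, not what the silicon actually does) that drops adjacent PUSH reg / POP reg pairs on the same register, the kind of effective-NOP sequence described above:

      ```python
      # Toy software analogy of front-end NOP elimination: scan the instruction
      # stream and cancel adjacent PUSH X / POP X pairs on the same register.
      def drop_push_pop_nops(instructions):
          out = []
          for ins in instructions:
              if (out
                      and out[-1].startswith("PUSH ")
                      and ins.startswith("POP ")
                      and out[-1].split()[1] == ins.split()[1]):
                  out.pop()          # cancel the PUSH; skip the POP
              else:
                  out.append(ins)
          return out

      stream = ["MOV AX, 5", "PUSH AX", "POP AX", "ADD AX, 1"]
      print(drop_push_pop_nops(stream))  # ['MOV AX, 5', 'ADD AX, 1']
      ```

      The real hardware works on decoded micro-ops rather than text, of course, but the effect is the same: the pair never reaches the execution units.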

  In reply to Calin Negru:

      englebart (#6)

              There is a constant DC charge across the whole chip, so the clock is not like the tide going in and out. The clock is more like the wave on top of a deep body of water. Start with digital logic and logic gates. There used to be some software “work benches” where you could wire up circuits.
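
      To make the "registers are continuously powered" point concrete, here is a small sketch (my own, in software) of a gated D latch built from NAND gates: while `enable` is low, the cross-coupled feedback loop holds the stored bit no matter what happens on the data input. No clock, no flushing; just gates that stay powered.

      ```python
      # Gated D latch from four NAND gates: the cross-coupled pair (q, q_bar)
      # holds the stored bit as long as the circuit is powered.
      def nand(a, b):
          return 0 if (a and b) else 1

      class DLatch:
          def __init__(self):
              self.q, self.q_bar = 0, 1  # feedback pair stores the bit

          def tick(self, d, enable):
              s = nand(d, enable)
              r = nand(nand(d, d), enable)   # nand(d, d) acts as NOT d
              for _ in range(4):             # settle the feedback loop
                  self.q = nand(s, self.q_bar)
                  self.q_bar = nand(r, self.q)
              return self.q

      latch = DLatch()
      latch.tick(d=1, enable=1)   # store a 1 while enabled
      latch.tick(d=0, enable=0)   # input changes, latch disabled...
      print(latch.q)              # ...but the stored 1 persists: prints 1
      ```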

  In reply to obermd:

      trønderen (#7)

                obermd wrote:

                Also, the vast majority of processor instructions, even on a RISC machine, take more than one clock cycle.

                But then again, thanks to pipelining, over time the average number of instructions executed per cycle is often close to one. Techniques such as 'hyperthreading' can even give you more than one instruction per cycle, on average, over time.

                One consequence of all these speedup techniques, from pipelining/hyperthreading to speculative execution and extensive operand prefetching, is that the cost of an interrupt goes up and up the fancier the CPUs become. Some of it is (semi)hidden, e.g. after interrupt handling you may have to redo a prefetch that was already done before the interrupt. The interrupt cost is more than the time from acknowledging the interrupt signal to the handler return; you also have to count the total delays to the interrupted instruction stream, where several instructions might be affected.

                At least the early ARM processors were much closer to a 'direct' clock-cycle-to-instruction relationship; their much tidier instruction set would allow it (and the gate count restrictions wouldn't allow all those fancy speedup techniques). Compare that to the x86 architecture and its derivatives, where you have to spend a million transistors on such functions to make the CPU fast enough. Since the early ARMs, that architecture has been extended and extended and extended and ... Now it is so multi-extended that I feel I can only scratch the surface of it. It probably, and hopefully, isn't (yet?) as messy as the x86 derivatives, but I am not one to tell. I fear the worst ...

                I have a nagging feeling that if it were possible to start completely from scratch, it would be possible to build equally fast processors that didn't take a few billion transistors to realize. (Yes, I know that a fair share of those few billion go into the quite regular CPU caches - but those are part of the speedup expenses, too!) ARM was sort of a fresh start, but that was long ago. Multicore 64-bit ARM CPUs are quite different from those meant to replace 8051s ... I haven't had time to look at RISC-V in detail yet; maybe that is another 'new start', not aimed at replacing the 8051, but aware of gigabyte RAM banks, 64-bit data and other modern requirements. I am crossing my fingers :-)

  In reply to obermd:

      Jeremy Falcon (#8)

                  As you can probably tell, my info on this is a bit dated. Good to know though.

                  obermd wrote:

                  Also, the vast majority of processor instructions, even on a RISC machine, take more than one clock cycle.

                  Do you mean for just one core? I was under the impression a single core still executes instructions one at a time.

                  Jeremy Falcon

  In reply to englebart:

      jmaida (#9)

                    Back in the day, the 1970s-80s, in grad school, we had to implement, in software, a simulation of the hardware that executes CPU instructions such as division and multiplication. Sort of microcode. RISC was the latest and greatest back then, so our exercise was for such a processor. It was a bear of a project at first, but we experienced compound learning, like compound interest. CPUs can be both complex and simple.

                    "A little time, a little trouble, your better day" Badfinger

  In reply to Calin Negru:

      BillWoodruff (#10)

                      All your assumptions are both correct and over-simplifications of what modern CPUs are and do. Maybe do some reading on the von Neumann architecture [^], and on what a Turing machine is [^].

                      «The mind is not a vessel to be filled but a fire to be kindled» Plutarch

  In reply to Calin Negru:

      Keith Barrow (#11)

                        Doesn't exactly answer your question, but Ben Eater has a series of videos where he builds an 8-bit computer on breadboard, starting with a chip: [Ben Eater Build an 8-bit computer from scratch](https://eater.net/8bit). It gives a lot of insight into the workings; there are a couple of specific videos below, but the whole thing is fascinating.

                        The first thing is that the CPU is powered: the microchip has a ground and a DC positive pin. If you think about it in terms of electronics, what is a "0" or a "1"? 0 is easy, it's ground voltage, but 1 needs to be a voltage close to a reference, which the +ve pin provides. This DC reference voltage also provides the power that keeps the values in the registers. There isn't any flushing; the clock (which is just a square wave) just gets the CPU to cycle.

                        [“Hello, world” from scratch on a 6502 — Part 1 - YouTube](https://www.youtube.com/watch?v=LnzuMJLZRdU)

                        This video tells you how CPUs actually execute machine code: [How do CPUs read machine code? — 6502 part 2 - YouTube](https://www.youtube.com/watch?v=yl8vPW5hydQ)

                        Hope this helps - the videos are right on the edge of electronics / programming. Bill Woodruff's suggestion about the von Neumann architecture and Turing machines is excellent; it explains how we got to 8-bit chips.

                        KeithBarrow.net[^] - It might not be very good, but at least it is free!
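
                        The fetch-decode-execute loop those videos walk through can be sketched in a few lines. This is a toy with an invented three-opcode ISA (not the 6502); each loop iteration plays the role of the machine cycles that fetch one instruction and carry it out:

                        ```python
                        # Minimal fetch-decode-execute loop over an invented toy ISA.
                        def run(program, a=0):
                            memory = list(program)   # von Neumann style: code lives in memory
                            pc = 0                   # program counter register
                            while True:
                                op, arg = memory[pc]          # fetch
                                pc += 1
                                if op == "LDA":               # decode + execute
                                    a = arg                   # load accumulator
                                elif op == "ADD":
                                    a += arg                  # add to accumulator
                                elif op == "HLT":
                                    return a                  # stop, return result

                        print(run([("LDA", 5), ("ADD", 3), ("HLT", 0)]))  # prints 8
                        ```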

  In reply to BillWoodruff:

      Calin Negru (#12)

                          Thanks Bill

  In reply to obermd:

      Calin Negru (#13)

                            > even on a RISC machine take more than one clock cycle

                            Which means that either the data is split into two (or more) pieces and the pieces pass through the ALU one at a time, or, if the type of operation requires it, the result from one pass gets placed at the ALU entry point and the data receives one more pass.

  In reply to obermd:

      Calin Negru (#14)

                              like I thought

  In reply to PIEBALDconsult:

      Calin Negru (#15)

                                Thank you for your tip

  In reply to jmaida:

      englebart (#16)

                                  Circa 87-88 we used a software package to do the same. We used it to build classics like the Mark I, ENIAC, etc. Then you had to write a small assembly program to run on it. Hats off to some of the people who made real programs on the real hardware with a handful of op codes. I think it took about 30-40 op codes of self-modifying code to sum an array.
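
                                  The self-modifying trick is worth spelling out. On machines without index registers, the only way to walk an array was for the loop to rewrite the address field of its own load instruction each pass. A toy illustration (invented mini-machine, not the Mark I's actual code):

                                  ```python
                                  # Summing an array with "self-modifying code": memory[0] acts as
                                  # the address field of the loop's one LOAD instruction, and the
                                  # loop bumps it each iteration -- the code rewriting itself.
                                  def run_self_modifying(memory, data_start, n):
                                      acc = 0
                                      memory[0] = data_start        # write the initial address field
                                      for _ in range(n):
                                          acc += memory[memory[0]]  # LOAD from the address field
                                          memory[0] += 1            # self-modify: point at next element
                                      return acc

                                  mem = [0, 0, 0, 10, 20, 30]       # addresses 3..5 hold the array
                                  print(run_self_modifying(mem, data_start=3, n=3))   # prints 60
                                  ```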

  In reply to englebart:

      jmaida (#17)

                                    Yeah, we also had to write a simple compiler to create the homemade assembly. The course required 2 semesters. Loved it though. Learned so much.

                                    "A little time, a little trouble, your better day" Badfinger

  In reply to jmaida:

      kalberts (#18)

                                      In my student days - this was in 1979 - one of the lab exercises was with an AMD 2901 Evaluation Kit. The 2901 was a 4-bit "bit slice" ALU, with carry in and out, so you could hook two of them together for an 8-bit machine, four for a 16-bit or eight for a 32-bit. We had only a single one.

                                      With the ALU came a memory for 64 words of 16-bit microcode: flip 16 switches, press Deposit, flip again, press Deposit ... 64 times to fill the entire microcode memory. We hooked up each of the 16 bits to the control lines for the ALU: load accumulator from bus, dump accumulator to bus ... actually, today I have only a vague memory of what the control signals were. The 'sequencer' was a separate chip that selected one word from microcode RAM, transferring it to the ALU control inputs. It had a microcode address counter; one of the control signals incremented this counter. We did succeed in microcoding an instruction for reading four switches (the "bus") as data, adding another 4-bit value, and displaying the result on 4 LEDs (plus one for the carry line).

                                      This was an exceptionally valuable lab exercise for learning what an (extremely simplified) CPU is like in its very basic mechanisms. If 2901 Evaluation Kits were still on the market, I would recommend one to anyone who wants true hands-on experience with a CPU. (If you happen to find one on eBay: be prepared to do some thorough studying of the ALU before trying to microcode it; microcoding is not to be done on intuition!)

                                      Of course, anything like the 2901 kit can teach you only the basic techniques of simple, unsophisticated computers, the way they were built in the old days. I see other people refer to 'modern' CPUs as if they have little to do with what an evaluation kit can teach you - but you can immediately forget jumping directly onto a 'modern' CPU. It is so complex, and contains so many fancy tricks for speeding it up, that you will be completely blown away.

                                      Better to start with something that you have a chance to really understand, and then add the fancy techniques one by one. If you get as far as thoroughly understanding even a third of them, you will be qualified as Chief Engineer at AMD or Intel :-) Or, to phrase it differently: don't expect to understand the fancy techniques. You may get as far as understanding what they want to achieve, but don't expect to understand how.

  In reply to obermd:

      kalberts (#19)

                                        obermd wrote:

                                        An example of this would be PUSH AX, POP AX, which older processors would dutifully execute and newer processors would simply cut out of the execution stream.

                                        I wasn't aware of this optimization. What surprises me, though, is that compilers let anything like this through their optimization stages at the code-generation level. More specifically: that code generators let such things through so frequently that it justifies CPU mechanisms to compensate for the lack of compile-time optimization. I guess it takes quite a handful of gates to analyze the instruction stream to identify such no-op sequences and pick them out of the instruction stream. I was working (at instruction level, not hardware) on a machine whose first version had a few hardware optimizations for special cases that were removed in later versions: the special cases occurred so rarely that, on a given gate budget (which you always have when designing a CPU), you could gain a much larger general speedup by spending your gates on other parts. Do you have specific examples of CPUs that eliminate 'no-op sequences' like the one you describe? (Preferably with a link to documentation.)

  In reply to Calin Negru:

      kalberts (#20)

                                          I guess that obermd is essentially referring to various effects of pipelining. Each individual instruction may spend 5-6 (or more) clock cycles total in the various stages of processing: instruction fetch, instruction decode, operand fetch, *instruction execution*, storing results, ... The *instruction execution* (e.g. the add) is done in a single clock cycle, but that is only one part of the processing. In parallel with this instruction doing its add (say), the previous instruction is stuffing away its results, the next instruction is having its operands fetched, the one two steps behind is being decoded and the one three steps behind is being fetched. So the CPU may be doing one add (say) per cycle, and in the same cycle one of each of the other operations, on different instructions. For any given instruction, though, it takes a number of cycles to have all the different processing steps done.
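
                                          A quick back-of-the-envelope sketch of that latency-versus-throughput distinction, assuming an idealized 5-stage pipeline with no stalls (my simplification, of course):

                                          ```python
                                          # Idealized pipeline arithmetic: every instruction takes 5 cycles
                                          # of latency, yet steady-state throughput approaches 1 per cycle.
                                          STAGES = ["fetch", "decode", "operand fetch", "execute", "store"]

                                          def pipeline_timing(n_instructions):
                                              # with a full pipeline, instruction i finishes at cycle i + 5
                                              total_cycles = n_instructions + len(STAGES) - 1
                                              latency_per_instruction = len(STAGES)
                                              throughput = n_instructions / total_cycles
                                              return latency_per_instruction, total_cycles, throughput

                                          lat, cycles, ipc = pipeline_timing(1000)
                                          print(lat)             # 5 cycles from fetch to store, per instruction
                                          print(cycles)          # 1004 cycles total for 1000 instructions
                                          print(round(ipc, 3))   # 0.996 instructions per cycle on average
                                          ```

                                          Stalls, branches and cache misses push the real number down, which is exactly where the speculative tricks discussed above come in.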
