Utilization

dandy72 wrote:

    Sure. Cache hit is a thing, and so is cache miss. It doesn't mean we shouldn't try to cache anything at all. Just that the algorithm used to decide what to cache vs what to let go of is very much something that's still in development. I'm not aware of any magic bullet.

honey the codewitch (#21) replied:

There isn't really. It's all highly situational.

For example, I do dithering and automatic color matching in my graphics library so that I can load a full-color JPG onto, say, a 7-color e-paper display. It will match any red it gets with the nearest red the e-paper can support, and then, if possible, dither it with another color to get it closer. That takes time, so I cache the color matching and dithering results in a hash table as I load the page. The hit rate is extremely high, because it's very rare that a pixel of a particular color appears only once. That's close to ideal. The cache is discarded all at once when the frame is rendered, so in that case the eviction policy is also easy to determine.

Naturally, for a web site things look much different, and the considerations change. Your cache-hit algorithm probably won't be as ideal as my example above, simply because so few real workloads closely match a general algorithm's design. At the end of the day, though, you don't need a silver bullet to make caching worthwhile, luckily for us - you just need to win more than you lose once all the chips are counted.
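For readers who want to see the shape of that idea, here is a minimal, self-contained sketch of a per-frame color-match cache. It is not the gfx library's actual code: the palette values, the nearest_color() matcher, and the FrameColorCache type are all hypothetical, and real dithering is far more involved. The point is only that an expensive per-pixel computation is memoized for the duration of one frame and then thrown away.

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical 7-color e-paper palette (RGB888). Real palettes, color spaces,
// and dithering are more involved; this only illustrates caching an expensive
// per-pixel match for the lifetime of one frame.
static const uint32_t kPalette[7] = {
    0x000000, 0xFFFFFF, 0x00FF00, 0x0000FF, 0xFF0000, 0xFFFF00, 0xFF8000
};

// The expensive step: nearest palette entry by squared RGB distance.
static uint32_t nearest_color(uint32_t rgb) {
    auto chan = [](uint32_t c, int shift) { return static_cast<long>((c >> shift) & 0xFF); };
    uint32_t best = kPalette[0];
    long best_d = -1;
    for (uint32_t p : kPalette) {
        long dr = chan(rgb, 16) - chan(p, 16);
        long dg = chan(rgb, 8)  - chan(p, 8);
        long db = chan(rgb, 0)  - chan(p, 0);
        long d = dr * dr + dg * dg + db * db;
        if (best_d < 0 || d < best_d) { best_d = d; best = p; }
    }
    return best;
}

// Per-frame cache: source color -> matched palette color. The hit rate is high
// because most source colors repeat many times within a single image.
struct FrameColorCache {
    std::unordered_map<uint32_t, uint32_t> map;
    uint32_t match(uint32_t rgb) {
        auto it = map.find(rgb);
        if (it != map.end()) return it->second;  // cache hit
        uint32_t m = nearest_color(rgb);         // cache miss: do the work once
        map.emplace(rgb, m);
        return m;
    }
    void clear() { map.clear(); }  // discard everything once the frame is rendered
};
```

Because most images reuse a relatively small set of source colors, the map stays small and the hit rate stays high, which is what makes the cache pay for itself.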

    Check out my IoT graphics library here: https://honeythecodewitch.com/gfx And my IoT UI/User Experience library here: https://honeythecodewitch.com/uix


dandy72 (#22) replied:

      honey the codewitch wrote:

      At the end of the day though, you don't need a silver bullet to make it worthwhile, luckily for us - you just need to win more than you lose, once all the chips are counted.

      This. So much this.

honey the codewitch wrote (the original post):

Edit: To be clear, I'm talking about user-facing machines rather than server or embedded, and a hypothetical ideal. In practice CPUs need about 10% off the top to keep their scheduler working, for example, and there are a lot of details I'm glossing over in this post, so it would be a good idea to read the comments before replying. There has been a lot of ground covered since.

When your CPU core(s) aren't performing tasks, they are idle hands. When your RAM is not allocated, it's doing no useful work. (Still drawing power though!) While your I/O was idle, it could have been preloading something for you.

I see people complain about resource utilization in modern applications, and I can't help but think of the above. RAM does not work like non-volatile storage, where it's best to keep some free space available. Frankly, in an ideal world, your RAM allocation would always be 100%. Assuming your machine is performing any work at all (and not just idling), ideally it would do so utilizing the entire CPU, so it could complete quickly. Assuming you're going to be using your machine in the near future, your I/O may be sitting idle, but ideally it would be preloading things you were planning to use, so they could launch faster.

My point is this: utilization is a good thing, in many if not most cases. What's that old saw? Idle hands are the devil's playground. Your computer is like that. I like to see my CPU work hard when it works at all. I like to see my RAM utilization be *at least* half even at idle. I like to see my storage ticking away a bit in the background, doing its lazy writes. This means my computer isn't wasting my time. Just sayin'


Marc Clifton (#23) replied:

        honey the codewitch wrote:

        When your CPU core(s) aren't performing tasks, they are idle hands. When your RAM is not allocated, it's doing no useful work. (Still drawing power though!) While your I/O was idle, it could have been preloading something for you.

        Sounds like the wife complaining about her hubby. :laugh:

        Latest Articles:
A Lightweight Thread Safe In-Memory Keyed Generic Cache Collection Service; A Dynamic Where Implementation for Entity Framework


obermd (#24), replying to the original post:

          Anything above 80-85% utilization will quickly start thrashing that particular resource. Up to that point you're spot on.


honey the codewitch (#25), replying to obermd:

I wouldn't say *anything*, but I do hear you. Certainly thrashing is a concern with something like virtual memory, but I'm not even necessarily talking about vmem here.

With the memory example, my point was simply about a hypothetical ideal. It takes the same amount of power to run 32GB of allocated memory as it does 32GB of unallocated memory, so if you're not using that memory for something, it's in effect being wasted. In the standard case this would be an OS responsibility, and if an OS wanted to approach that ideal, it might use something like an internal ramdisk to preload commonly used apps and data, for example. May as well - it's not being used for anything else, and if you run out, you just start dumping the ramdisk, and only after it's gone do you start going to vmem. Something like that. It's just an idea; there are a million ways to use RAM.

I/O (to storage) is really where your thrashing occurs, and historically there was literal thrashing due to the moving parts involved, even though that's so often not the case anymore. But again, in an ideal "typical" situation an OS would manage that, run any preloads at idle time, and make them lower priority than anything else. In effect, as long as everything you're doing on top of idling is basically "disposable", thrashing won't be much of a concern.

The CPU is a bit of an animal, in that you'll need about 10% of it to run the scheduler effectively, and without that, everything else falls apart. So yeah, with a CPU it's more like 80-90% utilization, although 100% is acceptable for bursts. In any case, I worded my post carefully to say that the CPU should be utilized when it has something to do. It's not the case that I'd necessarily want to "find" things to do with it the way I would with RAM. It's that when it does need to do something, it expands like a lil puffer fish and uses all of its threading power toward a task - again, ideal scenario. The reason for the discrepancy here versus, say, RAM is power: RAM draws the same power regardless, while a CPU's power varies with the task, so it should be allowed to idle if that makes sense.

I hope this clears things up rather than making it worse. :laugh:
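As a thought experiment, here is a rough sketch of the "preload at idle, lowest priority" idea. It is purely illustrative and the file paths are made up: a real OS does this below the process level (standby lists, prefetchers), and a real implementation would also drop the worker's CPU and I/O priority through OS-specific APIs, which is omitted here.

```cpp
#include <fstream>
#include <string>
#include <thread>
#include <vector>

// Illustrative only: stream likely-to-be-used files through a scratch buffer so
// the OS page cache is warm. The data itself is discarded; the useful side
// effect is that otherwise-idle RAM now holds these pages, and the OS can evict
// them instantly if something else needs the memory.
void preload_files(const std::vector<std::string>& paths) {
    std::vector<char> buf(1 << 20);  // 1 MiB scratch buffer
    for (const auto& p : paths) {
        std::ifstream f(p, std::ios::binary);
        while (f.read(buf.data(), static_cast<std::streamsize>(buf.size())) || f.gcount() > 0) {
            // keep reading until EOF; data is intentionally thrown away
        }
    }
}

int main() {
    // Hypothetical list of commonly launched apps/data.
    std::vector<std::string> likely = {"/usr/bin/editor", "/home/me/project.db"};
    std::thread warmup(preload_files, likely);

    // ... foreground work would happen here, unaffected if the warm-up thread
    // runs at background priority ...

    warmup.join();
    return 0;
}
```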



trønderen (#26), replying to the original post:

Whenever you experience a bottleneck, inspect your system for other, significantly under-utilized resources, and be honest with yourself: OK, I wasted too many resources on that, and on that! I could have gotten away with a lot less on those parts.

honey the codewitch wrote:

                Firmware has other considerations. I'm talking PCs primarily, user machines. If those resources are queued up and preallocated they are that much *more* ready to use than if you suddenly need gigs of RAM waiting in the wings. This is precisely why modern apps, and frameworks (like .NET) do it.


User 13269747 (#27) replied:

                Quote:

                I'm talking PCs primarily, user machines.

                In this hypothetical ideal world where everything is at 100% utilisation on a user's PC, anything the user does (like moving the mouse 2mm to the left) will have to wait for the utilisation to drop before that action can be completed. Even in this hypothetical world scenario, it still seems like a bad idea to have everything at 100% utilisation: users don't want a 15s latency each time they move the mouse. (In the real world, of course, it's worse - CPUs and cores scale their power drawn with their load - increasing the load to 100% makes them draw more power. In the real world, it makes sense to have as little CPU utilisation as possible, and to leave as much RAM as possible for unpredictable overhead.)


honey the codewitch (#28), replying to User 13269747:

                  To be clear I did not say the CPU should *stay* at 100%. I said when it's performing work, it should use it all. And yes, realistically you want about 10% off the top for the scheduler to work effectively, if I'm being technical.



Peter Adam (#29), replying to the original post:

When my AMD A4 / 8 GB RAM / 128 GB SSD rig suffered 100% CPU utilization and 99% RAM utilization for minutes (SQL, some Python) while my boss stood behind me, he asked whether I urgently needed a new box. I said no, because the company owners are happy now: we use every bit of the kit they provided us, and no money is invested in idle, quickly aging tech.


honey the codewitch (#30), replying to trønderen:

Sure, absolutely. Getting a utilization profile can uncover a lot about your system. I actually had fun sourcing my PC components to be well matched, so that I didn't experience unavoidable bottlenecks in what I use it for. But it's more than that, of course - that doesn't even cover the software angle. Why is my zip decompression only using 30% of my I/O? Is my CPU too slow? That sort of thing. It's interesting, too.


honey the codewitch wrote:

                        That's actually in theory a good idea. I wonder why they stopped allocating all of it.


jochance (#31) replied:

I think they just changed it so it doesn't appear that way anymore. It still predictively loads things into RAM, but the presentation is different, so it doesn't appear that the RAM is in use. I think they changed that because people were like "WTF MSFT WHY USE ALL MY RAM?!" If the prediction turns out to be wrong, it takes almost nothing for the OS to chuck it and use the memory for whatever is actually needed.


Choroid (#32), replying to the original post:

At one time I was interested in how a computer used resources. The learning curve was too great with no background in electrical engineering, so I gave up and was just happy if it worked. BUT I remember reading about "bank switching". Is that still a concept used in a personal computer today? And if so, is it good design for managing system resources on personal computers?

                          Quote:

                          What's that old saw? Idle hands are the devil's playground. Your computer is like that.

Why ask a machine to run full tilt if it only needs 50% of its resources to do the job? If the job gets bigger, or another job needs resources, there is a reserve. If this is an illogical thought process, I'm happy to hear why I should have stayed out of this conversation.


honey the codewitch (#33), replying to Choroid:

                            Because there is usually more work a computer *could* be doing.



wapiti64 (#34), replying to honey the codewitch (#25):

RAM does not use the same power regardless of state. For example, see the video below at about the 8:35 mark for measured results: We're modding a DDR5 Module and measure the Power Consumption - YouTube


honey the codewitch (#35), replying to wapiti64:

                                Maybe I'm misinformed, or maybe DDR5 does something previous RAM doesn't to save power. Neither would surprise me.



wapiti64 (#36), replying to honey the codewitch:

I am pretty sure all RAM requires more energy to read/write than to just be powered up, similar to an SSD or NVMe drive.


honey the codewitch (#37), replying to wapiti64:

My understanding is that DRAM needs a constant, periodic refresh to maintain its data (see Memory refresh - Wikipedia), so it's not the act of reading or writing that draws the power, as with an NVMe drive. It works kind of like an LCD, in that the charge is sent to the panel over and over with whatever the data is at that point. In effect, the writes are always happening regardless, at least to my understanding.



trønderen (#38), replying to Choroid:

                                      Choroid wrote:

                                      BUT I remember reading about "Bank Switching" So is this still a concept used in a personal computer today?

No. 32- or 64-bit logical addressing removed the need for it.

The problem in the old PCs was the addressing range. Most single-chip CPUs were 8-bit, with 16-bit addresses, so they could only handle 2^16 = 64 KiB of RAM; there was no way to address any more. Then came the LIM standard for banking: you could set up your system with, say, 48 KiB of plain RAM and have the upper 16 KiB of the address space handled by a LIM card, which provided several 16 KiB blocks ("banks") of physical RAM, only one of which could be used at a time. You had to tell the LIM card (through I/O instructions) which of the banks to enable.

You would usually put code, not data, in the banked part. In the un-banked (lower 48 KiB) part, you put a stub that is called from other places. This stub tells the LIM card to enable the right bank, where the actual code is placed, and jumps to the code address in that bank. If this function called functions in other banks, it would have to call via an unbanked stub to switch to the right bank. Upon return, the previous bank had to be enabled again to let the caller continue. It did not lead to blazingly fast performance.

Catching data access through a similar stub is not that easy. In an OO world, you could have an object in unbanked RAM with a huge data part in banked RAM, the object knowing the bank number and address and channeling all access through accessor functions (set, get), but OO was little known in the PC world at that time. I never saw anyone doing this.

LIM was a PC concept. On larger machines you could see memory overlays, which were also based on routine stubs, but the stub would read a code block from disk into RAM, overwriting anything that was there. You had much more flexibility, but PC disks were so slow in those days that it would have been next to unusable.

You could say that bank switching is a relative of paging mechanisms. I know of at least one family of 16-bit minis that provided 64 Ki words (128 KiB) of address space to each user but could handle up to 32 MiB of physical RAM. So the 64 terminals hooked up to the mini could each have their full address space resident in RAM, with no paging. When the CPU switched its attention from one user to another, it replaced the page table contents to point to the new user's pages - not that different from telling a LIM card which bank to enable.
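To make the stub mechanism concrete, here is a toy simulation of the idea in ordinary C++. Nothing in it is real LIM/EMS code (that went through INT 67h services or board-specific I/O ports); the bank numbers and the select_bank() function are stand-ins. The point is the calling convention: every cross-bank call goes through an unbanked stub that maps the callee's bank in, makes the call, and restores the caller's bank on return.

```cpp
#include <cstdio>
#include <functional>

// Which bank is currently mapped into the shared 16 KiB window.
static int g_active_bank = 0;

// Stand-in for the real hardware step (an OUT instruction or EMS service call).
static void select_bank(int bank) {
    g_active_bank = bank;
}

// A routine that "lives" in a particular bank.
struct BankedRoutine {
    int bank;                    // bank the code resides in
    std::function<void()> body;  // the routine itself
};

// The unbanked stub: every cross-bank call funnels through something like this.
static void call_banked(const BankedRoutine& r) {
    int caller_bank = g_active_bank;
    select_bank(r.bank);       // map the callee's bank into the window
    r.body();                  // "jump" to the code inside the window
    select_bank(caller_bank);  // restore the caller's bank before returning
}

int main() {
    BankedRoutine hello{2, [] { std::printf("running with bank %d mapped in\n", g_active_bank); }};
    call_banked(hello);  // runs with bank 2 mapped, then bank 0 is restored
    return 0;
}
```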


trønderen (#39), replying to honey the codewitch:

                                        The refresh occurs at a far lower frequency than the ordinary reading/writing. You can do several hundred r/w accesses for each refresh.
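To put rough numbers on that (typical JEDEC DDR4 figures, used here only for illustration): a refresh command is issued about every tREFI ≈ 7.8 µs and ties the device up for roughly tRFC ≈ 350 ns on an 8 Gb die, so refresh accounts for only about 350 / 7800 ≈ 4.5% of the device's time. The remaining ~95% is available for ordinary reads and writes, which is where the activity-dependent power goes.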


David On Life (#40), replying to jochance:

That's generally correct. My system shows 5 GB of available RAM right now; however, the majority of that should be freed pages which point to disk files (including application code), so that if the file (application) is (re)opened, it doesn't need to be read from disk. A small portion (128 MB) is zeroed pages, just enough that when an application asks for a blank memory page it can be delivered instantly without waiting to zero it.

Windows also has a mechanism for pre-loading pages it expects to need shortly (mostly used during boot, which is more predictable), and .NET has similar mechanisms for pre-loading code before it's needed (although it typically requires running optimization tooling to build the pre-loading list, which major apps like VS do but many don't).

There are a number of other apps that have 'fast load' setups to pre-load the memory their application uses; however, I often find them annoying, as they may pre-load their application even though I have no intention of using it that day...
