Code Project
Utilization

The Lounge
Tags: graphics, design, asp-net, com, sysadmin
46 Posts 20 Posters 3 Views 1 Watching
wapiti64 wrote:

RAM does not use the same power regardless of state. For example, see below at about the 8:35 mark for results: We're modding a DDR5 Module and measure the Power Consumption - YouTube

honey the codewitch wrote (#35):

    Maybe I'm misinformed, or maybe DDR5 does something previous RAM doesn't to save power. Neither would surprise me.

    Check out my IoT graphics library here: https://honeythecodewitch.com/gfx And my IoT UI/User Experience library here: https://honeythecodewitch.com/uix

honey the codewitch wrote:

Maybe I'm misinformed, or maybe DDR5 does something previous RAM doesn't to save power. Neither would surprise me.

wapiti64 wrote (#36):

I am pretty sure all RAM requires more energy to read/write than to simply stay powered up, similar to an SSD or NVMe drive.

wapiti64 wrote:

I am pretty sure all RAM requires more energy to read/write than to simply stay powered up, similar to an SSD or NVMe drive.

honey the codewitch wrote (#37):

My understanding is that DRAM needs constant periodic refresh to maintain its data (Memory refresh - Wikipedia). So it's not the act of reading or writing, like an NVMe. It works a bit like an LCD, in that the charge is sent to the panel over and over with whatever the data is at that point. In effect, the writes are always happening regardless, at least to my understanding.


Choroid wrote:

At one time I was interested in how a computer used resources. The learning curve was too great with no background in electrical engineering, so I gave up and was just happy if it worked. BUT I remember reading about "Bank Switching". Is this still a concept used in a personal computer today? And if so, is it good design for managing system resources on personal computers?

          Quote:

          What's that old saw? Idle hands are the devil's playground. Your computer is like that.

Why ask a machine to run full tilt if it only needs 50% of its resources to do the job? If the job gets bigger, or another job needs resources, there is a reserve. If this is an illogical thought process, I'm happy to hear why I should have stayed out of this conversation.

trønderen wrote (#38):

          Choroid wrote:

          BUT I remember reading about "Bank Switching" So is this still a concept used in a personal computer today?

No. 32 or 64 bit logical addressing removed the need for it. The problem in the old PCs was the addressing range. Most single-chip CPUs were 8-bit, with 16-bit addresses, so they could only handle 2**16 = 64 Ki bytes of RAM; there was no way to identify any more.

Then came the LIM standard for banking: you could set up your system with, say, 48 Ki of plain RAM and the upper 16 Ki of the address space handled by a LIM card, providing several 16 Ki blocks (or "banks") of physical RAM, of which you could use only one at a time. You had to tell the LIM card (through I/O instructions) which of the banks to enable. You would usually put code, not data, in the banked part: in the un-banked (lower 48 Ki) part, you put a stub that is called from other places. This stub tells the LIM card to enable the right bank where the actual code is placed, and jumps to the code address in that bank. If this function called functions in other banks, it would have to call via an unbanked stub to switch to the right bank. Upon return, the previous bank had to be enabled again to let the caller continue. It did not lead to blazingly fast performance.

Catching data access through a similar stub is not that easy. In an OO world, you could have an object in unbanked RAM with its huge data part in banked RAM, the object knowing the bank number and address and channeling all access to it through accessor functions (set, get), but OO was little known in the PC world at that time. I never saw anyone doing this.

LIM was a PC concept. On larger machines, you could see memory overlays, which were also based on routine stubs, but the stub would read a code block from disk into RAM, overwriting anything that was there. You had much more flexibility, but PC disks were so slow in those days that it would have been next to unusable.

You could say that bank switching is a relative of paging mechanisms. I know of at least one family of 16-bit minis providing 64 Ki words (128 Ki bytes) of address space to each user, but that could handle up to 32 Mi bytes of physical RAM. So the 64 terminals hooked up to the mini could each have their full address space resident in RAM, with no paging. When the CPU switched its attention from one user to another, it replaced the page table contents to point to the new user's pages - not that different from telling the LIM card which bank to enable.
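The stub-and-bank scheme described above can be sketched as a toy simulation. Nothing here is a real LIM/EMS API; the bank count, the `select_bank` latch, and the `call_banked` stub are all made up for illustration:

```python
# Toy model of bank switching: a 64 KiB address space where the upper 16 KiB
# is a window into one of several physical banks. Only the latch decides
# which bank the window shows, like telling a LIM card which bank to enable.
UNBANKED_SIZE = 48 * 1024      # lower 48 KiB: always-visible plain RAM
BANK_SIZE = 16 * 1024          # size of the banked window

unbanked = bytearray(UNBANKED_SIZE)
banks = [bytearray(BANK_SIZE) for _ in range(4)]   # 4 banks of physical RAM
current_bank = 0               # the "latch" normally set via I/O instructions

def select_bank(n):
    """Stand-in for the I/O write that tells the card which bank to map."""
    global current_bank
    current_bank = n

def read(addr):
    """A CPU read: low addresses hit plain RAM, high ones the enabled bank."""
    if addr < UNBANKED_SIZE:
        return unbanked[addr]
    return banks[current_bank][addr - UNBANKED_SIZE]

def call_banked(bank, routine, arg):
    """The unbanked stub: enable the right bank, 'jump' to the routine,
    then restore the caller's bank so the caller can continue."""
    prev = current_bank
    select_bank(bank)
    try:
        return routine(arg)
    finally:
        select_bank(prev)      # the previous bank must be live again

# The same address shows different contents depending on the enabled bank:
banks[0][0] = 0xAA
banks[2][0] = 0xBB
select_bank(0)
assert read(UNBANKED_SIZE) == 0xAA
select_bank(2)
assert read(UNBANKED_SIZE) == 0xBB
```

The `try`/`finally` in the stub mirrors the point about returns: the previous bank has to be re-enabled even if the banked routine fails, or the caller resumes with the wrong code mapped in.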

honey the codewitch wrote:

My understanding is that DRAM needs constant periodic refresh to maintain its data (Memory refresh - Wikipedia). So it's not the act of reading or writing, like an NVMe. It works a bit like an LCD, in that the charge is sent to the panel over and over with whatever the data is at that point. In effect, the writes are always happening regardless, at least to my understanding.

trønderen wrote (#39):

            The refresh occurs at a far lower frequency than the ordinary reading/writing. You can do several hundred r/w accesses for each refresh.
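That ratio can be sanity-checked with back-of-the-envelope arithmetic. The figures below (64 ms retention window, 8192 refresh commands per window, ~350 ns per refresh command, ~10 ns per random access) are typical DDR4-class values assumed for illustration, not numbers from this thread:

```python
# Back-of-the-envelope DRAM refresh arithmetic (assumed DDR4-class figures).
RETENTION_MS = 64.0    # every row must be refreshed within this window
REFRESH_CMDS = 8192    # refresh commands spread across that window
T_RFC_NS = 350.0       # time one refresh command occupies the device (assumed)
T_ACCESS_NS = 10.0     # rough cost of one random r/w access (assumed)

# Average interval between two refresh commands (tREFI):
t_refi_ns = RETENTION_MS * 1e6 / REFRESH_CMDS        # 7812.5 ns

# How many ordinary accesses fit between two refreshes:
accesses_between = (t_refi_ns - T_RFC_NS) / T_ACCESS_NS

# Fraction of device time spent refreshing:
overhead = T_RFC_NS / t_refi_ns

print(f"tREFI = {t_refi_ns / 1000:.2f} us")              # ~7.81 us
print(f"~{accesses_between:.0f} accesses per refresh")   # several hundred
print(f"refresh overhead = {overhead:.1%}")              # a few percent
```

With these assumed numbers, roughly 750 accesses fit between consecutive refreshes and refresh consumes under 5% of device time, which lines up with "several hundred r/w accesses for each refresh".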

jochance wrote:

I think they just changed it so it doesn't appear that way anymore. It still predictively loads things into RAM but the presentation is different, so it doesn't appear that RAM is used. I think they changed that because people were like "WTF MSFT WHY USE ALL MY RAM?!" It takes almost nothing for the OS to chuck it and use it for whatever is actually needed instead of what it predicted, if it got it wrong.

David On Life wrote (#40):

That's generally correct. My system shows 5 GB of available RAM right now; however, the majority of that should be freed pages which point to disk files (including application code), so that if the file (application) is (re)opened, it doesn't need to be read from disk. A small portion (128 MB) is zeroed pages, just enough that when an application asks for a blank memory page it can be delivered instantly without waiting to zero it.

Windows also has a mechanism for pre-loading pages it expects to need shortly (mostly used during boot, which is more predictable), and .NET has similar mechanisms for pre-loading code before it's needed (although it typically requires running optimization tooling to build the pre-loading list, which major apps like VS do but many don't). There are a number of other apps that have 'fast load' setups to pre-load the memory their application uses; however, I often find them annoying, as they may pre-load their application even though I have no intention of using it that day...
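The zeroed-page idea described above can be modeled as a small pre-zeroed pool that the allocator drains and an idle-time task refills. This is a toy sketch of the concept, not Windows' actual memory manager; the pool size and function names are invented:

```python
from collections import deque

PAGE_SIZE = 4096
ZERO_POOL_TARGET = 32      # keep this many pre-zeroed pages ready (toy number)

# Pool of pages that have already been zeroed during idle time.
zero_pool = deque(bytearray(PAGE_SIZE) for _ in range(ZERO_POOL_TARGET))

def alloc_zero_page():
    """Hand out a blank page. Hitting the pool is instant; missing it means
    paying to zero a page on the spot (the slow path the pool exists to avoid)."""
    if zero_pool:
        return zero_pool.popleft()     # fast path: already zeroed
    return bytearray(PAGE_SIZE)        # slow path: zero it now

def background_zeroer():
    """What an idle-time zero-page thread would do: top the pool back up."""
    while len(zero_pool) < ZERO_POOL_TARGET:
        zero_pool.append(bytearray(PAGE_SIZE))

page = alloc_zero_page()               # instant: pulled from the pool
assert all(b == 0 for b in page)
background_zeroer()                    # idle time replenishes the pool
```

The design point is the same as in the post: the pool only needs to be big enough to absorb bursts of allocations, because idle time is cheap for refilling it.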

honey the codewitch wrote:

I did. I said in an ideal world RAM utilization would always be at 100%. That's a hypothetical. It's not intended to be real world, but rather illustrative of a point: RAM is always drawing power, even at idle. The most efficient way to use it is to allocate it for something, even if you do so ahead of time. I did not say that it would or even should be utilized by one application.

SeattleC++ wrote (#41):

                I think the "ideal world" you imagine ceased to exist with the invention of multitasking. To me, the ideal world is one in which the demands of the 200 processes running on a PC for memory, I/O bandwidth, and other resources are balanced by the operating system to achieve the best overall performance. Run one memory-hogging program on a PC and you get 100% memory utilization. Run two such programs and what you get is virtual memory page thrashing and a thousandfold decrease in performance. I remember early Java programs that were like this.

trønderen wrote:

The refresh occurs at a far lower frequency than the ordinary reading/writing. You can do several hundred r/w accesses for each refresh.

honey the codewitch wrote (#42):

                  Ah, good to know.


honey the codewitch wrote:

Edit: To be clear, I'm talking about user-facing machines rather than server or embedded, and a hypothetical ideal. In practice CPUs need about 10% off the top to keep their scheduler working, for example, and there are a lot of details I'm glossing over in this post, so it would be a good idea to read the comments before replying. There has been a lot of ground covered since.

When your CPU core(s) aren't performing tasks, they are idle hands. When your RAM is not allocated, it's doing no useful work. (Still drawing power though!) While your I/O was idle, it could have been preloading something for you.

I see people complain about resource utilization in modern applications, and I can't help but think of the above. RAM does not work like non-volatile storage, where it's best to keep some free space available. Frankly, in an ideal world, your RAM allocation would always be 100%.

Assuming your machine is performing any work at all (and not just idling), ideally it would do so utilizing the entire CPU, so it could complete quickly. Assuming you're going to be using your machine in the near future, your I/O may be sitting idle, but ideally it would be preloading things you were planning to use, so it could launch faster.

My point is this: utilization is a good thing, in many if not most cases. What's that old saw? Idle hands are the devil's playground. Your computer is like that. I like to see my CPU work hard when it works at all. I like to see my RAM utilization be *at least* half even at idle. I like to see my storage ticking away a bit in the background, doing its lazy writes. This means my computer isn't wasting my time. Just sayin'

Lost User wrote (#43):

My current utilization: 3 monitors (one playing internet TV); 2 Edge browsers; Outlook; 2 file explorers; 3 different graphics programs; 3 open PDFs; Character Map; 4 VS 2022 windows; Snipping Tool; 2 image viewers; and Task Manager. 63% memory (out of 16 GB), 4% CPU (Ryzen 7).

                    "Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I

SeattleC++ wrote:

I think the "ideal world" you imagine ceased to exist with the invention of multitasking. To me, the ideal world is one in which the demands of the 200 processes running on a PC for memory, I/O bandwidth, and other resources are balanced by the operating system to achieve the best overall performance. Run one memory-hogging program on a PC and you get 100% memory utilization. Run two such programs and what you get is virtual memory page thrashing and a thousandfold decrease in performance. I remember early Java programs that were like this.

trønderen wrote (#44):

                      SeattleC++ wrote:

                      Run two such programs and what you get is virtual memory page thrashing and a thousandfold decrease in performance.

If that really were the case, I would immediately throw that OS out the window! Of course you cannot expect 200% performance, 100% for each process. You must expect that the process (hence working set) switching takes some resources. But no program uses all of its memory all the time; the reality is that even when you think your program is all over the place, there are plenty of untouched physical memory pages that can be used by another process. Any decent MMS hardware and OS can handle that quite well. If your program really does make use of 100%, then any 10% (or maybe even 1%) increase in the data structures of that single program would take long strides towards that "thousandfold decrease in performance".

If you keep insisting that your program actually makes use of 100% of the RAM: take a look in Resource Monitor, the Memory tab. Is it really true that the color bar is all green ("In use") or orange ("Modified")? No dark or light blue? If you flush memory - my tool for doing that is Sysinternals RamMap; its Empty menu has commands for emptying standby lists and flushing modified pages - there is a definite chance that the color bar goes at least a little blue at the right end. Probably much more than you would expect! Let your program run, and see how long it takes before all that blue has turned green/orange. Probably much longer than you would expect!

I am of course assuming that you have a "reasonable" amount of RAM. In the old days of 16-bit minis, a memory card with a mebibyte of RAM cost around USD 10,000 (the Euro wasn't invented then); inflation would bring that to USD 50,000 today, so you didn't buy RAM that you didn't need. This one mini had an OS that would actually run (or maybe I should say 'crawl') with two 2 Ki pages of RAM available to user processes (the rest taken by the OS; "4 Ki should be enough for everybody!" :-)). The only ones actually running on 4 Ki for paging were the OS developers doing stress tests to see if practice matched theory. It did, but that configuration failed to enter the Top500 list :-). Those OS developers claimed that any system doing physical paging more than 5% of the time is heavily starved of memory. I have never encountered any production system doing that much paging. But if you regularly run two processes side by side, each with an active working set …

honey the codewitch wrote:

I did. I said in an ideal world RAM utilization would always be at 100%. That's a hypothetical. It's not intended to be real world, but rather illustrative of a point: RAM is always drawing power, even at idle. The most efficient way to use it is to allocate it for something, even if you do so ahead of time. I did not say that it would or even should be utilized by one application.

Ralf Quint wrote (#45):

With 100% CPU utilization, you will find that you can barely move the mouse or press a key in Windows (or macOS, Linux). Likewise, at least on Windows, if your RAM allocation goes above 90%, or the "available" RAM shown in Task Manager drops below 1 GB (whichever comes first), your system will become significantly sluggish. So no, your "ideal world" doesn't exist, and 100% utilization of any computer resource will lead to a pretty much impossible-to-use system.

trønderen wrote:

No. 32 or 64 bit logical addressing removed the need for it. [...]

Choroid wrote (#46):

Double WOW, and thank you for the time you put into this reply. A lot of computer design information here. Don't Google "LIM Card": I only found one, and it's a door relay?? Makes me wonder how the industry made FASTER PC disks. The quote from the "Famous Industry Leader" I think tells me why Windows 11 needed a new hardware configuration: the OS got too big, so let the end users buy new machines if they want our OS. I have a Dell T7600 Precision Workstation (refurbished) with two Xeon E5 2.9 GHz processors and 64 GB RAM, 64-bit, and an Nvidia P400 graphics card with only 2 GB. The OEM died. Thank goodness for Plug & Play. Windows 7 Pro works; I do not make any large demands other than VS 2019 and a SQLite DB.
