Code Project

Vista memory usage

The Lounge · Tags: question, html, com, performance · 42 Posts · 19 Posters
  • Quoting Andy Brummer:

    That sounds like it is a much better way to handle memory in principle, and I'd applaud it if it didn't require adding an extra gig of memory and another hard drive just to get back to XP levels of performance. Performance optimizations should improve performance for a common range of uses, not just a few cases. [edit]Just like you would think that moving a site from ASP to ASP.NET would improve performance and provide more options for usability. Sorry Chris.[/edit]


    I can imagine the sinking feeling one would have after ordering my book, only to find a laughably ridiculous theory with demented logic once the book arrives - Mark McCutcheon

    Member 96 wrote (#29):

    Andy Brummer wrote:

    Just like you would think that moving a site from ASP to ASP.NET would improve performance and provide more options for usability

    Boo! :) Having been a person who attempted *really* hard over a four-month period to write an ASP site in C++ back in the day, for a semi-sophisticated application, then realizing it was actually easier to write the entire web *server* myself (and I did), and now having done nearly the same task in ASP.NET in a matter of a few weeks last year, I'm dead certain Chris made a fantastic decision. ;)


    All programmers are playwrights and all computers are lousy actors.

    • Quoting Member 96:

      Andy Brummer wrote:

      Just like you would think that moving a site from ASP to ASP.NET would improve performance and provide more options for usability

      Boo! :) Having been a person who attempted *really* hard over a four-month period to write an ASP site in C++ back in the day, for a semi-sophisticated application, then realizing it was actually easier to write the entire web *server* myself (and I did), and now having done nearly the same task in ASP.NET in a matter of a few weeks last year, I'm dead certain Chris made a fantastic decision. ;)


      All programmers are playwrights and all computers are lousy actors.

      Andy Brummer wrote (#30):

      John C wrote:

      Having been a person who attempted *really* hard over a four-month period to write an ASP site in C++ back in the day

      masochist

      John C wrote:

      now having done nearly the same task in ASP.NET in a matter of a few weeks last year I'm dead certain Chris made a fantastic decision.

      That's a given. It's still frustrating to watch it unfold. CP is going to rock when all the kinks get worked out.

      I can imagine the sinking feeling one would have after ordering my book, only to find a laughably ridiculous theory with demented logic once the book arrives - Mark McCutcheon

      • Quoting Member 96:

        Thunderbox666 wrote:

        it is taking far more time to get accepted than XP or any of the previous versions have

        Hmm... maybe my memory is going, but I seem to recall back in the old Usenet days, when I ran a BBS, there was an ongoing "holy war" over Windows 95 that lasted at least two years. People talked of how it was crazy resource-hungry and said Microsoft was going to have to keep up support for Windows 3.1 because it was so much faster and more efficient. Sound familiar? And in those days Windows 95 was being attacked heavily by the OS/2 guys, who (rightfully) showed many things they did better and scoffed at Windows 95 calling itself object-oriented. I remember a lot of holdouts over XP and a lot of bitching about RAM use, etc. Sorry, but none of this is new, and I can guarantee you I'll win that buck. :)


        All programmers are playwrights and all computers are lousy actors.

        Thunderbox666 wrote (#31):

        John C wrote:

        Sound familiar?

        Nope :-D I would have been just starting primary school, lol. I'm still only 19.


        "There are three sides to every story. Yours, mine and the truth" ~ unknown

        • Quoting Mike Dimmick:

          The answers to some of your questions may be found in "Windows Internals, Fourth Edition". The only difference with loading data into 'used' memory rather than 'free' memory is that the 'used' memory has to be taken away from whatever working set it belongs to first.

          Windows basically has a few categories of memory status: assigned to one or more working sets; 'standby' (trimmed from a working set, either read-only or writable with changes already written back to the original file [memory-mapped files] or to the pagefile); 'modified' (trimmed from a working set, unsaved changes not yet written back); 'free' (link back to the working set no longer retained, contents unknown); or 'zero' (all bytes known to be zero because zeros were written by the idle thread).

          When allocating physical memory to a working set, if the memory is going to be used by kernel-mode code only, or immediately filled by data coming from disk, the OS takes a page from the 'free' list. Otherwise it takes memory from the 'zero' list to ensure that the process can't see another process's data - this is for security. If the appropriate list is exhausted, the OS tries the other one, zeroing a page from the free list if necessary. If that is also exhausted, it then tries the standby list, but to use a page from that list it has to unlink it from the invalid page table entry that was pointing to that page, which takes a bit more time. If no pages are available from the standby list, the OS will write out a page from the modified list so it can reuse it - this is the only time that eager swapping occurs.

          When the OS decides, periodically or if memory demands get too great, that a working set is too big, the least-recently-used pages are trimmed; that is, the corresponding page table entries are marked inactive. (The processor will then generate a page fault the next time any code touches the page.) If a page was modified since it was last written (tracked automatically by the processor setting a bit in the page table entry), it goes onto the modified list, otherwise onto the standby list. Background threads then lazily write back data from the modified list, at which point those pages move to the standby list. However, the page table entry still records which physical page was used, so when a page fault occurs, the fault handler first checks whether the data is still actually in memory on the standby or modified list; if so, it simply fixes up the PTE to be valid again, takes the page off the corresponding list, and dismisses the fault without going to disk.
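Mike's page-list life cycle can be sketched as a toy state machine. This is purely illustrative (class and method names are invented for the sketch; the real memory manager tracks all of this in the PFN database, with zeroing, PTE fixup, and write-back elided here), but it captures the fallback order he describes:

```python
from collections import deque

class PageLists:
    """Toy model of the Windows page-list life cycle described above.
    Pages are bare ints here; this is a sketch, not the real algorithm."""

    def __init__(self, zeroed, free):
        self.zero = deque(zeroed)      # all-zero pages, safe for user data
        self.free = deque(free)        # contents unknown
        self.standby = deque()         # trimmed, clean copy already on disk
        self.modified = deque()        # trimmed, dirty, not yet written back

    def allocate(self, for_user_data=True):
        """Pick a physical page following the fallback order: user data
        prefers the zero list (security), kernel/disk-filled pages take
        'free' directly; then standby; then modified as a last resort."""
        if for_user_data:
            if self.zero:
                return self.zero.popleft()
            if self.free:                   # would be zeroed on demand
                return self.free.popleft()
        else:
            if self.free:
                return self.free.popleft()
            if self.zero:
                return self.zero.popleft()
        if self.standby:                    # unlink from old PTE, then reuse
            return self.standby.popleft()
        if self.modified:                   # write back first (eager swap)
            return self.modified.popleft()
        raise MemoryError("no physical pages left")

    def trim(self, page, dirty):
        """Working-set trim: dirty pages go to modified, clean to standby."""
        (self.modified if dirty else self.standby).append(page)
```

A trimmed page that is touched again before being repurposed would simply be pulled back off its list (the "soft" page fault in Mike's last paragraph); here that is just `trim` followed by `allocate` returning the same page.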

          Cyrilix wrote (#32):

          Very nice post, Mike.

          • Quoting martin_hughes:

            CataclysmicQuantums wrote:

            When are programmers going to make computers use their resources to do more useful and sophisticated things instead of being lazy asses and writing things like this...

            Poor coding is one thing, wanting to watch full screen HD movies on your computer is quite another.

            CataclysmicQuantums wrote:

            Say that to the family who worked hard to save up just enough money to buy a family computer.

            I will. Furthermore, I'll tell them to seek expert advice on the specifications of their new PC before parting with the cash. Besides which, owning a computer is more affordable now than it has ever been; an additional 2 GB of RAM from Crucial costs $117.99.

            "On one of my cards it said I had to find temperatures lower than -8. The numbers I uncovered were -6 and -7 so I thought I had won, and so did the woman in the shop. But when she scanned the card the machine said I hadn't. "I phoned Camelot and they fobbed me off with some story that -6 is higher - not lower - than -8 but I'm not having it." -Tina Farrell, a 23 year old thicky from Levenshulme, Manchester.

            Cyrilix wrote (#33):

            $117.99 for 2 GB? I'm sure if you search harder, you can find much better deals than that. :)

            • Quoting Dirk Higbee:

              Why oh why does everyone seem to have trouble with the OS? Does anyone do a custom install and proper configuration? I am currently running Vista on a P4 with 1 GB of memory. I have iTunes running, I'm here roaming around CP, and I am doing Google searches in another browser, and I am using about 45% of my memory. I can open VS2008 Express and work on a project as well and still not use all my memory. What is everyone doing that is giving them problems?

              If you can read, you can learn

              Rocky Moore wrote (#34):

              That 45% can be tricky. You need to pay attention to the swap file as well; virtual RAM can be storing a bunch and causing a lower RAM consumption reading. I know on my system running Vista Ultimate 64 with only 1 GB RAM (I have a bad memory bank on my board and do not have time to fix it currently), I often had my system come almost to an entire halt. Of course, having several IE instances, SQL Server, SQL Management Studio and Visual Studio (2008 beta back then) all going at the same time was a good part of the problem :). Anyway, I was getting to the point where I figured I had to fix the system so I could add more RAM, but I thought I would give ReadyBoost a try, so I purchased a 4 GB ReadyBoost-compatible flash drive and gave it a whirl. While the system can still spawn a lot of virtual RAM at times, it no longer locks up my system; it remains mostly usable even under load. That is about the only difference I noticed using ReadyBoost - it just made the virtual RAM access much more survivable - but that is enough; it justified the purchase.

              Rocky <>< Blog Post: Silverlight goes Beta 2.0 Tech Blog Post: Cheap Biofuels and Synthetics coming soon?

              • Quoting Member 96:

                I honestly haven't seen that as a problem. The only thing that has annoyed me about it is the constant hard drive access when I'm doing nothing that should be accessing the hard drive. I think it won't be long before hard drives with moving parts are obsolete (hopefully) anyway so it's all kind of a moot point then.


                All programmers are playwrights and all computers are lousy actors.

                Rocky Moore wrote (#35):

                Constant HD access is not always for this reason. If virtual RAM is moving, it can cause it. Then there is the Search Index service that can keep it spinning, along with Defender and that SVN client (Tortoise, or something like that). There was one more along these lines, but I do not recall what it was. The old SysInternals drive monitor was handy in finding out all the services pulling my drive around. At least now my HD drive light gets a bit of a rest :)

                Rocky <>< Blog Post: Silverlight goes Beta 2.0 Tech Blog Post: Cheap Biofuels and Synthetics coming soon?

                • Quoting Rocky Moore:

                  Constant HD access is not always for this reason. If virtual RAM is moving, it can cause it. Then there is the Search Index service that can keep it spinning, along with Defender and that SVN client (Tortoise, or something like that). There was one more along these lines, but I do not recall what it was. The old SysInternals drive monitor was handy in finding out all the services pulling my drive around. At least now my HD drive light gets a bit of a rest :)

                  Rocky <>< Blog Post: Silverlight goes Beta 2.0 Tech Blog Post: Cheap Biofuels and Synthetics coming soon?

                  Member 96 wrote (#36):

                  It was either SuperFetch or indexing. I fired up Process Monitor (the replacement for the old SysInternals tool), narrowed it down to one or the other or both, and shut them both down a while back. I have a quiet office and the constant rumbling from my SATA array was pissing me off. My computer is plenty fast without those features turned on.


                  When everyone is a hero no one is a hero.

                  • Quoting Mike Dimmick:

                    The answers to some of your questions may be found in "Windows Internals, Fourth Edition". The only difference with loading data into 'used' memory rather than 'free' memory is that the 'used' memory has to be taken away from whatever working set it belongs to first.

                    Windows basically has a few categories of memory status: assigned to one or more working sets; 'standby' (trimmed from a working set, either read-only or writable with changes already written back to the original file [memory-mapped files] or to the pagefile); 'modified' (trimmed from a working set, unsaved changes not yet written back); 'free' (link back to the working set no longer retained, contents unknown); or 'zero' (all bytes known to be zero because zeros were written by the idle thread).

                    When allocating physical memory to a working set, if the memory is going to be used by kernel-mode code only, or immediately filled by data coming from disk, the OS takes a page from the 'free' list. Otherwise it takes memory from the 'zero' list to ensure that the process can't see another process's data - this is for security. If the appropriate list is exhausted, the OS tries the other one, zeroing a page from the free list if necessary. If that is also exhausted, it then tries the standby list, but to use a page from that list it has to unlink it from the invalid page table entry that was pointing to that page, which takes a bit more time. If no pages are available from the standby list, the OS will write out a page from the modified list so it can reuse it - this is the only time that eager swapping occurs.

                    When the OS decides, periodically or if memory demands get too great, that a working set is too big, the least-recently-used pages are trimmed; that is, the corresponding page table entries are marked inactive. (The processor will then generate a page fault the next time any code touches the page.) If a page was modified since it was last written (tracked automatically by the processor setting a bit in the page table entry), it goes onto the modified list, otherwise onto the standby list. Background threads then lazily write back data from the modified list, at which point those pages move to the standby list. However, the page table entry still records which physical page was used, so when a page fault occurs, the fault handler first checks whether the data is still actually in memory on the standby or modified list; if so, it simply fixes up the PTE to be valid again, takes the page off the corresponding list, and dismisses the fault without going to disk.

                    Luis Alonso Ramos wrote (#37):

                    Wow Mike, great post! I just ordered Windows Internals (the Windows Internals I read was written in 1993 by Matt Pietrek about Win3.1). I guess it will be an interesting read. Thanks for your post, it really cleared many things up. I'll bookmark this in my blog in case I want to refer to it later. :)

                    Luis Alonso Ramos Intelectix Chihuahua, Mexico

                    My Blog!

                    • Quoting Patrick Etc:

                      John C wrote:

                      This got me curious as to why it does use approx. 1 GB of memory after boot, and here is at least in part the answer:

                      This seems like a fantastically bad idea to me. OK, it may work well for the "average" user, who is almost never going to run a high-memory-load application like a game or Visual Studio, but for everyone else it's going to get in the way when the app you're loading suddenly asks for 500 MB or 1 GB of RAM. Not to mention that there is a distinct benefit to keeping some of that RAM unused, even if it is powered: power usage and hardware life expectancy. It's an idea that makes sense, and yet it seems to be lacking something.


                      It has become appallingly obvious that our technology has exceeded our humanity. - Albert Einstein

                      Glenn Dawson wrote (#38):

                      Here's my SuperFetch adventure. I used Daemon Tools to mount the MSDN ISO image for Visual Studio (about 3.5 GB) in order to install it. After unmounting the image and a reboot, the system was fairly unresponsive for several minutes from constant disk access. Apparently it was fetching the image file into memory; Resource Monitor confirmed this.

                      • Quoting Paul Sanders the other one:

                        You're right - caching is important. Windows would run like a dog without it. It's easy to demonstrate just how important it is, like so:

                        1. Restart your computer.
                        2. WAIT until the hard disk light stops flashing.
                        3. Start up IE and time how long it takes.
                        4. Shut down IE, then start it up again, again timing how long it takes.

                        The difference is startling. That's caching for you. Utilising all your RAM on the off-chance that it might avoid a disk access is a no-brainer. The clever part is deciding what to hang on to and what to throw away. Is Vista driving down the cost of RAM, like XP once did? Methinks it is.

                        Paul Sanders http://www.alpinesoft.co.uk
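The "clever part" Paul mentions, deciding what to hang on to and what to throw away, is classically approximated with least-recently-used eviction. A minimal sketch (illustrative only; the class and method names are invented here, and Windows' actual policy is closer to working-set aging than strict LRU):

```python
from collections import OrderedDict

class FileCache:
    """Minimal LRU file-block cache: keep the most recently touched
    blocks, evict the least recently used when capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()          # key -> data, oldest first

    def read(self, key, load):
        """Return cached data, or call load() on a miss (the 'disk hit')."""
        if key in self.blocks:
            self.blocks.move_to_end(key)     # refresh recency on a hit
            return self.blocks[key]
        data = load()                        # slow path: go to disk
        self.blocks[key] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # throw away the LRU block
        return data
```

Reading the same block twice only touches "disk" once, which is exactly the IE restart-timing effect in the steps above; once the cache fills, the least recently used block is the one sacrificed.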

                        Glenn Dawson wrote (#39):

                        My problem is with step 2: I don't like waiting after every reboot. Does caching multiple gigabytes of stuff, instead of the most frequently accessed few hundred megabytes, make it that much faster? I've disabled SuperFetch, and I've yet to feel that IE, Visual Studio, or Photoshop could load any faster. They still get cached the standard way, so opening them a second time is faster than the first start.
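Glenn's question, whether caching gigabytes beats caching the most-used few hundred megabytes, can be eyeballed with a back-of-envelope model. Assuming (purely for illustration, this is not measured Vista behaviour) that file-access popularity follows a Zipf distribution, the hit rate from caching only the top items shows sharply diminishing returns:

```python
def zipf_coverage(n_items, cached_items, s=1.0):
    """Fraction of accesses served from cache if the cache holds the
    `cached_items` most popular of `n_items` files, assuming access
    frequency follows a Zipf law with exponent s (a common, if rough,
    model for file popularity)."""
    weight = lambda k: 1.0 / k ** s
    total = sum(weight(k) for k in range(1, n_items + 1))
    hit = sum(weight(k) for k in range(1, cached_items + 1))
    return hit / total
```

Under this model, caching the top 1% of 10,000 files already covers roughly half of all accesses, and growing the cache tenfold adds considerably less than that first slice did, which is the intuition behind preferring the "most frequently accessed hundreds of megabytes".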

                        • Quoting Thunderbox666:

                          John C wrote:

                          Sound familiar?

                          Nope :-D I would have been just starting primary school, lol. I'm still only 19.


                          "There are three sides to every story. Yours, mine and the truth" ~ unknown

                          Dan Neely wrote (#40):

                          I was an HS freshman when 95 came out; it took about two years for it to become accepted in the gaming world, mainly because it gave enough extra stuff over DOS/3.1 that eventually the game devs all started writing for it. I don't recall a big snowball over 98, nor do I recall anyone saying anything good about ME. OTOH, everyone who didn't like 9x was jumping onto 2k instead. Unless you want to count Win32/64, there isn't a parallel codebase to be called better this time around. 2k to XP had major numbers of holdouts for several years; eventually, as hardware got to the point that it wasn't 2s vs 1s but .2s vs .1s, the overhead became unnoticeably small, undercutting everything except the 'it looks ugly' argument. Eventually the same thing will happen with Vista, probably combined with a gradual shift in the gaming world from writing DX9 and adding a few DX10 extras (but keeping everything else DX9 regardless) to writing DX10 primarily and only implementing basic features in DX9, which'll pressure gamers into switching hard. Alternately, the need for 64-bit RAM addressing might do the same. Like XP, Vista has a looks-like-2k-and-prior option, although oddly enough not a looks-like-XP one. That's something of a pity, since my preferred start menu would be XP with search. Vista's search box only works if you have an idea what the app is called. At times, though, actually eyeballing is needed to jog the memory, and my home XP machine has a ~2600-pixel-tall start menu (3 1200px columns). Vista's short scrolling list doesn't work well there.

                          Otherwise [Microsoft is] toast in the long term no matter how much money they've got. They would be already if the Linux community didn't have it's head so firmly up it's own command line buffer that it looks like taking 15 years to find the desktop. -- Matthew Faithfull

                          • Quoting Glenn Dawson:

                            My problem is with step 2: I don't like waiting after every reboot. Does caching multiple gigabytes of stuff, instead of the most frequently accessed few hundred megabytes, make it that much faster? I've disabled SuperFetch, and I've yet to feel that IE, Visual Studio, or Photoshop could load any faster. They still get cached the standard way, so opening them a second time is faster than the first start.

                            Paul Sanders the other one wrote (#41):

                            Check out XP's hibernate function. Resuming from hibernation is much faster than booting from scratch (ask any hedgehog).

                            Paul Sanders http://www.alpinesoft.co.uk

                            • Quoting Paul Sanders the other one:

                              Check out XP's hibernate function. Resuming from hibernation is much faster than booting from scratch (ask any hedgehog).

                              Paul Sanders http://www.alpinesoft.co.uk

                              Glenn Dawson wrote (#42):

                              I meant from things like Windows Update. :) Without SuperFetch trying to load gigs of data at boot time, my system usually starts in 30 seconds or so, which is fast enough for me.
