Code Project
What's going on with this performance change?

Tags: csharp, asp-net, sharepoint, debugging, performance
#1 KP Lee wrote:

    I wanted to see how close the performance of a loop in C# is to something written in assembly.

            uint x = int.MaxValue;
            DateTime st = DateTime.Now;
            for (uint i = 0; i < x; i++) ;
            TimeSpan sp = new TimeSpan(DateTime.Now.Ticks - st.Ticks);
            Console.WriteLine("looped: {0} in {1}", x, sp);
    

    In Debug it ran in 6+ seconds. Oh, idiot! Compile it in Release! Then it ran in 2.1 to 2.2 seconds. The CPU is rated at 2.1 GHz, so the loop should have finished in just under 2 seconds; close enough to assembler speeds.

    Then I made some changes elsewhere in the code and used uint to define x. Wha?? Over 6 seconds. Went back to int, and now it is 3 seconds:

    looped: 4294967295 in 00:00:06.2643583
    looped: 2147483647 in 00:00:03.0888054

    It's as if the loop changed from 2 ops per iteration to 3. When I checked the performance logs, it ran at 50% CPU on a dual-core machine, meaning 100% of one core was dedicated to this loop for the 3 seconds. Any ideas why changes in other sections of logic might affect this piece? The machine normally runs 90-97% idle with occasional dips into the 80s.

    Modified:
    1. Subject, to be more specific.
    2. Added the last line of the code segment because it was missing.
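    The back-of-envelope arithmetic behind "just under 2 seconds": 2^31 iterations at 2.1 GHz works out to about one second per clock cycle spent in each iteration, so a 2-cycle loop body lands near 2 seconds and a 3-cycle body near 3. A minimal sketch of that estimate (the cycle counts are the post's guesses, not measured values):

    ```csharp
    using System;

    class LoopEstimate
    {
        // Estimated seconds for `iterations` loop passes, each costing
        // `cyclesPerIteration` cycles, on a processor running at `clockHz`.
        static double EstimateSeconds(long iterations, int cyclesPerIteration, double clockHz)
        {
            return iterations * (double)cyclesPerIteration / clockHz;
        }

        static void Main()
        {
            long n = int.MaxValue;   // 2,147,483,647 iterations
            double hz = 2.1e9;       // 2.1 GHz clock

            Console.WriteLine(EstimateSeconds(n, 2, hz)); // ~2.05 s, near the 2.1 s runs
            Console.WriteLine(EstimateSeconds(n, 3, hz)); // ~3.07 s, near the 3.09 s run
        }
    }
    ```

    The close match between these estimates and the observed 2.1 s and 3.09 s runs is why the post reads the slowdown as one extra operation per iteration.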

KP Lee wrote:

      I wanted to see how close the performance of a loop in C# is to something written in assembly. [...]

#2 Lost User wrote:

      You cannot rely on DateTime values for timing in a multi-tasking operating system. You are measuring elapsed wall-clock time, which includes any switching between applications and OS overhead.

      Veni, vidi, abiit domum

KP Lee wrote:

        I wanted to see how close the performance of a loop in C# is to something written in assembly. [...]

#3 Matt T Heffron wrote:

        Like Richard said, DateTime is not good for this type of timing. Use System.Diagnostics.Stopwatch:

        uint x = int.MaxValue;
        System.Diagnostics.Stopwatch st = System.Diagnostics.Stopwatch.StartNew();
        for (uint i = 0; i < x; i++) ;
        st.Stop();
        // Use st.Elapsed here: Stopwatch.ElapsedTicks counts ticks of
        // Stopwatch.Frequency, not the 100 ns ticks TimeSpan expects.
        TimeSpan sp = st.Elapsed;
        Console.WriteLine("looped: {0} in {1}", x, sp);


#4 KP Lee wrote:

          Richard MacCutchan wrote:

          ...DateTime values to time in a multi-tasking operating system.

          True, which is why I accepted slightly less than the theoretical optimum when it ran in 2+ seconds. But this shifted to 150% slower performance. I added a performance monitor, which added a slightly higher load, but when I ran the code again it took 50% of the CPU, indicating that one of the two cores was 100% dedicated to this thread. For load-balancing to account for a 150% slow-down, the process should have taken 33-34 percent of the load consistently over the three seconds it ran, or switched between 50 and 10 percent. Instead, it is acting as if the loop changed from doing 2 operations per iteration to 3.

          If it always acted the slower way, I'd have to accept that C# puts an unneeded additional operation I can't control into a looping process. If I can't use elapsed time, is there any other method to capture the efficiency of the looping mechanism?

          Another thing I noticed: even though DateTime measures 1 tick as 100 nanoseconds, the measured time shifts in 0-to-1-millisecond increments, with the 10K-ticks-to-1-ms boundary slightly changing. That tells me Now retrieves a periodically updated cached time, not an immediate reading of the clock.
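          The coarse granularity of DateTime.Now described above can be observed directly. A minimal sketch using only standard .NET APIs: spin until DateTime.Now.Ticks changes and report the smallest step seen, which on typical Windows systems is on the order of 1 ms (or the ~15.6 ms timer interrupt interval), not 100 ns:

          ```csharp
          using System;

          class NowGranularity
          {
              static void Main()
              {
                  long minStep = long.MaxValue;
                  long last = DateTime.Now.Ticks;
                  // Watch 50 transitions of DateTime.Now and record the smallest jump.
                  for (int transitions = 0; transitions < 50; )
                  {
                      long now = DateTime.Now.Ticks;
                      if (now != last)
                      {
                          minStep = Math.Min(minStep, now - last);
                          last = now;
                          transitions++;
                      }
                  }
                  // 10,000 ticks == 1 ms.
                  Console.WriteLine("smallest observed step: {0} ticks ({1} ms)",
                                    minStep, minStep / 10000.0);
              }
          }
          ```

          The exact step depends on the OS timer configuration, but it will be far larger than a single 100 ns tick, confirming that Now is updated periodically rather than read on demand.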

Matt T Heffron wrote:

            Like Richard said, DateTime is not good for this type of timing. Use System.Diagnostics.Stopwatch. [...]

#5 KP Lee wrote:

            Thanks, it's good to know about Stopwatch, but that doesn't explain a 150% shift in performance. (If it had changed by 2 to 3 milliseconds, I wouldn't bat an eye; I'd expect that with DateTime.) This is interesting:

            looped: 4294967295 in 00:00:01.0834284

                    System.Diagnostics.Stopwatch st = new System.Diagnostics.Stopwatch();
                    st.Start();
                    for (uint i = 0; i < x; i++) ;
                    st.Stop();
                    TimeSpan sp = new TimeSpan(st.ElapsedTicks);
                    Console.WriteLine("looped: {0} in {1}", x, sp);
            

            My 2010 Express version doesn't have StartNew, and Start blows up when its void return is assigned to st. I ran a physical stopwatch and got 5.93 seconds, and I was slow on the trigger:

                    DateTime tim = DateTime.Now;
                    System.Diagnostics.Stopwatch st = new System.Diagnostics.Stopwatch();
                    st.Start();
                    for (uint i = 0; i < x; i++) ;
                    st.Stop();
                    TimeSpan sp = new TimeSpan(DateTime.Now.Ticks - tim.Ticks);
                    Console.WriteLine("looped: {0} in {1}", x, sp);
                    sp = new TimeSpan(st.ElapsedTicks);
                    Console.WriteLine("looped: {0} in {1}", x, sp);
            

            Produced:

            looped: 4294967295 in 00:00:04.1742388
            looped: 4294967295 in 00:00:00.8532933

            So DateTime came up with the time theory expects for that loop, and the stopwatch came up with a time 5 times faster than theory expects and than what I am seeing. Gee, which one do you think I should trust? One that is slightly faster than my physical reflexes and closely aligns with theory, or one that defies theory and observed behavior? I'm guessing the Stopwatch was trying to share the same CPU that needed 100% attention on the code it was timing and that threw off the timer.

            So putting in changes that, if anything, should slow the code down makes it run 150% better today? I don't get it. Arrrg. Pull out the stopwatch code and:

            looped: 4294967295 in 00:00:06.2273562

            Put it back in:

            looped: 4294967295 in 00:00:04.1612380
            looped: 4294967295 in 00:00:00.8508048

            Comment out the Start and Stop and...

            looped: 4294967295 in 00:00:04.1622381
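            A likely explanation for the "impossibly fast" Stopwatch numbers in this thread: `new TimeSpan(st.ElapsedTicks)` treats raw Stopwatch ticks as 100 ns TimeSpan ticks, but Stopwatch ticks are in units of `Stopwatch.Frequency`, which on many machines is not 10,000,000 per second. A minimal sketch of the correct conversion (the 48 MHz frequency used in the helper's demo is a made-up example value, not this machine's):

            ```csharp
            using System;
            using System.Diagnostics;

            class StopwatchTicks
            {
                // Convert raw Stopwatch ticks to 100 ns TimeSpan ticks.
                static long ToTimeSpanTicks(long stopwatchTicks, long frequency)
                {
                    return stopwatchTicks * 10000000L / frequency;
                }

                static void Main()
                {
                    Stopwatch st = Stopwatch.StartNew();
                    for (uint i = 0; i < 100000000; i++) ;
                    st.Stop();

                    // WRONG unless Stopwatch.Frequency happens to be exactly 10,000,000:
                    TimeSpan raw = new TimeSpan(st.ElapsedTicks);
                    // RIGHT: Elapsed already performs the frequency conversion:
                    TimeSpan converted = st.Elapsed;

                    Console.WriteLine("Frequency: {0} ticks/s", Stopwatch.Frequency);
                    Console.WriteLine("raw:       {0}", raw);
                    Console.WriteLine("converted: {0}", converted);

                    // With a hypothetical 48,000,000 Hz timer, one second of raw
                    // ticks converts to exactly 10,000,000 TimeSpan ticks.
                    Console.WriteLine(ToTimeSpanTicks(48000000, 48000000)); // 10000000
                }
            }
            ```

            If Stopwatch.Frequency on this machine were roughly 5x below 10 MHz, `new TimeSpan(st.ElapsedTicks)` would under-report elapsed time by that same factor, which matches the roughly 5x gap between the DateTime and Stopwatch readings above.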

KP Lee wrote:

              Thanks, it's good to know about Stopwatch, but that doesn't explain a 150% shift in performance. [...]

#6 Bernhard Hiller wrote:

              Thanks for sharing your experiences. It is really nice to see how unreliable Windows is when you need a real-time experience.


#7 KP Lee wrote:

                Bernhard Hiller wrote:

                It is really nice to see how unreliable Windows is when you need a real-time experience.

                Microsoft products are the core of this problem, but I wouldn't blame the OS; this is all the CLR. It's interesting that seemingly unrelated changes cause performance to go in and out of the gutter.

                What's ironic is that calling on Diagnostics should drag performance down. It's a bit like Heisenberg's principle: the act of observing can't be done without affecting what is being observed. Instead, it brings performance up to where it should be, by removing some change that shouldn't affect performance, yet does. Too bad the tool is so unreliable that you can't depend on what it says, at least on my Express version.

                Wouldn't it be something if turning on diagnostics actually caused performance to go beyond the theoretical limit! Of course, then I would suspect it wasn't executing the loop at all.

KP Lee wrote:

                  Microsoft products are the core of this problem, but I wouldn't blame the OS; this is all the CLR. [...]

#8 Bernhard Hiller wrote:

                  By the way, did you see this statement in the documentation at http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch%28v=vs.110%29.aspx: "However, because of bugs in the BIOS or the Hardware Abstraction Layer (HAL), you can get different timing results on different processors."


#9 KP Lee wrote:

                    Read documentation???? What a novel idea :laugh:

                    It's pretty obvious that the default thread affinity is to use the same processor the creating thread is using. I don't know how to get a list of processors, or which one is being used by the main thread. DateTime seems to do very well at picking an idle thread to make its timing updates on.

                    So the recommended process produces wildly inaccurate results, while the not-recommended process seems to produce fairly accurate results. I'll stick with the not-recommended process.

                    PS Thanks for the link to the online documentation.
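                    The documentation remark quoted above suggests pinning the timing code to a single processor so Stopwatch always reads the same core's counter. A minimal sketch of that idea, assuming Windows and the full .NET Framework (the core mask value is an example choice, and `ProcessorAffinity` is the documented knob):

                    ```csharp
                    using System;
                    using System.Diagnostics;

                    class PinnedTiming
                    {
                        static void Main()
                        {
                            // Pin the whole process to core 0 (mask 0x1) so the
                            // high-resolution counter is read from one core only.
                            Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)1;

                            Stopwatch st = Stopwatch.StartNew();
                            for (uint i = 0; i < 100000000; i++) ;
                            st.Stop();

                            // st.Elapsed converts Stopwatch ticks to real time,
                            // whatever Stopwatch.Frequency is on this machine.
                            Console.WriteLine("looped in {0}", st.Elapsed);
                        }
                    }
                    ```

                    Pinning costs throughput on a multi-core box, so it is a measurement aid rather than something to ship; the documentation offers it specifically as a workaround for per-core timer inconsistencies.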
