Speed of execution

  • K Kevin13

    I have just come to C# after many years of C/C++ and am a bit surprised at the relatively slow speed of C# execution. The following is a very simple test piece of code (completely useless) implemented in C# and C, both tested on the same PC running Windows 7 Professional 64-bit. The C# version returns between 30 and 45 ms and the C/C++ version a stable 4.2 ms. Is there anything I can do with my C# code to speed it up?

    C#: Visual Studio 2010, Release build, Optimize on, x86

        Stopwatch time = new Stopwatch();
        int size = 10000000;
        byte[] data = new byte[size];
        unsafe
        {
            fixed (byte* ptr = data)
            {
                time.Restart();
                for (int i = 0; i < size; i++)
                    ptr[i] = 128;
                time.Stop();
            }
        }
        var elapsed = 1000.0 * time.ElapsedTicks / Stopwatch.Frequency;
        Console.WriteLine("unsafe time " + elapsed.ToString() + " ms");
    

    C++: Visual Studio 2008, Release build, x86

    int size = 10000000;
    unsigned char* pData = new unsigned char[size];
    QueryPerformanceCounter(&start);
    for (int i = 0; i < size; i++)
        pData[i] = 128;
    QueryPerformanceCounter(&stop);  // end timestamp (loop body and this call reconstructed; the snippet was truncated in the post)

    I know I could make both faster with threading, but the question is really about the execution speed.

    Finally, after all the postings...

    Thanks, everyone, for your comments and suggestions. C# is good for a large number of tasks, and C/C++ is more appropriate for a different set of tasks.

    PS: I have not been active in the last week - some places in the world have no wifi or 3G! :)

    Matt Meyer
    #3

    You have some goofy stuff in there that shouldn't be needed. You're creating a pointer to your array (which requires the unsafe context), then you use your pointer as if it were the array:

    byte[] data = new byte[size];
    [...]
    unsafe
    {
        fixed (byte* ptr = data)
        {
            [...]
            for (int i = 0; i < size; i++)
                ptr[i] = 128;
            [...]
        }
    }

    All of that should be condensed down to this:

    byte[] data = new byte[size];
    [...]
    for (int i = 0; i < size; i++)
        data[i] = 128;

    Give that a whirl and see how the speeds match up. I'm not sure what performance implications unsafe contexts create.
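
    For reference, a complete, runnable version of that safe variant; the SafeLoop wrapper is just illustrative scaffolding around the timing code from the original post:

    using System;
    using System.Diagnostics;

    class SafeLoop
    {
        static void Main()
        {
            const int size = 10000000;
            byte[] data = new byte[size];
            var time = Stopwatch.StartNew();
            for (int i = 0; i < size; i++)
                data[i] = 128;
            time.Stop();
            var elapsed = 1000.0 * time.ElapsedTicks / Stopwatch.Frequency;
            Console.WriteLine("safe time " + elapsed + " ms");
        }
    }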

    • M Matt Meyer


      Kevin13
      #4

      Thank you. All the unsafe bits and pointers make no difference to the speed of execution. :) Any ideas how to make it faster?

      • S Simon Bang Terkildsen

        First of all, do not use unsafe. 1) There is a performance impact; unless you're playing around with P/Invoke or COM interop you don't need it, and even then it's not worth it, in my opinion. 2) I've never seen an example of something that couldn't be done in safe code. You can write examples where C# or even Java has a better execution time than the same thing would in C++. I really don't want to get into a C++ vs. managed language war here; have a look at this discussion: C++ performance vs. Java/C#. Also, I suggest you forget about a performance issue until you actually have a performance issue.

        Kevin13
        #5

        Thanks. The unsafe bits and pointers are no help in speeding it up. Is there a faster way to access/modify large blocks of data? The app I'm currently working on requires video stream data to be modified, and if this is how fast C# is at this sort of processing, that's not a problem; I will thread this part of the app. I just wanted to make sure I had not missed anything.

        • K Kevin13


          AspDotNetDev
          #6

          You can use loop unwinding. Alternatively, rather than indexing the pointer, you can just increment it by one (ptr++) each loop iteration and use a while loop that only checks whether the pointer you are currently at matches the ending pointer. You can also combine the two techniques. It would go something like this (some syntax and values may be wrong, but you get the idea):

          fixed (byte* basePtr = data)   // pointer arithmetic needs a fixed/unsafe context
          {
              byte* ptr = basePtr;
              byte* ptrEnd = basePtr + (size - (size % 10));
              while (ptr != ptrEnd)
              {
                  *ptr = 128; ptr++;
                  *ptr = 128; ptr++;
                  *ptr = 128; ptr++;
                  *ptr = 128; ptr++;
                  *ptr = 128; ptr++;
                  *ptr = 128; ptr++;
                  *ptr = 128; ptr++;
                  *ptr = 128; ptr++;
                  *ptr = 128; ptr++;
                  *ptr = 128; ptr++;
              }
              ptrEnd += (size % 10);
              while (ptr != ptrEnd)
              {
                  *ptr = 128; ptr++;
              }
          }

          I think there is also some more compact (and hard to understand, but maybe faster) syntax, but I forget it exactly. Something like this:

          *(++ptr) = 128; // Maybe this?
          *(ptr++) = 128; // Or was it this?

          I think C# also has an optimized Array.Copy function. You can fill an array of, say, 100 values, then copy that to the "data" array a bunch of times. Or maybe you can copy the array to itself a bunch of times, each time doubling the number of elements you can copy. It would go something like this (I can't be bothered looking up the syntax right now):

          Array.Copy(data, 0, data, length, length);

          That would be 1 iteration in a loop... you'd obviously have to change the value of length each iteration. And you'd have to account for edge conditions. And it would only be useful if you were really filling the entire array with the same value.
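
          A minimal sketch of that doubling idea, assuming the whole array really should end up holding one value; the seeding step and the variable names are illustrative only:

          byte[] data = new byte[10000000];
          data[0] = 128;                                  // seed a single element
          int filled = 1;                                 // elements that already hold the value
          while (filled < data.Length)
          {
              int chunk = Math.Min(filled, data.Length - filled);
              Array.Copy(data, 0, data, filled, chunk);   // copy the filled part onto the unfilled tail
              filled += chunk;
          }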

          Martin Fowler wrote:

          Any fool can write code that a computer can understand. Good programmers write code that humans can understand.

          • K Kevin13


            Matt Meyer
            #7

            Nope, don't know how to make it faster. I did try a few different loop approaches, but I get the impression they're being optimized down by the compiler into the same thing. They all seem to come out at roughly the same speed (within a ms of each other). If you need something faster, you're going to be better off using C/C++. And, just for kicks, I did try using raw pointer arithmetic in an unsafe context. It was still coming out slower than just looping. I found this odd...

            • K Kevin13


              Luc Pattyn
              #8

              Are you sure you want a big array filled with a single constant? If not, I urge you to investigate (and ask questions about) the actual things you want, not some mock-up. If you need predictable content, Array.Copy would be a prime candidate (fill a smaller array first, then copy it as often as you like).

              FWIW: no matter what language you use, bigger array elements (int or long) would get copied faster than smaller ones (byte), as the loop count, and the cost of the loop overhead, would decrease.

              Finally, as a general comment on performance measurements, anyone proficient in language A and new to language B can easily "prove" that A is faster than B, no matter what A and B are. Half of your C# snippet can be scrapped without affecting the results (and possibly not the performance either); however, automatic optimizations (which exist both in the compiler and the JIT compiler) work best when there isn't too much ballast to begin with. :)
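
              To illustrate the wider-write point, a minimal unsafe sketch; the 0x80 fill pattern, the names, and the choice of a size that is a multiple of 8 are illustrative assumptions, not something taken from the snippets above:

              const int size = 10000000;                     // conveniently a multiple of 8
              byte[] data = new byte[size];
              unsafe
              {
                  fixed (byte* ptr = data)
                  {
                      long* lp = (long*)ptr;
                      long pattern = unchecked((long)0x8080808080808080UL);   // eight 0x80 bytes
                      for (int i = 0; i < size / 8; i++)     // one eighth of the iterations of the byte loop
                          lp[i] = pattern;
                  }
              }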

              Luc Pattyn [My Articles] Nil Volentibus Arduum

              • K Kevin13


                Lost User
                #9

                Kevin13 wrote:

                The C# returns between 30 and 45 ms and the C/C++ a stable 4.2 ms.
                 
                Is there anything I can do with my C# code to speed it up?

                Words of wisdom, you can code in C. You're comparing native code to an interpreted language. Of course it's slower, what'd you expect? It's not a big deal; one can create apps quicker, and the source is more accessible than pure C would be. If .NET seems to be slow on my machine, I know I'm doing something the wrong way (like not reusing objects). If you ever encounter the need for something "faster", then simply call your C-routines from C#.
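
                A minimal sketch of that last suggestion, calling a native routine from C# via P/Invoke. "FastFill.dll" and FillBuffer are hypothetical names; only the DllImport mechanism itself is standard:

                using System;
                using System.Runtime.InteropServices;

                class NativeFillDemo
                {
                    // Hypothetical native export: void FillBuffer(unsigned char* buffer, int length, unsigned char value)
                    [DllImport("FastFill.dll", CallingConvention = CallingConvention.Cdecl)]
                    static extern void FillBuffer(byte[] buffer, int length, byte value);

                    static void Main()
                    {
                        byte[] data = new byte[10000000];
                        FillBuffer(data, data.Length, 128);   // the tight loop runs in native code
                        Console.WriteLine((int)data[0]);
                    }
                }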

                Bastard Programmer from Hell :suss:

                • L Lost User


                  AspDotNetDev
                  #10

                  Eddy Vluggen wrote:

                  You're comparing native code to an interpreted language

                  C# isn't an interpreted language. It's compiled on the machine it is run on, making it possibly faster than pre-compiled code for a more generic architecture. My guess is that the C++ compiler is using some special optimization (e.g., MMX instructions, or loop unwinding) that the C# compiler didn't happen to implement for this scenario.

                  Martin Fowler wrote:

                  Any fool can write code that a computer can understand. Good programmers write code that humans can understand.

                  • A AspDotNetDev


                    Lost User
                    #11

                    AspDotNetDev wrote:

                    It's compiled on the machine it is run on, making it possibly faster than pre-compiled code for a more generic architecture.

                    Compiles to bytecode AFAIK, and runs in a VM.

                    AspDotNetDev wrote:

                    My guess is that the C++ compiler is using some special optimization (e.g., MMX instructions, or loop unwinding) that the C# compiler didn't happen to implement for this scenario.

                    No garbage-collector thread in the background, no framework at all.

                    Bastard Programmer from Hell :suss:

                    • L Lost User


                      AspDotNetDev
                      #12

                      Eddy Vluggen wrote:

                      Compiles to bytecode AFAIK, and runs in a VM.

                      Nope. Compiles to native code and does not use a VM. See http://en.wikipedia.org/wiki/Common_Intermediate_Language#General_information. The DLL contains bytecode, but when (just before, actually) the DLL is run it gets compiled to native code.

                      Eddy Vluggen wrote:

                      No garbage-collector thread in the background, no framework at all.

                      The garbage collector works in stages. It wouldn't slow performance down across the board (e.g., make everything 3x slower). In the OP's example, the array would likely be placed on the large object heap and would not cause compaction when the memory is released. Though, one of those stages could have caused the OP's code to appear to run 3x slower, so the OP should probably run a larger test (say, a second or two).
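
                      A small sketch of that "larger test run" idea: repeat the fill many times and report the best pass, so a one-off GC or JIT pause doesn't dominate the number. The pass count of 100 is arbitrary:

                      using System;
                      using System.Diagnostics;

                      class LongerRun
                      {
                          static void Main()
                          {
                              const int size = 10000000;
                              byte[] data = new byte[size];     // big enough to land on the large object heap
                              var sw = new Stopwatch();
                              double best = double.MaxValue;
                              for (int pass = 0; pass < 100; pass++)
                              {
                                  sw.Restart();
                                  for (int i = 0; i < size; i++)
                                      data[i] = 128;
                                  sw.Stop();
                                  double ms = 1000.0 * sw.ElapsedTicks / Stopwatch.Frequency;
                                  if (ms < best) best = ms;
                              }
                              Console.WriteLine("best pass " + best + " ms");
                          }
                      }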

                      Martin Fowler wrote:

                      Any fool can write code that a computer can understand. Good programmers write code that humans can understand.

                      • A AspDotNetDev


                        Lost User
                        #13

                        AspDotNetDev wrote:

                        The DLL contains bytecode, but when (just before, actually) the DLL is run it gets compiled to native code.

                        P-code to native; aight, I won't call it an interpreter again.

                        AspDotNetDev wrote:

                        The garbage collector works in stages. It wouldn't slow down performance in general.

                        It's additional load on the CPU, as is managing AppDomains. The application also contains metadata that the C version does not have, and in general the runtime will reserve 20 MB of memory for your app on startup. Compare some string operations in Delphi to C#; it simply hurts if you're concatenating a lot, but once you learn that strings are immutable and that there's a StringBuilder, things work quite nicely.
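
                        A tiny sketch of the StringBuilder point; the iteration count is arbitrary:

                        using System;
                        using System.Text;

                        class ConcatDemo
                        {
                            static void Main()
                            {
                                // Slow pattern: every += allocates a new string and copies everything built so far.
                                // string s = "";
                                // for (int i = 0; i < 100000; i++)
                                //     s += "Tick " + i;

                                // StringBuilder appends into a growable buffer instead.
                                var sb = new StringBuilder();
                                for (int i = 0; i < 100000; i++)
                                    sb.Append("Tick ").Append(i);
                                Console.WriteLine(sb.Length);
                            }
                        }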

                        Bastard Programmer from Hell :suss:

                        • A AspDotNetDev


                          Lost User
                          #14

                          Two comparable snippets;

                          using System;
                          using System.Collections.Generic;

                          class Program
                          {
                              static List<string> numbersList = new List<string>();
                              static void Main(string[] args)
                              {
                                  int start = Environment.TickCount;
                                  while (Environment.TickCount - start < 5000)
                                  {
                                      numbersList.Add("Tick " + Environment.TickCount);
                                  }
                                  Console.WriteLine(numbersList.Count);
                                  Console.ReadKey();
                              }
                          }
                          

                          procedure Test1();
                          var
                            start: dword;
                            numbersList: TStringList;
                          begin
                            numbersList := TStringList.Create();
                            start := GetTickCount();
                            while GetTickCount() - start < 5000 do
                            begin
                              numbersList.Add('Tick ' + IntToStr(GetTickCount()));
                            end;
                            WriteLn(numbersList.Count);
                            numbersList.Clear;
                            numbersList.Free;
                          end;

                          begin
                            Test1();
                            ReadLn();
                          end.

                          C# managed to create 3.413.266 objects (in release mode, set to x86). The Lazarus TStringList created 5.295.911 items without knowing about my CPU. FWIW, if I convert the example that the TS provided to Delphi, I come out at 43 ms (included for those that want to run their own tests);

                          procedure Test2();
                          const
                            testSize = 10000000;
                          var
                            start: dword;
                            data: array[0..testSize] of byte;
                            i: integer;
                          begin
                            start := GetTickCount();
                            for i := 0 to testSize do
                            begin
                              data[i] := 128;
                            end;
                            WriteLn('Safe time ' + IntToStr(GetTickCount() - start) + ' ms');
                          end;

                          ...and I might want to add that you should measure using timeGetTime(), not GetTickCount(). Wish I had VB6 here to give it a try, in both native and P-code versions. My first boss would disagree with the provided test code, as there's a more efficient version;

                          procedure Test2();
                          const
                            testSize = 10000000;
                          var
                            start: dword;
                            data: array[0..testSize] of byte;
                          begin
                            start := GetTickCount();
                            FillChar(data, SizeOf(data), 128);
                            WriteLn('Safe time ' + IntToStr(GetTickCount() - start) + ' ms');
                          end;

                          This does it in 15 ms. That doesn't mean I'm switching back! .NET does most things in an effective way, despite all the metadata and the wrappers. Plus, it provides a lot of convenience. We've got profilers that work a lot better than the old timeGetTime API, and we've got a lot of goodies that support building an application quickly, with damn good performance.
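
                          For completeness, a C# counterpart to the FillChar version. Array.Fill only exists on newer runtimes (.NET Core 2.0 and later); on the 2010-era framework discussed in this thread you would have to loop or drop to native code:

                          using System;
                          using System.Diagnostics;

                          class FillDemo
                          {
                              static void Main()
                              {
                                  const int size = 10000000;
                                  byte[] data = new byte[size];
                                  var sw = Stopwatch.StartNew();
                                  Array.Fill(data, (byte)128);   // block fill, the rough equivalent of Delphi's FillChar
                                  sw.Stop();
                                  Console.WriteLine("fill time " + (1000.0 * sw.ElapsedTicks / Stopwatch.Frequency) + " ms");
                              }
                          }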

                          Bastard Programmer from Hell :suss:
