C# or C - which is faster?
-
I ran the same two small programs, one in C#.NET and one in C (Bloodshed Dev-C++). The latter took more time than the former (4.7s vs 3.8s). I found this counter-intuitive; I thought code written in C should run faster. Any ideas?

C#:
---
public static void Main() {
    // variables
    int i, i1 = 0;
    int[] ar = new int[1000];
    long l1, l2;

    // first for loop - 1 billion iterations (warm-up)
    for (i = 0; i < 1000000000; i++) {
        ar[i1] = i;
        i1++;
        if (i1 == 1000) i1 = 0;
    }

    // measure time
    l1 = DateTime.Now.Ticks;
    i1 = 0;

    // second for loop - 1 billion iterations
    for (i = 0; i < 1000000000; i++) {
        ar[i1] = i;
        i1++;
        if (i1 == 1000) i1 = 0;
    }

    // measure time
    l2 = DateTime.Now.Ticks;

    // print results (ticks are 100ns units, hence the division by 10000000)
    Console.WriteLine("Time before: {0}", l1);
    Console.WriteLine("Time after: {0}", l2);
    l2 -= l1;
    Console.WriteLine("Time taken in seconds: {0}", (double)l2 / 10000000);
    Console.ReadLine();
}

C:
--
#include <stdio.h>
#include <time.h>

int main(void) {
    // variables
    int i, i1 = 0;
    int ar[1000];
    clock_t c1, c2;

    // first for loop - 1 billion iterations (warm-up)
    for (i = 0; i < 1000000000; i++) {
        ar[i1] = i;
        i1++;
        if (i1 == 1000) i1 = 0;
    }

    // measure time
    c1 = clock();
    i1 = 0;

    // second for loop - 1 billion iterations
    for (i = 0; i < 1000000000; i++) {
        ar[i1] = i;
        i1++;
        if (i1 == 1000) i1 = 0;
    }

    // measure time
    c2 = clock();

    // print results
    printf("Time before: %ld\n", (long)c1);
    printf("Time after: %ld\n", (long)c2);
    c2 -= c1;
    printf("Time taken in seconds: %f\n", (double)c2 / CLOCKS_PER_SEC);
    getchar();
    return 0;
}
-
I ran the following (same) 2 small programs in C#.NET and then in C (Bloodshed Dev C/C++)... [original post snipped]
This ought to mess with you a little bit: NEITHER is faster than the other. It depends entirely on the code you write and the environment it runs under. C++ will be faster for some things, C# for others. RageInTheMachine9532 "...a pungent, ghastly, stinky piece of cheese!" -- The Roaming Gnome
-
I ran the following (same) 2 small programs in C#.NET and then in C (Bloodshed Dev C/C++)... [original post snipped]
I'll add that in this case your use of different development environments could easily be a larger factor than C vs C#. While the relative performance will depend on what you're trying to do, a more meaningful comparison would be to compare C#'s results with your C code built using Visual C++. PS: sharing benchmarks apparently violates the .NET EULA. Don't you feel horrible now. **rolls eyes**
-
This ought to mess with you a little bit: NEITHER is faster than the other... [quoted reply snipped]
-
Maybe, but if you perform the same operation in both, C will be faster because it doesn't use the .NET Framework, so it doesn't need to be compiled from MSIL down to native code first...
TOXCCT >>> GEII power [toxcct][VisualCalc]
-
toxcct wrote: C will be faster as it doesn't use the .NET framework

Don't count on it! The JIT compiler compiles code method by method, on the first call only. After that, a method doesn't need to be recompiled every time it's called. The JIT is also very fast at its job, and it has an advantage the C++ compiler can never have: it knows at runtime exactly which processor it's running on and what its capabilities are, and can generate native code optimized specifically for that processor. That code has the possibility of exceeding the performance of C++ native code generated for a more general range of processors. Also, every C/C++ program out there has its own runtime too; it's just built into the app. Depending on which Windows features you're using, it can pull in the MFC libraries as well. RageInTheMachine9532 "...a pungent, ghastly, stinky piece of cheese!" -- The Roaming Gnome
-
I ran the following (same) 2 small programs in C#.NET and then in C (Bloodshed Dev C/C++)... [original post snipped]
These contrived examples will never accurately reflect the speed of real-world apps, and real-world apps will differ in performance based on how well they are written. A well-designed C# app will probably outperform a poorly written C++ app, although I'd tend to say that C++ will be faster than C# overall. Christian Graus - Microsoft MVP - C++
-
I ran the following (same) 2 small programs in C#.NET and then in C (Bloodshed Dev C/C++)... [original post snipped]
-
Dave Kreskowiak wrote: The JIT compiler compiles code method by method, on the first call only... [quoted reply snipped]
Here's my situation: I have written a routine in .NET which performs repeated iterations over integer arrays of approximately 1000x1000 elements... The total time taken in .NET is around 2s, which is slightly more than acceptable (I have explored every avenue for code optimization in .NET). I was hoping I'd be able to improve the time by writing the method in C and compiling it into a DLL. Any suggestions?
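One route, sketched here with placeholder names (process_grid and its add-a-constant body stand in for your real routine), is to compile a plain C function into a DLL and call it from C# through P/Invoke, passing the 1000x1000 array as a flat row-major int buffer. The matching C# declaration would look something like [DllImport("process_grid.dll")] static extern void process_grid(int[] grid, int rows, int cols, int delta);. Keep in mind the managed-to-native transition has a fixed cost per call, so this only wins if each call does substantial work.

```c
/* process_grid.c - sketch of a routine to compile into a DLL and call
   from C# via P/Invoke. The name and the add-a-constant body are
   placeholders for the real 1000x1000 computation. */
#include <stddef.h>

#ifdef _WIN32
#define EXPORT __declspec(dllexport)
#else
#define EXPORT
#endif

/* C# marshals an int[] as a pointer to a flat row-major buffer. */
EXPORT void process_grid(int *grid, int rows, int cols, int delta) {
    int r, c;
    for (r = 0; r < rows; r++)
        for (c = 0; c < cols; c++)
            grid[(size_t)r * cols + c] += delta;
}
```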
-
I ran the following (same) 2 small programs in C#.NET and then in C (Bloodshed Dev C/C++)... [original post snipped]
Here's my situation: I have written a routine in .NET which performs repeated iterations over integer arrays of approximately 1000x1000 elements... [full post snipped]
-
Dave Kreskowiak wrote: The JIT compiler compiles code method by method, on the first call only... [quoted reply snipped]
Dave Kreskowiak wrote: Plus, the JIT is very fast at its job and has an advantage that the C++ compiler can never have... This code has the possibility of exceeding the performance of the C++-based native code that is generated for a more general range of processors.

Right... except that I have never seen this in my real-world programs. Can you prove it? Can you give me any example where .NET is faster? I can give you a lot of code where C++ is faster: exactly the same algorithm, exactly the same implementation, and C++ is at least 2 times faster. I'm not bothered by C# being slower at all; it has its advantages too. But I just don't like everybody saying C# is not slower when it IS slower. Please show me an example. I'll change my mind and shut up as soon as I can see with my own eyes that C# is faster (or even equal) in general. daniel.
-
Here's my situation: I have written a routine in .NET which performs repeated iterations over integer arrays of approximately 1000x1000 elements... [full post snipped]
-
Dave Kreskowiak wrote: Plus, the JIT is very fast at its job... Right... except that I have never seen this in my real-world programs. Can you give me any example where .NET is faster?... [full reply snipped]
I've got nothing to prove to you. All I said was that the JIT has the possibility of exceeding the performance of C++-based code. I in no way said that it was faster. RageInTheMachine9532 "...a pungent, ghastly, stinky piece of cheese!" -- The Roaming Gnome