Floating Point Calculations - C# vs. C++
-
Dear experts, I'm migrating some code from C++ (Borland Builder) to C#. While comparing the calculations between the two implementations I found, among others:

a.) C++: pow(0.1, 3.0) results in 0.001
b.) C#: Math.Pow(0.1, 3.0) results in 0.0010000000000000002

OK, so far so good. "Power" is not an FPU instruction, so I can imagine the two results being slightly different because of different implementations of the power function. Nevertheless, I made the following experiment:

C++: double test = 0.1 * 0.1 * 0.1; which results in 0.001
C#: double test = 0.1 * 0.1 * 0.1; which again results in 0.0010000000000000002

The latter is surprising, at least for me, because I assumed that both C++ and C# would use the FPU for multiplications. I googled a lot for this, but I'm not able to find an explanation. Do you have an idea? Thank you very much in advance for your comments on this. Regards

[edit] Btw: I also tried _clearfp and _fpreset for C#, but the results remain the same.
It does not solve my Problem, but it answers my question
-
The default precision to which doubles are printed is different in C++ and C#, so the two results might actually be the same; you can't tell from this. They might also genuinely differ, depending on some combination of one program being 32-bit (defaulting to the x87 FPU, with 80-bit computation followed by conversion to 64-bit) and the other 64-bit (defaulting to SSE2 and 64-bit computation), though I'm not sure that actually makes a difference here. If you reinterpret the bits as an int64 and print that, we could be sure.
-
:thumbsup: :thumbsup: :thumbsup: :thumbsup: Thank you very much! You are right: looking at both results as int64 shows that they are exactly the same. Thanks a lot again! Regards

[Edit] :-O More and more I recognize I'm getting older... To my shame, I must confess that the observations described above were made only by looking at the debugger display. I think ten years ago I would also have come up with the idea of comparing the results bitwise.
It does not solve my Problem, but it answers my question
-
-
0x01AA wrote:
I think ten years ago I would also have come up with the idea of comparing the results bitwise
Also, the neatness of the C++ result would have made a bell ring[^]. :laugh: -
Back in the day, I wrote a method called AlmostEqual to compare floating point values to a specified precision. I've found that as long as you don't do math on the value, it will remain as it was set, so given the following:

double x = 0.01d;
double y = 0.01d;

the expression if (x == y) will always evaluate to true. However, if you do this:

x = x * 1.0d;

the expression above will evaluate to false.
".45 ACP - because shooting twice is just silly" - JSOP, 2010
-----
You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010
-----
When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013 -
Makes me think of when I was playing around with APL: no distinction between int and float, no explicit declaration of type; all numerics were 64-bit FP. To avoid problems with values that are conceptually the same but, due to limited precision, may not be (such as 1/3 + 1/3 + 1/3 not necessarily being identical to 1.0), there was a user-settable tolerance variable (if my memory is right, it was called quadFUZZ): in any comparison of numerics, if the difference was less than FUZZ, the values were treated as exactly equal. I believe something similar also exists in other high-level environments, such as Smalltalk.

I was teaching C++ programming for a few years. Any hand-ins where float variables were compared by == to constants or other variables did not pass. My teaching was to consider == an invalid operator for floats - use >, >=, <, <=. And where appropriate, code the APL style in longhand: if (abs(f1-f2) < fuzz) { ... treat as exactly equal.

Of course some students objected: "Why can't we just ...", and I had to explain over and over. One student went as far as to hand in a homework which started with a big block comment, headed by: "This is how REAL PROGRAMMERS would code the solution: ..." and, after some really dirty code, at the end of the comment block: "But this is how our professor forces us to do it:" - and then some clean and readable code, not commented out. So he knew how to behave in a disciplined way, but refused to give up his undisciplined behaviour completely. :-)
-
Thank you for your reply. But I was never trying to do something like a==b; I was comparing the two implementations. Btw: I would never do a==b for double values, instead I do !(a<b || a>b) :) And no, of course I also don't do that :laugh:
It does not solve my Problem, but it answers my question