Well, I have a question there. Maybe I am getting something wrong. I agree these statements hold true if you force the intermediate results into temporary variables, e.g.:
double x1 = x + x;
double x2 = x - x;
then x1 + x2 = Infinity + 0 = Infinity
(or float, alternatively). However, as long as the result is not forced into a double, the value is computed with 80-bit precision, thereby correctly evaluating to 0.0 in all cases given?! From the little I remember of assembly, you push values onto the (80-bit) floating-point stack using fld, then perform floating-point operations such as fadd, fsub and fmul (which expect their arguments at positions st0 and st1 of the FP stack and leave the result in st0), and finally pop the result off the stack using fstp, or fistp for integer targets.
This raises an interesting question about debug vs. release builds: if the compiler optimizes floating-point code, it can hold temporary results in the FP stack instead of spilling them to 64-bit memory slots. That would increase precision, but it would also make the results inconsistent between builds. I haven't encountered the latter, so it probably doesn't happen, but it might be worth checking.

EDIT: I just found an interesting post on the net addressing exactly these issues: http://blogs.msdn.com/davidnotario/archive/2005/08/08/449092.aspx
"Obstacles are those frightening things you see when you take your Eyes off your aim" - Henry Ford