Precision in doubles
-
I suspect everyone has run into this, but I just ran into it. double d1 = 1; When I look at d1 in the debugger, it shows 1.0000001342, or something similar. Now, that is close to 1, but it's not the same as 1. d1 - 1 is then equal to .0000001342, and .0000001342 isn't zero. Suggestions as to how to handle this? How do I get 1.0000000000 rather than 1.0000001342? I know I can use an int for 1, but the same thing occurs with: double d2 = 2.354; the debugger shows d2 as 2.3540000154. Thanks
-
I can't reproduce this on VC++ 2003 .NET SP1
-
It is inherent in how floating point works that many numbers cannot be precisely represented with a finite number of bits. Just as in base 10 a number such as 1/3 needs an infinite number of digits to be written out (0.33333...), take, for example, the number 1/10. Suppose you write
double d1 = 0.1;
- internally this decimal is converted into binary, which (simplifying somewhat; floating point numbers are actually stored in the binary version of scientific notation) is 0.0001100110011001100..., where the bits to the right of the point are worth 1/2, 1/4, 1/8, 1/16, 1/32, etc. Since any actual data type has a finite number of bits (in the case of double precision floating point, 53 bits are available from the first "1"), we can only store an approximation of many values. When using floating point data, you should never test numbers for equality (unless you're certain that the numbers need to be the same sequence of bits); instead, check whether the difference between two numbers is sufficiently small. -
AJR_UK wrote:
It is inherent in how floating point works that many numbers cannot be precisely represented with a finite number of bits
So you're saying 1 can't be precisely represented in a double? :)
-
Bah, smart-alec. I only said "many numbers", I didn't say anything about which numbers, other than the example I gave. 1 (or any other integer until you get big enough that precision becomes an issue) can, of course, be precisely represented in a double. I dunno why Oliver was having problems when using the value 1, but I was addressing the general case. That's my story, and I'm sticking to it ;P
-
AJR_UK wrote:
Bah, smart-alec
Sorry :) Actually your post IMO was excellent and between you and Oliver you had me a bit worried :laugh:. I actually debugged a few examples and made sure! Cheers!
-