Beginner's question: where does the exact double value go?
-
Hello, I have a strange problem, probably because I am lacking some knowledge, but what? When debugging this: double dTryThis = 1.23456789; the debugger's variable watch shows 1.234567889999999 for dTryThis instead of the expected 1.234567890000000! :confused: And that wrong value is then used in further calculations! Is my computer sick, or am I, or what? I am on Visual Studio 2005 C++. Thanks a lot for your help! Martin...
-
Floating point numbers are represented as a fixed part (the mantissa) multiplied by a power of two (the exponent). Both parts of the representation are necessarily binary numbers, i.e. every floating point number is the mantissa scaled by the closest power of two. As such, floating point numbers are almost always an approximation, with small values being closer together than large values (every time you increase the exponent, each bit in the mantissa becomes worth more). What you are seeing is normal behavior for floating point numbers (double or single precision). If you need fixed precision, then you must use integers (with a constant scaling factor), a decimal or currency type, or some other exact numeric data type.
-
Base-10 numbers can't store all fractional values exactly without infinite repetition (1/3, 1/7, etc.). Your computer uses base-2 math internally and only converts to base 10 for output. Base 2 can't store all numbers without repeating either. The thing you need to remember is that different bases have different sets of repeaters. In base 3, 1/3 is written as 0.1 (zero ones and one third), while in base 10, 1/3 is 0.3333333... The base-2/base-10 case that causes the most confusion is that 1/10 is a repeater in base 2. You can work it out by hand if you want to see it with your own eyes. For financial transactions, fixed-point data types or types that store the number in a base-10 format directly are typically used, but they carry a significant performance penalty compared with native base-2 math. When precision identical to base-10 hand work is not needed, as long as the result is good enough, native floating point types are used, and comparisons are made in a way that reflects the rounding issues. Instead of testing equality with
(f1 - f2) == 0
you use
fabs(f1 - f2) < epsilon
where epsilon is a delta smaller than your allowed error tolerance. If you're doing very extensive computations it's sometimes necessary to order them in a way that minimizes total error, but normal applications generally don't need to worry about this. For more detail see the link below. It's probably more detail than you need, but it is the best web reference I'm aware of: http://docs.sun.com/source/806-3568/ncg_goldberg.html -- Rules of thumb should not be taken for the whole hand.
-
Aaaah! Trying for six hours on that issue, then waiting five minutes for your excellent answer! Thank you very much! :-D Martin...