How to get a correct string representation of double?
-
Hi, I have a problem with the proper conversion of a double to its string representation. For example, if I have some double value in the code, say double a = 69000.015; the debugger will show 69000.014999999999 in the debug window, not 69000.015. Generally I need the precision and number of significant digits for the conversion; for sprintf(...) I have to specify the precision and the specifier. How can I get the correct precision of the double value? Is there any solution/classes for this type of conversion? Thanks.
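Here is roughly what I mean, as a minimal sketch (the printed values in the comments assume the usual 64-bit IEEE 754 double):

#include <cstdio>

int main()
{
    double a = 69000.015;
    char buf[64];

    std::sprintf(buf, "%f", a);      // default 6 decimals: "69000.015000"
    std::printf("%s\n", buf);

    std::sprintf(buf, "%.17g", a);   // full precision: "69000.014999999999", as shown in the debugger
    std::printf("%s\n", buf);

    std::sprintf(buf, "%.15g", a);   // 15 significant digits: "69000.015"
    std::printf("%s\n", buf);
    return 0;
}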
-
Hi, there is simply no solution to your problem. Floating point numbers, by their very nature, cannot always represent the intended value. The simplest example is the outcome of 1.0/3.0: humans write it down as 0.333333..., while computers perform the division in binary and get a reasonably accurate quotient (say, off by no more than 1 in the lowest bit position), which is after all an approximation. The binary approximation of 1/3 is something like 1/4 + 1/16 + 1/64 + 1/256 + ..., and it has to stop somewhere since there are only so many bits reserved for the mantissa.
Similar things happen to almost all real numbers; to avoid it, the number must happen to be an integer value, possibly divided by a power of two. Hence 1.0/4.0, 7.0/4.0, 23.0/256.0 etc. can be represented exactly, whereas numbers with a prime factor (other than 2) in the denominator will not be exact, nor will irrational numbers (such as pi, or the square root of 2).
If you know how many decimals (i.e. digits behind the decimal point) are required to get an exact representation, then ask for that number of decimals or fewer. Rounding will occur, and everything will look very natural. If you don't know the number of decimals required and care very much about their correctness, say you start your own financial program or business, then you'd better have a look at the decimal type. It offers a smaller range of numbers, but in some sense better accuracy. :)
Luc Pattyn [Forum Guidelines] [My Articles]
Fixturized forever. :confused:
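To see this in C++, a small sketch (the trailing digits in the comments assume the usual IEEE 754 double and may vary slightly with the runtime's formatting):

#include <cstdio>

int main()
{
    // A sum of powers of two is stored exactly...
    double exact = 23.0 / 256.0;
    std::printf("%.20f\n", exact);   // 0.08984375000000000000

    // ...while most other values are only approximated.
    double third = 1.0 / 3.0;
    std::printf("%.20f\n", third);   // something like 0.33333333333333331483

    double tenth = 0.1;
    std::printf("%.20f\n", tenth);   // something like 0.10000000000000000555
    return 0;
}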
-
No real simple solution due to the way floating point numbers are represented in binary form. You'll always have some odd round off to deal with. You could always try using the Decimal type...
"The clue train passed his station without stopping." - John Simmons / outlaw programmer "Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
-
Thanks Luc for the answer, but unfortunately I'm not a Fortran developer; I do Visual C++, which does not have a decimal type. But 69000.015 is not a periodic double (not like 1/3 = 0.333333...), and the CPU's internal register representation will be some particular corresponding binary value. I'm not a pro in math, but it seems to me that it needs some analysis of the binary representation to get the precision.
-
This page [^] may show the internal representation of a decimal number as a double. It turns out that 69000.015 is represented as 40F0D8803D70A3D, i.e. 1.0528566741943360 as significand and 16 as exponent. If you put the above number into the Windows calculator you'll get: 1.0528566741943360 * 2^16 = 1.0528566741943360 * 65536 = 69000.0150000000040960, probably not exactly what you expected. :)
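For what it's worth, a small C++ sketch along these lines (assuming the usual 64-bit IEEE 754 double) prints the bit pattern and the significand/exponent decomposition described above:

#include <cmath>
#include <cstdio>
#include <cstring>

int main()
{
    double a = 69000.015;

    // Copy the raw IEEE 754 bit pattern into an integer and print it as hex.
    unsigned long long bits = 0;
    std::memcpy(&bits, &a, sizeof bits);
    std::printf("bits        = %016llX\n", bits);

    // Decompose a == m * 2^e with m in [0.5, 1), then scale m into [1, 2).
    int e = 0;
    double m = std::frexp(a, &e);
    std::printf("significand = %.16f\n", m * 2.0);   // 1.0528566741943...
    std::printf("exponent    = %d\n", e - 1);        // 16

    // The stored value, shown with more digits than the literal suggests.
    std::printf("value       = %.17g\n", a);         // 69000.014999999999
    return 0;
}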
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler. -- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong. -- Iain Clarke
[My articles]
-
You won't always have round off to deal with. Binary fractions like 1/2, 1/4, 3/4, 1/8, etc. can be exactly represented as a double. But you're right that most numbers can't (assuming that the word "most" can be used of an infinite set).
The reason we put up with the inexactness is that exactness doesn't exist in the physical world. If a piece of wood is "2.4 meters long", it's really "2.4 meters give or take a millimeter" (which would be written as "2.400" to emphasize this). So, then, what difference does it make if your computer represents the length as 2.399999999999999911182158029987476766109466552734375 m, off by a mere 8.88e-17 m, when the physical accuracy is nowhere near that good?
The one exception is money. If something costs $3.99, it costs exactly $3.99, not $3.9900000000000002. If the sales tax is 8.25%, it's exactly 8.25%, not 8.2500000000000004%. And, because we use decimal currency, Decimal classes are often used for monetary amounts. You could use it as a general-purpose number class, but: (1) It's not a panacea for rounding error. You still have to deal with 1/3 + 2/3 != 1. (2) Due to the lack of hardware support, it's much slower than binary floats.
Therefore, I wouldn't just blindly recommend "use Decimal". If your only complaint with float or double is the string representation, then just use a smaller precision in sprintf or string.Format or the equivalent in your favorite language. (15 digits in the 'g' format will usually get the job done.)
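As an illustration of that last point, a minimal sprintf sketch (the outputs in the comments assume the usual IEEE 754 double):

#include <cstdio>

int main()
{
    double a = 69000.015;
    double b = 0.1 + 0.2;            // stored as roughly 0.30000000000000004
    char buf[64];

    std::sprintf(buf, "%.15g", a);   // "69000.015", the noise digits are rounded away
    std::printf("%s\n", buf);

    std::sprintf(buf, "%.15g", b);   // "0.3"
    std::printf("%s\n", buf);

    std::sprintf(buf, "%.17g", b);   // "0.30000000000000004", full round-trip precision
    std::printf("%s\n", buf);
    return 0;
}
-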
Hi, I have a string with the value "111733394601234567094987654321". Now I want to divide it by 636. When I convert it to a double I get 1.1173339460123456E+29, but these two values are different, so finally I get the wrong output. Can anyone help me? Thanks, Prathap
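Here is roughly what I'm doing, as a sketch (the number of digits that survive assumes a 64-bit double):

#include <cstdio>
#include <cstdlib>

int main()
{
    const char *text = "111733394601234567094987654321";   // 30 significant digits

    // A double only keeps about 15-17 significant decimal digits,
    // so the tail of the number is lost as soon as it is converted.
    double value = std::strtod(text, NULL);
    std::printf("%.17g\n", value);     // something like 1.1173339460123456e+29, not the original number

    double result = value / 636.0;     // the quotient inherits the same loss
    std::printf("%.17g\n", result);
    return 0;
}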