Why is float to double conversion uniquely terrible?

Eric Lynch
#1

UPDATE #2: After going back to my original test number, and decoding it (literally) bit-by-bit, in each of the formats, before and after conversion, I'm baffled. The float->double conversion does exactly what I would do. This makes it even more difficult to explain the outcome of my earlier test programs. The conversion itself seems accurate, but I'm clearly missing something. So, for now, I'm abandoning this post. I'll come back and update it when I answer my own question. Though, that will probably be after I write an article explaining the ridiculous trivia of exactly what C# does with each of the floating-point formats. After which, I've really got to get a life :)

UPDATE #1: After further consideration, I reassert that there is something uniquely, and inexplicably, terrible about the float->double conversion! Some have suggested that this was something inherent in how floating-point numbers are stored, and not something uniquely terrible about float->double conversions. After a little bit of convincing, I concede that my original description did not exclude this possibility. While I intentionally chose a number with six decimal digits of precision (the limit of IEEE 754 binary32), perhaps unintentional bias led me to choose numbers that were particularly susceptible to this issue.

So, to disprove my original premise, I wrote a new test program (included at the end of this post). This program generated random numbers with between one and six digits of precision. To avoid bias towards any one of the three floating-point formats, it calculates the "ideal" text from an integral value, using only string manipulation to format it as floating-point. It then counts the number of times the ToString for the assigned values (decimal, double, and float) and the converted values (decimal->double, decimal->float, double->decimal, double->float, float->decimal, and float->double) differs from this ideal value. After running the program for 1 million cycles, the results were as follows:

decimal: 0
double: 0
float: 0
decimal->double: 0
decimal->float: 0
double->decimal: 0
double->float: 0
float->decimal: 0
float->double: 750741

I reassert that there is something uniquely, and inexplicably, terrible about the float->double conversion!
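The original program isn't reproduced here, but a minimal sketch of the comparison it describes could look like the following. This is my own reconstruction, not Eric's code; the helper names are mine, it assumes a culture with '.' as the decimal separator, and the exact failure counts will depend on the runtime's default ToString behaviour.

using System;
using System.Collections.Generic;

class ConversionTest
{
    static void Main()
    {
        var random = new Random(12345);
        var failures = new Dictionary<string, int>
        {
            ["decimal"] = 0, ["double"] = 0, ["float"] = 0,
            ["decimal->double"] = 0, ["decimal->float"] = 0,
            ["double->decimal"] = 0, ["double->float"] = 0,
            ["float->decimal"] = 0, ["float->double"] = 0,
        };

        for (int i = 0; i < 1_000_000; i++)
        {
            // 1..6 significant digits, with a random number of them after the decimal point.
            int digits = random.Next(1, 7);
            int scale = random.Next(0, digits + 1);
            long integral = random.Next(1, (int)Math.Pow(10, digits));
            string ideal = ToIdealText(integral, scale);

            decimal m = decimal.Parse(ideal);
            double d = double.Parse(ideal);
            float f = float.Parse(ideal);

            Check(failures, "decimal", m.ToString(), ideal);
            Check(failures, "double", d.ToString(), ideal);
            Check(failures, "float", f.ToString(), ideal);
            Check(failures, "decimal->double", ((double)m).ToString(), ideal);
            Check(failures, "decimal->float", ((float)m).ToString(), ideal);
            Check(failures, "double->decimal", ((decimal)d).ToString(), ideal);
            Check(failures, "double->float", ((float)d).ToString(), ideal);
            Check(failures, "float->decimal", ((decimal)f).ToString(), ideal);
            Check(failures, "float->double", ((double)f).ToString(), ideal);
        }

        foreach (var pair in failures)
            Console.WriteLine($"{pair.Key}: {pair.Value}");
    }

    // Build the "ideal" text from an integer using only string manipulation:
    // insert a decimal point 'scale' digits from the right, then drop trailing zeros.
    static string ToIdealText(long integral, int scale)
    {
        string s = integral.ToString().PadLeft(scale + 1, '0');
        if (scale == 0) return s;
        string text = s.Substring(0, s.Length - scale) + "." + s.Substring(s.Length - scale);
        return text.TrimEnd('0').TrimEnd('.');
    }

    static void Check(Dictionary<string, int> failures, string key, string actual, string ideal)
    {
        if (actual != ideal) failures[key]++;
    }
}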

In reply to Eric Lynch (#1):

Lost User
#2

Float (and Double) types are not directly convertible to Decimal because they are held as binary values. See What Every Computer Scientist Should Know About Floating-Point Arithmetic[^] for the full explanation. Unless you specifically need floating-point types (e.g. for statistical analysis), you should steer well clear of them. For financial applications, always use integer or decimal types.
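As a quick illustration of that point (a sketch of mine, not from the linked article): binary floating-point cannot represent most decimal fractions exactly, while decimal stores a scaled base-10 integer and can.

using System;

class BinaryVersusDecimal
{
    static void Main()
    {
        // 0.1 and 0.2 have no exact binary representation, so the error is
        // baked in before any arithmetic happens.
        double sum = 0.1 + 0.2;
        Console.WriteLine(sum == 0.3);          // False
        Console.WriteLine(sum.ToString("G17")); // 0.30000000000000004

        // decimal literals are exact, so the comparison behaves as expected.
        decimal dsum = 0.1m + 0.2m;
        Console.WriteLine(dsum == 0.3m);        // True
        Console.WriteLine(dsum);                // 0.3
    }
}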

In reply to Eric Lynch (#1):

Peter_in_2780
#3

        To add to Richard's comments, you might get a surprise if you print the values to more digits of precision. There are literally billions of distinct numbers which print as 123.456 to 3 decimal places.
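For example (my own illustration, not part of the original post), asking ToString for more digits than the default reveals what the float closest to 123.456 actually is:

using System;

class MoreDigits
{
    static void Main()
    {
        // The float nearest to 123.456 is not exactly 123.456; the default
        // ToString simply rounds to few enough digits to hide that.
        float f = 123.456f;
        Console.WriteLine(f);                            // 123.456
        Console.WriteLine(f.ToString("G9"));             // 123.456001
        Console.WriteLine(((double)f).ToString("G17"));  // 123.45600128173828
    }
}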

        Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012

In reply to Lost User (#2):

Eric Lynch
#4

UPDATE: Seems I'm simply wrong on my point below. I was misled by some less-than-clear wording in one part of the specification. That said, I still find it odd that float->double is the only one of the six possible C# floating-point conversions that is consistently less accurate in its apparent result. I tried a bunch of different values with this same consistent outcome. Something seems wrong to me.

I do understand your point. It is a great generalized warning for those unwilling to learn the specifics of the precise floating-point data type they are using. If I were using something other than an IEEE 754 binary32 data type (float), or using more than six significant decimal digits, I would completely agree with you. However, since neither of these is the case here, I respectfully and completely disagree. The behavior is simply inconsistent with the IEEE 754 specification.

The specification of binary32 provides 23 explicit bits for the significand. This provides accuracy for a minimum of six significant decimal digits. The specification explicitly details that the representation of this number of significant decimal digits (six or fewer) must be exactly accurate. In the case of binary64, which has 52 explicit bits for the significand, the specification requires accuracy to a minimum of 15 significant decimal digits.
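For what it's worth, those digit counts can be sanity-checked with a one-liner (my illustration, using the commonly cited round-trip rule floor((p - 1) * log10(2)), where p is the significand precision including the implicit leading bit):

using System;

class DigitsOfPrecision
{
    static void Main()
    {
        // Decimal digits guaranteed to survive a decimal -> binary -> decimal
        // round trip for a binary format with p significand bits.
        Console.WriteLine($"binary32 (p=24): {DecimalDigits(24)} digits"); // 6
        Console.WriteLine($"binary64 (p=53): {DecimalDigits(53)} digits"); // 15
    }

    static int DecimalDigits(int significandBits) =>
        (int)Math.Floor((significandBits - 1) * Math.Log10(2));
}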

In reply to Peter_in_2780 (#3):

Eric Lynch
#5

Based on my misinterpretation of the IEEE 754 specification, which I foolishly shared in a response to Richard, it seems today is a day full of surprises for me :) That said, I would not expect that converting from IEEE 754 binary32 to binary64 would make things apparently worse. I tried a bunch of different values. Of the six possible floating-point conversions, this one consistently yields the worst apparent outcome. If anything, I would have expected converting binary64 to binary32 to have the worst apparent outcome.

In reply to Eric Lynch (#4):

Lost User
#6

              It depends on the number you start with. Since accuracy is not guaranteed with floats and doubles, you can get inconsistencies, because the number will often be an approximation. Floating point's strength is (was) its ability to represent very large or very small numbers with reasonable, but not absolute, accuracy. For most business applications it should, as I suggested earlier, be avoided like the plague.

              Eric Lynch wrote:

              those unwilling to learn the specifics of the precise floating-point data type they are using.

              Or in many cases (see QA) those who are still being taught to use it.

In reply to Lost User (#6):

Eric Lynch
#7

Yeah, we cross-posted. I had already retracted my...I'll call it a "point"? Regrettably, I misinterpreted the IEEE 754 specification. I can only wish that I had realized my mistake before posting :) That said, I still suspect there is something less than ideal occurring with the float->double conversion. I'm working on some code to more rigorously explore the issue. I'll either prove or disprove my suspicions. I'll come back later and post whatever I find.

In reply to Eric Lynch (#7):

Lost User
#8

                  I just tried something similar in C++ and it produces the correct results. Now I am :confused:

In reply to Lost User (#2):

Eric Lynch
#9

I finished my new tests and updated my original post to include the results. I stand by my original premise: there is something uniquely, and inexplicably, terrible about float->double conversions. The test randomly generated 1 million numbers with between one and six digits of precision. Of the six possible floating-point conversions, only the ToString for float->double ever differed from the "expected" text. It did so a whopping 750,741 times. I believe this is what they call "statistically significant" :)

In reply to Peter_in_2780 (#3):

Eric Lynch
#10

                      I conducted some more rigorous experiments. This explanation doesn't match the experimental evidence. I now feel somewhat certain, based on evidence from a million randomly generated numbers, that there is something uniquely, and inexplicably, terrible about float->double conversions. I qualify it with "somewhat", because I refuse to be completely wrong, about the same thing, twice in a single day :)

In reply to Lost User (#2):

Eric Lynch
#11

OK, this is sadly nearing obsessional :( I've gone through the effort of decoding every dang bit in the IEEE 754 formats. Near as I can tell, the float->double conversion is doing absolutely what I would expect of it. Though, it is still yielding a result that has the appearance of being worse. I'm currently baffled...maybe double.ToString() is the culprit? Maybe it's something else entirely? I'm going to give this a whole lot more thought tomorrow...at a decent hour. Though, since it has no practical impact on anything I'm actually doing, I should probably let it go. Regrettably, intellectual curiosity has a firm hold of me at this point :)

Below you'll find the output from my latest program, where I enter the text "123.456". "Single" / "Double" are conversions from the results of decimal.Parse. "Single (direct)" / "Double (direct)" are the results of float.Parse / double.Parse. The remainder are the indicated conversions of the results of a float.Parse.

                        Single: Sign=0, Exponent=6 (10000101), Significand=7793017 (11101101110100101111001)
                        123.456
                        Single (direct): Sign=0, Exponent=6 (10000101), Significand=7793017 (11101101110100101111001)
                        123.456
                        Double: Sign=0, Exponent=6 (10000000101), Significand=4183844053827191 (1110110111010010111100011010100111111011111001110111)
                        123.456
                        Double (direct): Sign=0, Exponent=6 (10000000101), Significand=4183844144021504 (1110110111010010111100100000000000000000000000000000)
                        123.456001281738
                        float->double: Sign=0, Exponent=6 (10000000101), Significand=4183844144021504 (1110110111010010111100100000000000000000000000000000)
                        123.456001281738
                        float->decimal->double: Sign=0, Exponent=6 (10000000101), Significand=4183844053827191 (1110110111010010111100011010100111111011111001110111)
                        123.456

                        After this much effort, I guess I'll eventually be forced to write an article on every useless bit of trivia I can find about all of these formats :)
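For anyone wanting to reproduce bit dumps like the ones above, here is a minimal sketch of one way to do it (my code, not the program that produced the output above; it relies on BitConverter.SingleToInt32Bits / DoubleToInt64Bits, which are available on modern .NET):

using System;

class FloatBits
{
    static void Main()
    {
        float f = 123.456f;
        double d = f;                 // the float->double conversion in question

        DumpSingle("float", f);       // e.g. Sign=0, Exponent=6, Significand=7793017
        DumpDouble("float->double", d);
    }

    // Decode an IEEE 754 binary32 value into sign, unbiased exponent,
    // and the 23 explicit significand bits.
    static void DumpSingle(string label, float value)
    {
        int bits = BitConverter.SingleToInt32Bits(value);
        int sign = (bits >> 31) & 1;
        int exponent = ((bits >> 23) & 0xFF) - 127;   // remove the bias
        int significand = bits & 0x7FFFFF;            // 23 explicit bits
        Console.WriteLine($"{label}: Sign={sign}, Exponent={exponent}, " +
            $"Significand={significand} ({Convert.ToString(significand, 2).PadLeft(23, '0')})");
    }

    // Decode an IEEE 754 binary64 value into sign, unbiased exponent,
    // and the 52 explicit significand bits.
    static void DumpDouble(string label, double value)
    {
        long bits = BitConverter.DoubleToInt64Bits(value);
        long sign = (bits >> 63) & 1;
        long exponent = ((bits >> 52) & 0x7FF) - 1023; // remove the bias
        long significand = bits & 0xFFFFFFFFFFFFF;     // 52 explicit bits
        Console.WriteLine($"{label}: Sign={sign}, Exponent={exponent}, " +
            $"Significand={significand} ({Convert.ToString(significand, 2).PadLeft(52, '0')})");
    }
}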

In reply to Eric Lynch (#1):

Richard Deeming
#12

                          This discussion from 2011 looks relevant:

                          c# - Convert float to double loses precision but not via ToString - Stack Overflow[^]:

                          What you get in the more precise representation (past a certain point) is just garbage. If you were to cast it back to a float FROM a double, you would have the exact same precision as you did before.
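A small sketch of that point (mine, not taken from the linked answer): the float->double widening preserves the value exactly, and casting back recovers the identical float; only the double's default ToString surfaces the extra, meaningless digits.

using System;

class RoundTrip
{
    static void Main()
    {
        float f = 123.456f;
        double d = f;                        // widening conversion: the value is preserved exactly

        Console.WriteLine(f);                // 123.456
        Console.WriteLine(d);                // extra digits appear, e.g. 123.456001281738 or
                                             // 123.45600128173828, depending on the runtime
        Console.WriteLine((float)d == f);    // True: casting back gives the identical float
        Console.WriteLine(d.ToString("G7")); // 123.456: limit output to float's precision
    }
}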


                          "These people looked deep within my soul and assigned me a number based on the order in which I joined." - Homer

                          "These people looked deep within my soul and assigned me a number based on the order in which I joined" - Homer
