I hate floating point operations

  • T Tim Smith

    Read any book on the issues of floating point math and it will tell you that floating point addition is inherently more imprecise than floating point multiplication. For example, this is bad; you accumulate small error all the time:

    float x = 10;
    for (int i = 0; i < 1000; i++)
        x += 0.05f;

    This is much better, but can still have a problem with the addition:

    float x = 10;
    for (int i = 0; i < 1000; i++)
    {
        float x1 = x + (i * 0.05f);
    }

    Tim Smith I'm going to patent thought. I have yet to see any prior art.
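    To see the difference concretely, here is a minimal standalone sketch (the loop bound and values follow the post above; the multiplicative variant computes the result in one rounded step):

    #include <cstdio>

    int main()
    {
        // Accumulating: each += rounds, and the rounding errors pile up.
        float accumulated = 10.0f;
        for (int i = 0; i < 1000; i++)
            accumulated += 0.05f;

        // Multiplying: one rounded multiply and one rounded add in total.
        float computed = 10.0f + 1000 * 0.05f;

        printf("accumulated = %f\n", accumulated); // typically drifts away from 60
        printf("computed    = %f\n", computed);    // 60.000000
        return 0;
    }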

    KaRl (#27)

    Tim Smith wrote:

    it will tell you that floating point addition is inherently more imprecise than floating point multiplication

    It works only if one of the terms of the multiplication is an integer. :~ I understand it's like uncertainty calculation: for addition you sum the absolute uncertainties, for multiplication you sum the relative ones.


    Where do you expect us to go when the bombs fall?

    Fold with us! ¤ flickr

    • K KaRl

      <Using MFC>

      double dValue = atof("0.1");
      ASSERT(dValue == 0.1);
      double dSecondValue = (1 + dValue + dValue + dValue + dValue);
      ASSERT(dSecondValue == 1.4); // Crash


      Where do you expect us to go when the bombs fall?

      Fold with us! ¤ flickr

      Bassam Abdul Baki (#28)

      If you've ever studied the harmonic series (1 + 1/2 + 1/3 + ...), you'll know that it diverges very slowly. However, if you use any calculator to sum it up, you'll think that it converges. When I was still a pre-engineering student, before I switched to Math, we were taught to start any summation from the small end and add the bigger numbers last so that the round-off error is minimized. In this case, you need to add the 1 last; but not before subtracting the integer part, multiplying the decimal part by a power of ten, rounding it to see whether it is zero or one depending on what the value exactly is, and then doing the compare.

      Since everybody here has been pulling your leg and giving you grief, I'll keep my jokes to myself. Yeah, I don't have any anyway. :) It's annoying as a developer that you actually have to write a check to make sure your numbers are what they should be. Makes you wonder why we use machines for calculations and why corporations aren't going broke because of them. Hmmm, maybe they round to their benefit and that's why people are going broke and they're doing very well. :rolleyes: I'm in the wrong business.


      "This perpetual motion machine she made is a joke. It just keeps going faster and faster. Lisa, get in here! In this house, we obey the laws of thermodynamics!" - Homer Simpson Web - Blog - RSS - Math - LinkedIn - BM

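      The smallest-first trick is easy to demonstrate; this sketch (illustrative, with an arbitrary term count) sums the same series in both directions:

      #include <cstdio>

      int main()
      {
          const int N = 10000000;

          // Forward: the large partial sum eventually absorbs the tiny late terms.
          float forward = 0.0f;
          for (int i = 1; i <= N; i++)
              forward += 1.0f / i;

          // Backward: the small terms are accumulated first, so less is lost.
          float backward = 0.0f;
          for (int i = N; i >= 1; i--)
              backward += 1.0f / i;

          printf("forward  = %f\n", forward);  // noticeably short of the true value
          printf("backward = %f\n", backward); // much closer to ln(N) + 0.5772
          return 0;
      }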
      • P PIEBALDconsult

        A) Never hate B) Don't expect them to do what they can't

        123 0 (#29)

        PIEBALDconsult wrote:

        Never hate

        Better advice from Amos (5:15): "Hate the evil, and love the good."

        • C Chris Maunder

          I really do think the compiler should throw an error when you try to compare floating point values for equality.

          cheers, Chris Maunder

          CodeProject.com : C++ MVP

          123 0 (#30)

          Chris Maunder wrote:

          I really do think the compiler should throw an error when you try to compare floating point values for equality.

          It seems to me that a data type where the concept of equal values is either undefined or can't be practically determined is clearly "half-baked".

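          For reference, the usual workaround is a tolerance-based comparison instead of operator==. A minimal sketch; the epsilon values here are illustrative assumptions and must be tuned per problem:

          #include <algorithm>
          #include <cassert>
          #include <cmath>
          #include <cstdlib>

          // Equal if a and b differ by less than relEps times their magnitude,
          // with absEps guarding comparisons against values near zero.
          bool nearlyEqual(double a, double b,
                           double relEps = 1e-9, double absEps = 1e-12)
          {
              double diff = std::fabs(a - b);
              if (diff <= absEps)
                  return true;
              return diff <= relEps * std::max(std::fabs(a), std::fabs(b));
          }

          int main()
          {
              double dValue = std::atof("0.1");
              double dSecondValue = 1 + dValue + dValue + dValue + dValue;
              assert(nearlyEqual(dSecondValue, 1.4)); // passes, unlike operator==
              return 0;
          }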
          • K KaRl


            123 0 (#31)

            K(arl) wrote:

            I hate floating point operations

            So do we. Any data type where "equality of values" is ill-defined is clearly half-baked. Someone should have put more thought and fewer transistors into the matter.

            • 1 123 0


              Chris Maunder (#32)

              You think maybe we should instead store the value as "one-point-five"? Or even "three-point-one-four-one-five-nine..."?

              cheers, Chris Maunder

              CodeProject.com : C++ MVP

              • C Chris Maunder


                123 0 (#33)

                Chris Maunder wrote:

                You think maybe we should instead store the value as "one-point-five"? Or even "three-point-one-four-one-five-nine..."?

                Ha, ha. Actually, Plain English uses ratios for both rational numbers and for reasonable approximations of irrational numbers (curious name, don't you think?). For example, we typically use 355/113 for pi. It has been our experience that rounding errors can be more easily minimized with the ratio approach than with floating-point numbers. Plain English also supports scaled integers which, on a 32-bit machine, are sufficient for all but the most demanding problems and which, on a 64-bit machine, should suffice for nearly everything else. In either case, the concept of "equal values" can be defined and implemented with rigor, consistency, and reliability. These are desirable things, yes? And since adopting this approach eliminates an entire processor from the machine, it appears to be a significantly more efficient approach, as well. Not to mention less noisy.

                Our objections to "floating point" or "real" numbers - besides those stated above - are two. First, they suggest a "continuous" universe, rather than a discrete one. Yet we know that electrons are never between shells and that that famous arrow really does get where it's going. See [^] for a digital view of the universe. Secondly, as popular as the metric system may be in many countries, we find it much less effective in everyday life than the English system. If you're not hungry enough for a whole piece of pie, for example, do you typically ask for a half, or a tenth? Do you double an estimate you're not sure about, or multiply it by ten? In other words, people - who have not been trained otherwise - naturally think in halves and wholes, not tenths and hundredths.

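                A minimal sketch of how such a ratio type might look (illustrative only; this is not Plain English's actual implementation, and overflow and sign handling are omitted):

                #include <cstdio>
                #include <numeric> // std::gcd, C++17

                // A rational number kept in lowest terms, so equality is exact:
                // two ratios are equal iff numerators and denominators match.
                struct Ratio
                {
                    long long num, den;

                    Ratio(long long n, long long d) : num(n), den(d)
                    {
                        long long g = std::gcd(n, d); // gcd of absolute values
                        if (g) { num /= g; den /= g; }
                    }

                    Ratio operator+(const Ratio& o) const
                    { return Ratio(num * o.den + o.num * den, den * o.den); }

                    bool operator==(const Ratio& o) const
                    { return num == o.num && den == o.den; }
                };

                int main()
                {
                    Ratio tenth(1, 10);
                    Ratio sum = Ratio(1, 1) + tenth + tenth + tenth + tenth;
                    printf("%lld/%lld\n", sum.num, sum.den); // 7/5, i.e. exactly 1.4
                    if (sum == Ratio(14, 10))                // 14/10 normalizes to 7/5
                        printf("exactly equal\n");
                    return 0;
                }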
                • 1 123 0


                  Chris Maunder (#34)

                  There are an awful lot of irrational numbers out there. I think I would rather have my planes and bridges built using a floating point approximation of PI rather than 355/113.

                  The Grand Negus wrote:

                  In either case, the concept of "equal values" can be defined and implemented with rigor, consistency, and reliability.

                  It's a pity these concepts don't actually appear in real life. You postulate that the universe isn't continuous but is discrete, implying you believe in quantum theory, yet the basis of quantum theory itself is that there is an inherent uncertainty in all measurements.

                  The Grand Negus wrote:

                  Secondly, as popular as the metric system may be in many countries, we find it much less effective in everyday life than the English system.

                  That's because you live in a country that uses the Imperial system. I buy food that is weighed in grams, buy petrol and milk in litres, and need to know how many kilometres there are till my turnoff. If I ever talk in halves or quarters I mean it in a vague way ("half a loaf of bread, please") and there is no need for accuracy.

                  cheers, Chris Maunder

                  CodeProject.com : C++ MVP

                  • C Chris Maunder


                    123 0 (#35)

                    Chris Maunder wrote:

                    I think I would rather have my planes and bridges built using a floating point approximation of PI rather than 355/113

                    Moot point. There's a ratio for whatever degree of precision you desire. But if you're willing to trust your planes and bridges to floating point calculations, why not your money?

                    Chris Maunder wrote:

                    It's a pity these concepts don't actually appear in real life.

                    The concept I was referring to was "the equality of values in a given data type". Which can be achieved - with enough rigor, consistency, and reliability for practical use - in spite of any quantum uncertainty.

                    Chris Maunder wrote:

                    If I ever talk in halves or quarters I mean it in a vague way ("half a loaf of bread, please") and there is no need for accuracy.

                    But do you ever say, "A tenth of a loaf of bread, please"? I think not, because when a whole loaf is too much, your next thought - even though you (apparently) were not brought up on the Imperial system - is half a loaf, not 1/10. It's only human. Another curious example is how Americans, even though our dollars are divided into 100ths, consistently and persistently use phrases like "a half dollar" or "a quarter" - instead of the metric "fifty cent piece" or "twenty-five cent piece".

                    • 1 123 0


                      Chris Maunder (#36)

                      The Grand Negus wrote:

                      Moot point. There's a ratio for whatever degree of precision you desire.

                      Are you seriously suggesting that all numerical calculations, e.g. numerical modelling, be done by first generating ratios for every floating point value needed, to a degree of accuracy that is as good as or better than current floating point accuracy, and then propagating those fractions throughout the entire set of calculations? Why? What will it save you? You're not going to gain accuracy, because you've already made an approximation. You're going to have all sorts of problems with overflow. And in the end the value you give back to the person modelling, say, forces in a wingspan isn't going to be 982349587/6834567 Nm^2, it's going to be 143.73. That's an awfully large roundabout you're taking to come up with the same answer.

                      The Grand Negus wrote:

                      But do you ever say, "A tenth of a loaf of bread, please"? I think not,

                      No, but every time I go to the butcher's I ask for 200g of sliced ham. If your computations-using-fractions work for you, then perfect. You may consider floating point storage a bad solution, and you've offered an alternative, which is commendable. But I honestly do not think it's practical. Not for things such as forecasting the weather or performing amazing feats of engineering.

                      cheers, Chris Maunder

                      CodeProject.com : C++ MVP

                      • C Chris Maunder


                        123 0 (#37)

                        Chris Maunder wrote:

                        You may consider floating point storage a bad solution,

                        (1) We think it's ill-defined - "equal" should be a reasonable operator with any numeric data type. (2) We think it's limited in applicability, even in cases where one would think it would work - like money. (3) We think it's expensive - an entire second processor to do the job.

                        Chris Maunder wrote:

                        If your computations-using-fractions works for you then perfect.

                        They do, in many cases. In other cases, we used scaled integers (which you seem to have forgotten about). And we really believe that 64-bit scaled integers are a much better solution for most problems than floating point. (1) you can tell when they're equal; (2) you can use them everywhere, even for money; and (3) they don't require a separate processor. If that isn't enough for you, then I guess Occam is dead in more ways than one...

                        Chris Maunder wrote:

                        But I honestly do not think it's practical. Not for the things such as forecasting weather or perform amazing feats of engineering.

                        Regarding the weather. In our view, this problem, like the Traveling Salesman Problem, is not effectively solved using a computational approach. A school child with a tiny bit of training can beat the most robust weather-prediction system with just a glance at the maps from preceding days (and without real numbers at all); this problem is better solved using human-like techniques. Regarding "feats of engineering". Almost all of the early satellites were programmed in FORTH with scaled integers. Isn't a satellite a "feat of engineering"? And what modern skyscraper, submarine, or jet plane couldn't be built with 64-bit scaled integers?

                        Chris Maunder wrote:

                        No, but every time I go to the butchers I ask for 200g of sliced ham.

                        Probably force of habit. I, of course, say, "a pound" or "a half pound" or "a quarter pound". But I suspect you don't ever say, "192 grams" or "214 grams", illustrating the point that the unit of measure forced on you from your youth is much too specific for the job - too much accuracy is an inconvenience. Y'know, Chris, we expect a closed-minded, defensive posture from some of the others here, but I really thought we'd find a bit more understanding "at the top". Everybody knows floating point representation is

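                        A scaled-integer sketch of the money case (illustrative; the type name and the two-decimal-place scale are assumptions, not anyone's actual library):

                        #include <cstdio>

                        // Money as an integer count of cents: addition is exact
                        // and operator== is well-defined, unlike with double.
                        struct Cents
                        {
                            long long value; // 140 means $1.40
                        };

                        int main()
                        {
                            Cents dime{10};                    // $0.10, stored exactly
                            Cents total{100 + 4 * dime.value}; // $1.00 + 4 * $0.10
                            if (total.value == 140)            // exact comparison, no epsilon
                                printf("$%lld.%02lld\n", total.value / 100, total.value % 100);
                            return 0;
                        }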
                        • 1 123 0

                          Chris Maunder wrote:

                          You may consider floating point storage a bad solution,

                          (1) We think it's ill-defined - "equal" should be a reasonable operator with any numeric data type. (2) We think it's limited in applicability, even in cases where one would think it would work - like money. (3) We think it's expensive - an entire second processor to do the job.

                          Chris Maunder wrote:

                          If your computations-using-fractions works for you then perfect.

                          They do, in many cases. In other cases, we used scaled integers (which you seem to have forgotten about). And we really believe that 64-bit scaled integers are a much better solution for most problems than floating point. (1) you can tell when they're equal; (2) you can use them everywhere, even for money; and (3) they don't require a separate processor. If that isn't enough for you, then I guess Occam is dead in more ways than one...

                          Chris Maunder wrote:

                          But I honestly do not think it's practical. Not for the things such as forecasting weather or perform amazing feats of engineering.

                          Regarding the weather. In our view, this problem, like the Traveling Salesman Problem, is not effectively solved using a computational approach. A school child with a tiny bit of training can beat the most robust weather-prediction system with just a glance at the maps from preceding days (and without real numbers at all); this problem is better solved using human-like techniques. Regarding "feats of engineering". Almost all of the early satellites were programmed in FORTH with scaled integers. Isn't a satellite a "feat of engineering"? And what modern skyscraper, submarine, or jet plane couldn't be build with 64-bit scaled integers?

                          Chris Maunder wrote:

                          No, but every time I go to the butchers I ask for 200g of sliced ham.

                          Probably force of habit. I, of course, say, "a pound" or "a half pound" or "a quarter pound". But I suspect you don't ever say, "192 grams" or "214 grams", illustrating the point that the unit of measure forced on you from your youth is much too specific for the job - too much accuracy is an inconvenience. Y'know, Chris, we expect a closed-minded, defensive posture from some of the others here, but I really thought we'd find a bit more understanding "at the top". Everybody knows floating point representation is

                          C Offline
                          C Offline
                          Chris Maunder
                          wrote on last edited by
                          #38

                          The Grand Negus wrote:

                          In other cases, we used scaled integers (which you seem to have forgotten about).

                            Nope, not forgotten them. Just concentrating on the floating point vs. fractional representation discussion.

                          The Grand Negus wrote:

                          Regarding the weather. In our view, this problem, like the Traveling Salesman Problem, is not effectively solved using a computational approach. A school child with a tiny bit of training can beat the most robust weather-prediction system with just a glance at the maps from preceding days (and without real numbers at all); this problem is better solved using human-like techniques.

                          For tomorrow, maybe. Not for 3 days ahead or 5 or 7 days. I would hazard a guess that the finite element modelling I was doing in my post-grad work that involved tens of thousands of elements over hundreds of thousands of timesteps would probably be beyond a school child and would also be beyond fractional numerical representation. If you feel problems such as forecasting the weather are not suitable problems for computers then what is the alternative?

                          The Grand Negus wrote:

                          Everybody knows that serious flaws have been found in production versions of floating point chips!

                          The pentium bug? And? They are just as likely to find a bug in the integer processing unit.

                          The Grand Negus wrote:

                          And everybody knows that it takes an entire supplemental processor to implement the feature.

                            Again - and? We have supplemental graphics processors and integer processors. Machines have supplemental vector processing units for processing arrays of data in a single step. How does this contribute to floating point representation being a bad thing?

                          The Grand Negus wrote:

                          Y'know, Chris, we expect a closed-minded, defensive posture from some of the others here, but I really thought we'd find a bit more understanding "at the top".

                          This is a debate. In a debate you take a stand on a point of view and argue it. Throwing out a "you're being small minded" shot is a bit defensive isn't it? You proposed an alternative and so I have chosen to take the opposing view that your solution is not workable and that floating point is better. I didn't say floating point is the ultimate answer. I'm fully aware of its problems. And

                          • C Chris Maunder


                            123 0 (#39)

                            Chris Maunder wrote:

                            Just concentrating on the floating point vs. Fractional representation discussion.

                            Which is unproductive, relative to this discussion, because I didn't suggest that we all dump floating point in favor of ratios alone; I suggested that we dump floating point in favor of a combination of ratios and/or scaled integers, whichever is more appropriate for the job at hand.

                            Chris Maunder wrote:

                            For tomorrow, maybe. Not for 3 days ahead or 5 or 7 days.

                            I question this. Show me the maps and I'll give you a 7-day forecast that will rival the computer. And I won't use any real numbers to do it.

                            Chris Maunder wrote:

                            I would hazard a guess that the finite element modelling I was doing in my post-grad work that involved tens of thousands of elements over hundreds of thousands of timesteps would probably be beyond a school child

                            Of course it would be. But recall the story of the Ford engineer who was struggling to calculate the volume of an oddly-shaped fuel tank when Henry came by and - just before firing the guy - filled the tank with water and poured the contents into a graduated container. Grounds for dismissal? Doing something the hard way.

                            Chris Maunder wrote:

                            If you feel problems such as forecasting the weather are not suitable problems for computers then what is the alternative?

                            I didn't say they weren't suitable problems for computers, I said that "computational solutions" were not the most effective for some of these tasks. Instead, we need to teach our computers to think like humans. We're working right now, for example, on a non-computational, human-like approach to the Traveling Salesman Problem that we intend to publish in the next couple of weeks. The algorithm can be described on one page, can be easily understood by a child, and the implementation requires less than 100 lines of Plain English code. Yet it rivals, in both speed and accuracy, routines devised by teams of PhDs that are described in virtually unreadable 20-page dissertations.

                            Chris Maunder wrote:

                            The pentium bug? And? They are just as likely to find a bug in the integer processing unit.

                            Not so. Integer processing units are (1) simpler to design, (2) simpler to manufacture, and (3) simpler to test; it is therefore

                            • K Kochise

                              Try this; this is what I use in all of my code:

                              double dValue = atof("0.1");
                              double dTest = 0.1;
                              ASSERT
                              (
                              ((*((LONGLONG*)&dValue))&0xFFFFFFFFFFFFFF00)
                              == ((*((LONGLONG*)&dTest)) &0xFFFFFFFFFFFFFF00)
                              );

                              double dSecondValue = (1 + dValue + dValue + dValue + dValue);
                              double dTest2 = 1.4;
                              ASSERT
                              (
                              (*((LONGLONG*)&dSecondValue)&0xFFFFFFFFFFFFFF00)
                              == (*((LONGLONG*)&dTest2) &0xFFFFFFFFFFFFFF00)
                              ); // *NO* Crash

                              By reducing the mantissa's precision (skipping the trailing bits) through an integer cast (much like a union over a double), you can do some pretty decent comparisons with no headache... By using float (4 bytes) instead, you can simplify things to:

                              float dValue = (float)atof("0.1");
                              float dTest = 0.1f;
                              ASSERT
                              (
                              ((*((int*)&dValue))&0xFFFFFFF0)
                              == ((*((int*)&dTest)) &0xFFFFFFF0)
                              );

                              float dSecondValue = (1 + dValue + dValue + dValue + dValue);
                              float dTest2 = 1.4f;
                              ASSERT
                              (
                              (*((int*)&dSecondValue)&0xFFFFFFF0)
                              == (*((int*)&dTest2) &0xFFFFFFF0)
                              ); // *NO* Crash

                              The problem comes mostly because the compiler code that converts double dTest = 0.1 is *NOT* the same as the code within atof that converts double dValue = atof("0.1"). So you don't get a bitwise exact match of the value, only a close approximation. By using the cast technique, you: 1. can control over how many bits you want to perform the comparison; 2. do a full integer comparison, which is far faster than loading floating point registers to do the same; 3. etc... So define the following macros:

                              #define DCMP(x,y) ((*((LONGLONG*)&x))&0xFFFFFFFFFFFFFF00)==((*((LONGLONG*)&y))&0xFFFFFFFFFFFFFF00)
                              #define FCMP(x,y) (*((int*)&x)&0xFFFFFFF0)==(*((int*)&y)&0xFFFFFFF0)

                              Use DCMP on double, and FCMP on float... But beware, you cannot do this:

                              ASSERT(DCMP(atof("0.1"), 0.1)); // atof returns a value which has to be stored first...

                              The following code works:

                              #define FCMP(x,y) (*((int*)&x)&0xFFFFF000)==(*((int*)&y)&0xFFFFF000)

                              float dSecondValue = (float)atof("1.4"); // RAW : 0x3FB332DF
                              float dTest2 = 1.39999f; // RAW : 0x3FB33333, last 12 bits are different, so don't compare them
                              ASSERT(FCMP(dSecondValue,dTest2)); // *NO* Crash

                              Kochise EDIT: you may have used a memcmp approach, which is similar in functionality, but you can only test on byte boundaries (the unit of comparison is the byte) and x86 is little endian, so you start comparing the differing bytes first,

                              KaRl (#40)

                              If I understand correctly, you put the 'epsilon' in the filtering bytes, and this way you save the call to fabs. True, it only works under certain assumptions, but it goes faster. Very interesting indeed :)


                              Where do you expect us to go when the bombs fall?

                              Fold with us! ¤ flickr

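                              The mask trick is essentially a coarse version of comparing distance in units in the last place (ULPs). A sketch of that variant, assuming IEEE-754 floats and finite, same-sign operands; memcpy avoids the pointer-cast aliasing problem:

                              #include <cstdint>
                              #include <cstdio>
                              #include <cstdlib>
                              #include <cstring>

                              // Distance in units in the last place between two floats.
                              // memcpy is the portable way to reinterpret the bits.
                              int32_t ulpDistance(float a, float b)
                              {
                                  int32_t ia, ib;
                                  std::memcpy(&ia, &a, sizeof ia);
                                  std::memcpy(&ib, &b, sizeof ib);
                                  return std::abs(ia - ib); // finite, same-sign values only
                              }

                              int main()
                              {
                                  float x = (float)atof("1.4");
                                  float y = 1.4f;
                                  printf("%d ulps apart\n", (int)ulpDistance(x, y)); // 0 or a few
                                  return 0;
                              }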
                              • 1 123 0


                                Shog9 0 (#41)

                                The Grand Negus wrote:

                                Any data type where "equality of values" is ill-defined is clearly half-baked.

                                The degree of precision needed is a problem-specific variable. It doesn't matter what datatype you end up using to represent your numbers internally: if you haven't agreed on a consistent precision (and an associated definition of equality) for your project, you are going to have problems. The same problems you'd have without any computers involved at all... People get into trouble using floating-point variables for the same reasons they get into trouble using integer variables or trying to out-run the police in their pickup trucks: incorrect assumptions about the capabilities of their tools.

                                ---- Do you see what i see? Why do we live like this? Is it because it's true... ...That ignorance is bliss?

                                • K KaRl


                                  Shog9 0 (#42)

                                  Me too. I much prefer fixed-point operations.... right up until the range of values exceeds the possible precision. Then i hate them even more than floating point ops...

                                  ---- Do you see what i see? Why do we live like this? Is it because it's true... ...That ignorance is bliss?

                                  • K KaRl


                                    peterchen (#43)

                                    Floating point values stump the brightest. A week or two ago, I had an argument with a quite bright student, which I only barely won :cool:


                                    Developers, Developers, Developers, Developers, Developers, Developers, Velopers, Develprs, Developers!
                                    We are a big screwed up dysfunctional psychotic happy family - some more screwed up, others more happy, but everybody's psychotic joint venture definition of CP
                                    Linkify!|Fold With Us!

                                    • K KaRl


                                      Tim Smith (#44)

                                      WTF are you talking about? Please read about floating point addition and multiplication: http://en.wikipedia.org/wiki/Floating_point#Floating_point_arithmetic_operations[^] Even though both suffer from rounding problems, multiplication doesn't suffer from "cancellation or absorption problems". I have run into many instances where addition-based algorithms had huge precision problems that were eliminated by recoding the software to be more multiplication-based.

                                      Tim Smith I'm going to patent thought. I have yet to see any prior art.

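                                      Absorption is easy to trigger in isolation; a standalone illustration (values chosen only to make the effect visible):

                                      #include <cstdio>

                                      int main()
                                      {
                                          // float carries ~7 significant decimal digits, so next to 1e8
                                          // the addend 0.05 is below the rounding step and vanishes.
                                          float big = 1.0e8f;
                                          float sum = big + 0.05f;
                                          printf("%s\n", sum == big ? "0.05 was absorbed" : "sum changed");

                                          // The multiplicative form keeps the small value's information:
                                          // one rounded multiply, relative error on the order of 1 ulp.
                                          float scaled = 0.05f * 1000.0f;
                                          printf("%f\n", scaled); // ~50.000000
                                          return 0;
                                      }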
                                      • K Kochise

                                        My macro can be of great help if you know where you put your feet, e.g. when dealing with a set of strictly positive numbers, or of strictly negative numbers, without mixing the two. However, the test only works with 0xFF... mask values padded with 0, not like your 0xFFFFFEFF example. I think you meant 0xFFFFFE00, which is correct :) Kochise PS: If I remember right, there is a 'magic trick' explained in an article on CP which explains how to cast double to float and back using only integer operations; it works pretty well and fast, and also deals with the sign...

                                        In Code we trust !

                                        Tim Smith (#45)

                                        No, I said what I meant. I gave you two hex representations of two almost equal floating point numbers that your system fails to detect. I also pointed out a large number of other problems. Even if you just limit your algorithm to positive numbers (0x000000FF and 0x00000100, for example), your algorithm fails.

                                        Tim Smith I'm going to patent thought. I have yet to see any prior art.

                                        • 1 123 0


                                          Tim Smith (#46)

                                          Please write a paper on your perfect number system that can properly represent imaginary numbers and I am sure you will be a millionaire.

                                          Tim Smith I'm going to patent thought. I have yet to see any prior art.
