I hate floating point operations
-
The Grand Negus wrote:
In other cases, we used scaled integers (which you seem to have forgotten about).
Nope, not forgotten them. Just concentrating on the floating point vs. Fractional representation discussion.
The Grand Negus wrote:
Regarding the weather. In our view, this problem, like the Traveling Salesman Problem, is not effectively solved using a computational approach. A school child with a tiny bit of training can beat the most robust weather-prediction system with just a glance at the maps from preceding days (and without real numbers at all); this problem is better solved using human-like techniques.
For tomorrow, maybe. Not for 3 days ahead, or 5, or 7 days. I would hazard a guess that the finite element modelling I was doing in my post-grad work, which involved tens of thousands of elements over hundreds of thousands of timesteps, would probably be beyond a school child and would also be beyond fractional numerical representation. If you feel problems such as forecasting the weather are not suitable problems for computers, then what is the alternative?
The Grand Negus wrote:
Everybody knows that serious flaws have been found in production versions of floating point chips!
The Pentium bug? And? They are just as likely to find a bug in the integer processing unit.
The Grand Negus wrote:
And everybody knows that it takes an entire supplemental processor to implement the feature.
Again - and? We have supplemental graphics processors and integer processors. Machines have supplemental vector processing units for processing arrays of data in a single step. How does this contribute to floating point representation being a bad thing?
The Grand Negus wrote:
Y'know, Chris, we expect a closed-minded, defensive posture from some of the others here, but I really thought we'd find a bit more understanding "at the top".
This is a debate. In a debate you take a stand on a point of view and argue it. Throwing out a "you're being small-minded" shot is a bit defensive, isn't it? You proposed an alternative, and so I have chosen to take the opposing view: that your solution is not workable and that floating point is better. I didn't say floating point is the ultimate answer. I'm fully aware of its problems. And
Chris Maunder wrote:
Just concentrating on the floating point vs. Fractional representation discussion.
Which is unproductive, relative to this discussion, because I didn't suggest that we all dump floating point in favor of ratios alone; I suggested that we dump floating point in favor of a combination of ratios and/or scaled integers, whichever is more appropriate for the job at hand.
Chris Maunder wrote:
For tomorrow, maybe. Not for 3 days ahead or 5 or 7 days.
I question this. Show me the maps and I'll give you a 7-day forecast that will rival the computer. And I won't use any real numbers to do it.
Chris Maunder wrote:
I would hazard a guess that the finite element modelling I was doing in my post-grad work that involved tens of thousands of elements over hundreds of thousands of timesteps would probably be beyond a school child
Of course it would be. But recall the story of the Ford engineer who was struggling to calculate the volume of an oddly-shaped fuel tank when Henry came by and - just before firing the guy - filled the tank with water and poured the contents into a graduated container. Grounds for dismissal? Doing something the hard way.
Chris Maunder wrote:
If you feel problems such as forecasting the weather are not suitable problems for computers then what is the alternative?
I didn't say they weren't suitable problems for computers, I said that "computational solutions" were not the most effective for some of these tasks. Instead, we need to teach our computers to think like humans. We're working right now, for example, on a non-computational, human-like approach to the Traveling Salesman Problem that we intend to publish in the next couple of weeks. The algorithm can be described on one page, can be easily understood by a child, and the implementation requires less than 100 lines of Plain English code. Yet it rivals, in both speed and accuracy, routines devised by teams of PhDs that are described in virtually unreadable 20-page dissertations.
Chris Maunder wrote:
The pentium bug? And? They are just as likely to find a bug in the integer processing unit.
Not so. Integer processing units are (1) simpler to design, (2) simpler to manufacture, and (3) simpler to test; it is therefore
-
Try this; it's what I use in all of my code:

double dValue = atof("0.1");
double dTest = 0.1;
ASSERT
(
   ((*((LONGLONG*)&dValue))&0xFFFFFFFFFFFFFF00)
== ((*((LONGLONG*)&dTest)) &0xFFFFFFFFFFFFFF00)
);

double dSecondValue = (1 + dValue + dValue + dValue + dValue);
double dTest2 = 1.4;
ASSERT
(
   (*((LONGLONG*)&dSecondValue)&0xFFFFFFFFFFFFFF00)
== (*((LONGLONG*)&dTest2)      &0xFFFFFFFFFFFFFF00)
); // *NO* crash

By reducing the mantissa's precision (skipping the last bits) through an integer cast (much like a union over a double), you can do some pretty decent comparisons with no headache... By using float (4 bytes) instead, you can simplify things to:

float dValue = atof("0.1");
float dTest = 0.1;
ASSERT
(
   ((*((int*)&dValue))&0xFFFFFFF0)
== ((*((int*)&dTest)) &0xFFFFFFF0)
);

float dSecondValue = (1 + dValue + dValue + dValue + dValue);
float dTest2 = 1.4;
ASSERT
(
   (*((int*)&dSecondValue)&0xFFFFFFF0)
== (*((int*)&dTest2)      &0xFFFFFFF0)
); // *NO* crash

The problem comes mostly from the fact that the compiler code which converts

double dTest = 0.1;

is *NOT* the same as the code inside atof which converts

double dValue = atof("0.1");

so you don't get a bitwise exact match of the value, only a close approximation. By using the cast technique, you: 1- can control how many bits are used in the comparison; 2- do a full integer comparison, which is far faster than loading the floating point registers to do the same; 3- etc... So define the following macros:

#define DCMP(x,y) ((*((LONGLONG*)&x))&0xFFFFFFFFFFFFFF00)==((*((LONGLONG*)&y))&0xFFFFFFFFFFFFFF00)
#define FCMP(x,y) (*((int*)&x)&0xFFFFFFF0)==(*((int*)&y)&0xFFFFFFF0)

Use DCMP on doubles and FCMP on floats... But beware, you cannot do this:

ASSERT(DCMP(atof("0.1"),0.1)); // atof returns a value which has to be stored first...

The following code works:

#define FCMP(x,y) (*((int*)&x)&0xFFFFF000)==(*((int*)&y)&0xFFFFF000)
float dSecondValue = atof("1.4"); // RAW : 0x3FB332DF
float dTest2 = 1.39999;           // RAW : 0x3FB33333, the last 12 bits differ, so don't compare them
ASSERT(FCMP(dSecondValue,dTest2)); // *NO* crash

Kochise

EDIT : you could have used a memcmp approach, which is similar in functionality, but you can only test on byte boundaries (the unit of comparison is a byte), and x86 is little endian, so you would be comparing the bytes that differ (the least significant ones) first.

If I understand well, you put the 'epsilon' in the filtering bits, and that way you save the call to fabs. True, it only works under certain assumptions, but it is faster. Very interesting indeed :)
Where do you expect us to go when the bombs fall?
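As a side note on the trick above: casting a double's address to a LONGLONG pointer technically breaks the strict-aliasing rules of newer C++ compilers, so here is a small sketch of the same masked-bits comparison done with memcpy instead. It is only my own illustration under that assumption, not Kochise's macro, and BitsAlmostEqual is a hypothetical helper name:

#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Hypothetical helper (not the macro from the post above): compare two doubles
// after discarding the lowest 'ignoredBits' bits of their IEEE-754 bit patterns.
bool BitsAlmostEqual(double a, double b, int ignoredBits = 8)
{
    uint64_t ua, ub;
    std::memcpy(&ua, &a, sizeof ua);   // bit-copy instead of a pointer cast
    std::memcpy(&ub, &b, sizeof ub);
    const uint64_t mask = ~uint64_t(0) << ignoredBits;
    return (ua & mask) == (ub & mask);
}

int main()
{
    double dValue = atof("0.1");
    double dSecondValue = 1 + dValue + dValue + dValue + dValue;
    assert(BitsAlmostEqual(dSecondValue, 1.4));   // passes where an exact == would fail
    return 0;
}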
-
K(arl) wrote:
I hate floating point operations
So do we. Any data type where "equality of values" is ill-defined is clearly half-baked. Someone should have put more thought and fewer transistors into the matter.
The Grand Negus wrote:
Any data type where "equality of values" is ill-defined is clearly half-baked.
The degree of precision needed is a problem-specific variable. It doesn't matter what datatype you end up using to represent your numbers internally: if you haven't agreed on a consistent precision (and an associated definition of equality) for your project, you are going to have problems. The same problems you'd have without any computers involved at all... People get into trouble using floating-point variables for the same reasons they get into trouble using integer variables or trying to out-run the police in their pickup trucks: incorrect assumptions about the capabilities of their tools.
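To make that concrete, here is a minimal sketch of what "agreeing on a precision" might look like in code. It is only an illustration, not something from this thread; kTolerance and NearlyEqual are made-up names, and the tolerance value itself is a per-project decision:

#include <algorithm>
#include <cmath>
#include <cstdlib>

// Hypothetical project-wide tolerance, agreed on once and used everywhere.
const double kTolerance = 1e-9;

// Two values are "equal" if they differ by less than the agreed tolerance,
// scaled by the magnitude of the larger operand (relative comparison).
bool NearlyEqual(double a, double b, double tol = kTolerance)
{
    double diff  = std::fabs(a - b);
    double scale = std::max(std::fabs(a), std::fabs(b));
    return diff <= tol * std::max(scale, 1.0);
}

int main()
{
    double d = atof("0.1");
    double sum = 1 + d + d + d + d;
    return NearlyEqual(sum, 1.4) ? 0 : 1;   // 0: equal at the agreed precision
}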
-
<Using MFC>
double dValue = atof("0.1"); ASSERT(dValue == 0.1); double dSecondValue = (1 + dValue + dValue + dValue + dValue); ASSERT(dSecondValue == 1.4); // Crash
Where do you expect us to go when the bombs fall?
Me too. I much prefer fixed-point operations.... right up until the range of values exceeds the possible precision. Then I hate them even more than floating point ops...
-
<Using MFC>
double dValue = atof("0.1"); ASSERT(dValue == 0.1); double dSecondValue = (1 + dValue + dValue + dValue + dValue); ASSERT(dSecondValue == 1.4); // Crash
Where do you expect us to go when the bombs fall?
Floating point values stump the brightest. A week or two ago, I had an argument with a quite bright student; I only barely won :cool:
Developers, Developers, Developers, Developers, Developers, Developers, Velopers, Develprs, Developers!
We are a big screwed up dysfunctional psychotic happy family - some more screwed up, others more happy, but everybody's psychotic joint venture definition of CP
Linkify!|Fold With Us! -
Tim Smith wrote:
it will tell you that floating point addition is inherently more imprecise than floating point multiplication
It works only if one of the terms of the multiplication is an integer. :~ I understand it's like uncertainty calculation: for addition you sum the absolute uncertainties (e.g. (10 ± 0.1) + (5 ± 0.1) = 15 ± 0.2), for multiplication you sum the relative ones (e.g. (10 ± 1%) × (5 ± 1%) ≈ 50 ± 2%).
Where do you expect us to go when the bombs fall?
WTF are you talking about? Please read about floating point addition and multiplication. http://en.wikipedia.org/wiki/Floating_point#Floating_point_arithmetic_operations[^] Even though both suffer from rounding problems, multiplication doesn't suffer from "cancellation or absorption problems". I have run into many instances where addition-based algorithms had huge precision problems that were eliminated by recoding the software to be more multiplication-based.
Tim Smith I'm going to patent thought. I have yet to see any prior art.
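The "absorption" part is easy to reproduce at home; the snippet below is a generic illustration (mine, not Tim's), assuming ordinary IEEE-754 doubles:

#include <cstdio>

int main()
{
    // Absorption: a term far smaller than 1 ULP of the running sum simply vanishes.
    double big = 1e16;        // 1 ULP here is 2.0
    double sum = big;
    for (int i = 0; i < 1000; ++i)
        sum += 1.0;           // each 1.0 is absorbed; sum never changes
    printf("%.1f\n", sum - big);              // prints 0.0, not 1000.0

    // The multiplication-based formulation keeps the information.
    printf("%.1f\n", big + 1.0 * 1000 - big); // prints 1000.0
    return 0;
}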
-
My macro can be of great help if you know where you are putting your feet, e.g. when dealing with a set of strictly positive numbers, or a set of strictly negative numbers, without mixing the two. However, the test case only works with 0xFF... values padded with 0, not like your 0xFFFFFEFF example. I think you wanted to say 0xFFFFFE00, which is correct :) Kochise PS : If I remember right, there is a 'magical trick' explained in an article on CP which explains how to cast double to float and back using only integer operations; it works pretty well and fast, and also deals with the sign...
In Code we trust !
No, I said what I meant. I gave you two hex representations of two almost equal floating point numbers that your system fails to detect. I also pointed out a large number of other problems. Even if you just limit your algorithm to positive numbers (0x000000FF and 0x00000100 for example), your algorithm fails.
Tim Smith I'm going to patent thought. I have yet to see any prior art.
-
<Using MFC>
double dValue = atof("0.1"); ASSERT(dValue == 0.1); double dSecondValue = (1 + dValue + dValue + dValue + dValue); ASSERT(dSecondValue == 1.4); // Crash
Where do you expect us to go when the bombs fall?
Yeah - floating point numbers should be called "approximate" to remind us not to use them when we want exact figures.
'--8<------------------------ Ex Datis: Duncan Jones Merrion Computing Ltd
-
No, I said what I meant. I gave you two hex representations of two almost equal floating point numbers that your system fails to detect. I also pointed out a large number of other problems. Even if you just limit your algorithm to positive numbers (0x000000FF and 0x00000100 for example), your algorithm fails.
Tim Smith I'm going to patent thought. I have yet to see any prior art.
Oh, OK, what you provided were RAW floating point numbers (I'm used to seeing them in their RAW format, and it didn't strike my eye):

0xFFFFFF00
0xFFFFFEFF

I think my macro should work on them:

#define FCMP(x,y) (*((int*)&x)&0xFFFFF800)==(*((int*)&y)&0xFFFFF800)
float dSecondValue; *((int*)&dSecondValue) = 0xFFFFFF00; // RAW : 0xFFFFFF00
float dTest2;       *((int*)&dTest2)       = 0xFFFFFEFF; // RAW : 0xFFFFFEFF, the last 11 bits differ, so don't compare them -> 0xFFFFF800
ASSERT(FCMP(dSecondValue,dTest2)); // *NO* crash

I just tested it, and my macro works really well, even though what you provided are not numbers but QNANs. But let me tell you, WHAT THE F... why would my macro have to be useful for testing QNANs? These are not numbers and should not be used! You should throw an error instead, and catch it in an ASSERT if you want. But from me to you, your example is just there to try to find a flaw in my trick, which remains only a trick, and to show everybody how bad I am at finding solutions. Bad move... Kochise
In Code we trust !
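One caveat worth noting (my addition, not Kochise's): a pure bit comparison will cheerfully report two identical NaN patterns as "equal", whereas IEEE-754 defines a NaN as unequal even to itself. A sketch that wants to stay honest about that has to check for NaNs before the masked compare; MaskedEqualNoNan is a hypothetical name:

#include <cmath>
#include <cstdint>
#include <cstring>

// Hypothetical wrapper: reject NaNs first, then fall back to the masked bit compare.
bool MaskedEqualNoNan(float a, float b, uint32_t mask = 0xFFFFF800u)
{
    if (std::isnan(a) || std::isnan(b))
        return false;                  // a NaN never equals anything, per IEEE-754
    uint32_t ua, ub;
    std::memcpy(&ua, &a, sizeof ua);
    std::memcpy(&ub, &b, sizeof ub);
    return (ua & mask) == (ub & mask);
}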
-
I may have oversimplified. The case was more like the following:

double dTime = 0.;
double dT = atof(<some value read in a file>);
double dFinal = atof(<some value read in a file>);
do {
    ...
    dTime += dT;
    ...
} while (dTime < dFinal);

A loop iteration was missing because of the 'epsilon' induced by atof.
Where do you expect us to go when the bombs fall?
The way you put it now makes your irritation more understandable. But this is still something you'd find in a standard textbook covering floating point arithmetic. The way to solve the above problem would be to eliminate the series of additions
dTime += dT;
and instead have a loop variable that you multiply with dT:

for (i=0; i<iterations; i++) dTime = i*dT;
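A minimal sketch of the difference, with made-up values rather than the ones K(arl) read from his file: the accumulating version drifts a little further from the exact result on every pass, while the multiplied version suffers only a single rounding:

#include <cstdio>
#include <cstdlib>

int main()
{
    double dT = atof("0.1");           // stand-in for the value read from the file
    const int iterations = 1000000;

    double accumulated = 0.0;
    for (int i = 0; i < iterations; ++i)
        accumulated += dT;             // rounding error grows with every addition

    double multiplied = iterations * dT;   // one multiplication, one rounding

    printf("accumulated: %.17g\n", accumulated);  // noticeably off from 100000
    printf("multiplied : %.17g\n", multiplied);   // essentially 100000
    return 0;
}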
-
<Using MFC>
double dValue = atof("0.1"); ASSERT(dValue == 0.1); double dSecondValue = (1 + dValue + dValue + dValue + dValue); ASSERT(dSecondValue == 1.4); // Crash
Where do you expect us to go when the bombs fall?
When I was writing a tax calculation app many years ago, I wrote a function called AlmostEqual that looked something like this:

bool AlmostEqual(double nVal1, double nVal2, int nPrecision)
{
    CString sVal1;
    CString sVal2;
    nPrecision = _min(16, nPrecision);
    sVal1.Format("%.*lf", nPrecision, nVal1);
    sVal2.Format("%.*lf", nPrecision, nVal2);
    return (sVal1 == sVal2);
}

We had a need to check for equality at different precisions depending on where in the calculation cycle we were. I'm sure you can come up with many other ways to do the same thing, but this was fast and worked very well.
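For anyone without MFC at hand, the same idea can be sketched with snprintf in place of CString::Format; this is only my translation of the approach, not the original tax-app code:

#include <cstdio>
#include <cstring>

// Hypothetical portable variant: compare the decimal renderings of both values
// at the requested number of digits after the decimal point.
bool AlmostEqual(double val1, double val2, int precision)
{
    char s1[512], s2[512];
    if (precision > 16) precision = 16;        // a double carries ~16 significant digits
    snprintf(s1, sizeof s1, "%.*f", precision, val1);
    snprintf(s2, sizeof s2, "%.*f", precision, val2);
    return strcmp(s1, s2) == 0;
}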
"Why don't you tie a kerosene-soaked rag around your ankles so the ants won't climb up and eat your candy ass..." - Dale Earnhardt, 1997
-----
"...the staggering layers of obscenity in your statement make it a work of art on so many levels." - Jason Jystad, 10/26/2001 -
Chris Maunder wrote:
You may consider floating point storage a bad solution,
(1) We think it's ill-defined - "equal" should be a reasonable operator with any numeric data type. (2) We think it's limited in applicability, even in cases where one would think it would work - like money. (3) We think it's expensive - an entire second processor to do the job.
Chris Maunder wrote:
If your computations-using-fractions works for you then perfect.
They do, in many cases. In other cases, we used scaled integers (which you seem to have forgotten about). And we really believe that 64-bit scaled integers are a much better solution for most problems than floating point. (1) you can tell when they're equal; (2) you can use them everywhere, even for money; and (3) they don't require a separate processor. If that isn't enough for you, then I guess Occam is dead in more ways than one...
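To put the scaled-integer idea in concrete terms (my own sketch, not something from the post): money kept as a 64-bit count of cents adds exactly and compares exactly, which is precisely the failing case of the floating point example at the top of this thread:

#include <cassert>
#include <cstdint>

int main()
{
    // Amounts stored as cents, i.e. scaled by 100.
    int64_t priceCents = 10;       // $0.10
    int64_t totalCents = 100;      // $1.00
    for (int i = 0; i < 4; ++i)
        totalCents += priceCents;  // exact integer arithmetic, no rounding

    assert(totalCents == 140);     // $1.40, and "==" means exactly what it says
    return 0;
}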
Chris Maunder wrote:
But I honestly do not think it's practical. Not for the things such as forecasting weather or perform amazing feats of engineering.
Regarding the weather. In our view, this problem, like the Traveling Salesman Problem, is not effectively solved using a computational approach. A school child with a tiny bit of training can beat the most robust weather-prediction system with just a glance at the maps from preceding days (and without real numbers at all); this problem is better solved using human-like techniques. Regarding "feats of engineering". Almost all of the early satellites were programmed in FORTH with scaled integers. Isn't a satellite a "feat of engineering"? And what modern skyscraper, submarine, or jet plane couldn't be built with 64-bit scaled integers?
Chris Maunder wrote:
No, but every time I go to the butchers I ask for 200g of sliced ham.
Probably force of habit. I, of course, say, "a pound" or "a half pound" or "a quarter pound". But I suspect you don't ever say, "192 grams" or "214 grams", illustrating the point that the unit of measure forced on you from your youth is much too specific for the job - too much accuracy is an inconvenience. Y'know, Chris, we expect a closed-minded, defensive posture from some of the others here, but I really thought we'd find a bit more understanding "at the top". Everybody knows floating point representation is
The Grand Negus wrote:
Probably force of habit. I, of course, say, "a pound" or "a half pound" or "a quarter pound". But I suspect you don't ever say, "192 grams" or "214 grams",
In Austria, they traditionally use the term "dekagram", or just "deka" (being 10 grams), to overcome the problem that a gram is awfully small. The same with length: we use centimeters or even decimeters, because it's more convenient. And the rest is just education: imperial measures are by no means better. With your bread example, anything smaller than "half a loaf" would be "%NUMBER% slices, please!" anyway.
"We trained hard, but it seemed that every time we were beginning to form up into teams we would be reorganised. I was to learn later in life that we tend to meet any new situation by reorganising: and a wonderful method it can be for creating the illusion of progress, while producing confusion, inefficiency and demoralisation." -- Caius Petronius, Roman Consul, 66 A.D.
-
Anyone who dares to equality-compare floating point values with literals probably doesn't have an understanding of basic computer architecture. :) /ravi
There used to be a warning message that stated 'equality comparisons between floating point values may not be meaningful.' or something like that. Of course, that was a long time ago on a Fortran compiler... ;) Didn't this come up a month or so ago???
-
WTF are you talking about? Please read about floating addition and multiplication. http://en.wikipedia.org/wiki/Floating_point#Floating_point_arithmetic_operations[^] Even though both suffer from rounding problems, multiplication doesn't suffer from "cancellation or absorption problems". I have run into many instances where addition based algorithms had huge precision problems that were eliminated by recoding the software to be more multiplication based.
Tim Smith I'm going to patent thought. I have yet to see any prior art.
Tim Smith wrote:
WTF are you talking about?
It was about rounding precision, but it doesn't matter; I don't seem to be able to make myself understood in this thread :sigh:
Tim Smith wrote:
http://en.wikipedia.org/wiki/Floating_point#Floating_point_arithmetic_operations[^]
Thanks for the link.
Where do you expect us to go when the bombs fall?
-
Yeah - floating point numbers should be called "approximate" to remind us not to use them when we want exact figures.
'--8<------------------------ Ex Datis: Duncan Jones Merrion Computing Ltd
Ah, yes, "approximate" and "doubly approximate". Then maybe David St. Hubbins' girlfriend's statement "you should do it in doubly" might make some sense.
-
There used to be a warning message that stated 'equality comparisions between floating point values may not be meaningful.' or something like that. of course that was a long time ago on a Fortran compiler...;) Didn't this come up a month or so ago???
rollei35guy wrote:
that was a long time ago on a Fortran compiler
It can be put in the 'positive points about fortran' column... That makes two, with the efficient math libraries (matrices were well handled IIRC) :)
Where do you expect us to go when the bombs fall?
-
rollei35guy wrote:
that was a long time ago on a Fortran compiler
It can be put in the 'positive points about fortran' column... That makes two, with the efficient math libraries (matrices were well handled IIRC) :)
Where do you expect us to go when the bombs fall?
K(arl) wrote:
It can be put in the 'positive points about fortran' column... That makes two, with the efficient math libraries (matrices were well handled IIRC)
:laugh: