What's y'all's favorite way of dealing with floating point precision?
-
Sometimes I do wish JavaScript had better types like that. I can push a precision to about 9 or 10 in JavaScript before the storable value becomes too small to be worthwhile. Good enough for kiddie stuff at least. But yeah, also what Daniel said. :laugh:
Jeremy Falcon
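One common coping strategy (a sketch of my own, not code from the thread) is to compare within a tolerance instead of exactly, which is roughly what living with "9 or 10 digits" of usable precision amounts to:

```javascript
// Exact equality fails on accumulated rounding error; an epsilon
// comparison tolerates noise in the last few bits.
const nearlyEqual = (a, b, eps = 1e-9) => Math.abs(a - b) < eps;

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```

The `1e-9` tolerance is an arbitrary choice here; pick one appropriate to the magnitudes in your application.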
JavaScript's innate number type is but an approximation - everybody will agree. But the serialization of those numbers in JSON makes it worse. For example, what is really meant by 0.67, 0.667, ... , 0.66666666666667? Does 0.67 mean exactly 67 cents, and is 0.66666666666667 to be understood as an approximation of 2/3? And does 0.667 also stand for 2/3? What does the number of decimals tell us about the underlying intentions? JSON should have a standardized notation for lossless serialization of floating point numbers. Even C has that: the %a printf format for double in hexadecimal notation. And talking about shortcomings in JSON: please also provide a notation for bignums - both integer and rational ones.
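For what it's worth, the ambiguity is easy to demonstrate in JavaScript itself (a quick sketch; the literals are just the ones from the post):

```javascript
// None of these decimal literals is exactly 2/3 as a double,
// and JSON gives no way to say which one *means* 2/3.
const candidates = ["0.67", "0.667", "0.66666666666667"];
for (const s of candidates) {
  console.log(s, JSON.parse(s) === 2 / 3); // false for all three
}

// Round-tripping 2/3 itself produces yet another spelling:
console.log(JSON.stringify(2 / 3)); // "0.6666666666666666"
```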
-
Not sure if this counts as a programming question, since I'm not asking for code but rather preference. I'm in a project that requires complete accuracy on numbers. So, given the following... We all know the famous examples of stuff like this:
0.1 + 0.2 // 0.30000000000000004
Up until now, I've been content with rounding off any operations after the fact and calling it a day, as close enough was good enough. For applications, say that deal with currency, the age old trick is to just use integers based on a cent value. So, a `$1.23` would be stored as `123` in a variable. Sweet, but, consider this:
// $123.45 / $2.25
12345 / 225 // 54.86666666666667
If I move along powers of the base, I never run into issues. But for your typical run-of-the-mill calculations, even with integers, you still have to deal with fractional floating points in the arithmetic. So, I've been using integers _and_ rounding off any calculations to their nearest integer value. Maybe sometimes I'll `floor` or `ceil` depending on context, but that's been my current solution, which is a lot more accurate but not 100% accurate. But, good enough-ish. Soooo.... 1) You guys prefer using a library to handle stuff like this? IMO I don't use one for arithmetic because most libraries for this (at least in JavaScript) are clunky and slow and don't really do a better job anyway. 2) You think integers and rounding is also the way to go? Keeps crap simple and all that, despite needing to remember to always round after division calculations or calculations against fractional types. 3) Never do arithmetic? Tell the user to go home.
Jeremy Falcon
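The integer-cents scheme from the quoted post can be sketched like this (helper names are my own, not from the thread):

```javascript
// Store money as integer cents; round only after division.
const toCents = (dollars) => Math.round(dollars * 100);

// Dividing two cent amounts yields a plain ratio, so round it
// back to two decimals at the end, as the post describes.
const ratio = (centsA, centsB) => Math.round((centsA / centsB) * 100) / 100;

console.log(toCents(1.23));     // 123
console.log(12345 / 225);       // 54.86666666666667
console.log(ratio(12345, 225)); // 54.87
```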
When seeking consistency, I would wrap all of that in a utility class. This way I know it's consistent and works the same everywhere. If I need something a little different, then I either overload or add a default parameter as the occasion requires. I agree that storing the numerator and denominator would be the best way to prevent most headaches. In C#, I would use a Fraction struct for this (home grown if one doesn't exist already). Only collapse the fraction to a primitive type as necessary. This also has the benefit of letting you use money with a decimal value, so you don't have to do the extra math to get the cents back. Speaking of money, you only have to store 4 decimals with money to be accurate for accounting purposes. I deal with fiduciary escrow accounts for my job, and that's all we've ever used. Never had a problem being out of balance by a penny in 24 years. However, we don't do multiplication and division on the money. I don't think that would change much though as long as you kept the division and rounding to the end of the math problem.
Bond Keep all things as simple as possible, but no simpler. -said someone, somewhere
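A bare-bones JavaScript take on the Fraction idea (a hypothetical sketch; the C# struct described above would look similar):

```javascript
// Minimal rational number: keep numerator/denominator as integers
// and collapse to a primitive only when displaying.
class Fraction {
  constructor(num, den) {
    const g = Fraction.gcd(Math.abs(num), Math.abs(den));
    this.num = num / g;
    this.den = den / g;
  }
  static gcd(a, b) { return b === 0 ? a : Fraction.gcd(b, a % b); }
  mul(o) { return new Fraction(this.num * o.num, this.den * o.den); }
  div(o) { return new Fraction(this.num * o.den, this.den * o.num); }
  valueOf() { return this.num / this.den; }
}

// $123.45 / $2.25, held exactly until the final collapse:
const q = new Fraction(12345, 100).div(new Fraction(225, 100));
console.log(`${q.num}/${q.den}`);       // 823/15
console.log(Math.round(q * 100) / 100); // 54.87
```

The numerators and denominators are still doubles underneath, so this only stays exact while they remain safe integers.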
-
Not sure if this counts as a programming question, since I'm not asking for code but rather preference. I'm in a project that requires complete accuracy on numbers. So, given the following... We all know the famous examples of stuff like this:
0.1 + 0.2 // 0.30000000000000004
Up until now, I've been content with rounding off any operations after the fact and calling it a day, as close enough was good enough. For applications, say that deal with currency, the age old trick is to just use integers based on a cent value. So, a `$1.23` would be stored as `123` in a variable. Sweet, but, consider this:
// $123.45 / $2.25
12345 / 225 // 54.86666666666667
If I move along powers of the base, I never run into issues. But for your typical run-of-the-mill calculations, even with integers, you still have to deal with fractional floating points in the arithmetic. So, I've been using integers _and_ rounding off any calculations to their nearest integer value. Maybe sometimes I'll `floor` or `ceil` depending on context, but that's been my current solution, which is a lot more accurate but not 100% accurate. But, good enough-ish. Soooo.... 1) You guys prefer using a library to handle stuff like this? IMO I don't use one for arithmetic because most libraries for this (at least in JavaScript) are clunky and slow and don't really do a better job anyway. 2) You think integers and rounding is also the way to go? Keeps crap simple and all that, despite needing to remember to always round after division calculations or calculations against fractional types. 3) Never do arithmetic? Tell the user to go home.
Jeremy Falcon
One thing to consider: there is a worldwide standard that most operating systems follow, IEEE 754 64-bit. To me it wouldn't be unreasonable to follow that. Any situation that requires more would be highly specialized.
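In JavaScript every number already is an IEEE 754 64-bit double, and you can see exactly where its precision runs out (a quick sketch):

```javascript
// A 64-bit double has a 53-bit significand, so integers are exact
// only up to 2^53 - 1.
console.log(Number.MAX_SAFE_INTEGER);       // 9007199254740991
console.log(2 ** 53 === 2 ** 53 + 1);       // true: both round to 2^53
console.log(Number.isSafeInteger(2 ** 53)); // false
```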
-
Not sure if this counts as a programming question, since I'm not asking for code but rather preference. I'm in a project that requires complete accuracy on numbers. So, given the following... We all know the famous examples of stuff like this:
0.1 + 0.2 // 0.30000000000000004
Up until now, I've been content with rounding off any operations after the fact and calling it a day, as close enough was good enough. For applications, say that deal with currency, the age old trick is to just use integers based on a cent value. So, a `$1.23` would be stored as `123` in a variable. Sweet, but, consider this:
// $123.45 / $2.25
12345 / 225 // 54.86666666666667
If I move along powers of the base, I never run into issues. But for your typical run-of-the-mill calculations, even with integers, you still have to deal with fractional floating points in the arithmetic. So, I've been using integers _and_ rounding off any calculations to their nearest integer value. Maybe sometimes I'll `floor` or `ceil` depending on context, but that's been my current solution, which is a lot more accurate but not 100% accurate. But, good enough-ish. Soooo.... 1) You guys prefer using a library to handle stuff like this? IMO I don't use one for arithmetic because most libraries for this (at least in JavaScript) are clunky and slow and don't really do a better job anyway. 2) You think integers and rounding is also the way to go? Keeps crap simple and all that, despite needing to remember to always round after division calculations or calculations against fractional types. 3) Never do arithmetic? Tell the user to go home.
Jeremy Falcon
Financial apps as far as I know use four decimal digits during calculations to avoid problems with only two, then round at the end.
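In JavaScript terms, that could mean scaling by 10,000 instead of 100 during intermediate math (a hypothetical sketch; the tax rate is made up):

```javascript
// Keep four decimal places as integers during calculation;
// round back to cents only at the very end.
const SCALE = 10000;
const price = Math.round(19.99 * SCALE);        // 199900
const withTax = Math.round(price * 1.0825);     // still four decimals
const display = Math.round(withTax / 100) / 100; // collapse to cents
console.log(display); // 21.64
```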
-
Not sure if this counts as a programming question, since I'm not asking for code but rather preference. I'm in a project that requires complete accuracy on numbers. So, given the following... We all know the famous examples of stuff like this:
0.1 + 0.2 // 0.30000000000000004
Up until now, I've been content with rounding off any operations after the fact and calling it a day, as close enough was good enough. For applications, say that deal with currency, the age old trick is to just use integers based on a cent value. So, a `$1.23` would be stored as `123` in a variable. Sweet, but, consider this:
// $123.45 / $2.25
12345 / 225 // 54.86666666666667
If I move along powers of the base, I never run into issues. But for your typical run-of-the-mill calculations, even with integers, you still have to deal with fractional floating points in the arithmetic. So, I've been using integers _and_ rounding off any calculations to their nearest integer value. Maybe sometimes I'll `floor` or `ceil` depending on context, but that's been my current solution, which is a lot more accurate but not 100% accurate. But, good enough-ish. Soooo.... 1) You guys prefer using a library to handle stuff like this? IMO I don't use one for arithmetic because most libraries for this (at least in JavaScript) are clunky and slow and don't really do a better job anyway. 2) You think integers and rounding is also the way to go? Keeps crap simple and all that, despite needing to remember to always round after division calculations or calculations against fractional types. 3) Never do arithmetic? Tell the user to go home.
Jeremy Falcon
"project that requires complete accuracy on numbers" A couple of thoughts. Is the above requirement not impossible on a binary system? By the very definition, you are going to lose precision, be it float, double, double double.... how far do you want to go? For me, I work a lot in machine HMIs. Some users want metric, others want English. I've always had a requirement to allow the user to switch between units while maintaining what is displayed. For example, 1" is 25.4 mm. If I switch between metric and English, the value must be consistent. As for complete accuracy - this for me has always fallen into fixed-point arithmetic to avoid rounding errors. COBOL has been mentioned. I've done COBOL - a very long time ago, but as I recall, it did fixed-point arithmetic very well. Or I might be missing something... Please elaborate on what you mean by "complete accuracy"? This sounds like a requirement from someone who really does not understand their request - sort of like a rare steak, but the temp should be 175F....
Charlie Gilley “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759 Has never been more appropriate.
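The unit-switching requirement described above is usually handled by storing one canonical unit and converting only for display (a sketch; the helper names are my own):

```javascript
// Store the canonical value in millimeters; convert for display only,
// so switching units never accumulates drift in the stored value.
const MM_PER_INCH = 25.4;
const toInches = (mm) => mm / MM_PER_INCH;
const toMm = (inches) => inches * MM_PER_INCH;

const canonical = 25.4;            // mm, never modified by display changes
const shown = toInches(canonical); // 1
const back = toMm(shown);          // 25.4 again: round trip is consistent
console.log(shown, back);
```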
-
Not sure if this counts as a programming question, since I'm not asking for code but rather preference. I'm in a project that requires complete accuracy on numbers. So, given the following... We all know the famous examples of stuff like this:
0.1 + 0.2 // 0.30000000000000004
Up until now, I've been content with rounding off any operations after the fact and calling it a day, as close enough was good enough. For applications, say that deal with currency, the age old trick is to just use integers based on a cent value. So, a `$1.23` would be stored as `123` in a variable. Sweet, but, consider this:
// $123.45 / $2.25
12345 / 225 // 54.86666666666667
If I move along powers of the base, I never run into issues. But for your typical run-of-the-mill calculations, even with integers, you still have to deal with fractional floating points in the arithmetic. So, I've been using integers _and_ rounding off any calculations to their nearest integer value. Maybe sometimes I'll `floor` or `ceil` depending on context, but that's been my current solution, which is a lot more accurate but not 100% accurate. But, good enough-ish. Soooo.... 1) You guys prefer using a library to handle stuff like this? IMO I don't use one for arithmetic because most libraries for this (at least in JavaScript) are clunky and slow and don't really do a better job anyway. 2) You think integers and rounding is also the way to go? Keeps crap simple and all that, despite needing to remember to always round after division calculations or calculations against fractional types. 3) Never do arithmetic? Tell the user to go home.
Jeremy Falcon
you could do this:
decimal result = 0.1m + 0.2m; // answer is "0.3"
or this:
double result = Math.Round(x + y, 5); // 5 indicates the number of decimal places (with x = 0.1, y = 0.2, the answer is "0.3")
".45 ACP - because shooting twice is just silly" - JSOP, 2010
-----
You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010
-----
When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
-
Not sure if this counts as a programming question, since I'm not asking for code but rather preference. I'm in a project that requires complete accuracy on numbers. So, given the following... We all know the famous examples of stuff like this:
0.1 + 0.2 // 0.30000000000000004
Up until now, I've been content with rounding off any operations after the fact and calling it a day, as close enough was good enough. For applications, say that deal with currency, the age old trick is to just use integers based on a cent value. So, a `$1.23` would be stored as `123` in a variable. Sweet, but, consider this:
// $123.45 / $2.25
12345 / 225 // 54.86666666666667
If I move along powers of the base, I never run into issues. But for your typical run-of-the-mill calculations, even with integers, you still have to deal with fractional floating points in the arithmetic. So, I've been using integers _and_ rounding off any calculations to their nearest integer value. Maybe sometimes I'll `floor` or `ceil` depending on context, but that's been my current solution, which is a lot more accurate but not 100% accurate. But, good enough-ish. Soooo.... 1) You guys prefer using a library to handle stuff like this? IMO I don't use one for arithmetic because most libraries for this (at least in JavaScript) are clunky and slow and don't really do a better job anyway. 2) You think integers and rounding is also the way to go? Keeps crap simple and all that, despite needing to remember to always round after division calculations or calculations against fractional types. 3) Never do arithmetic? Tell the user to go home.
Jeremy Falcon
It will largely depend on your application and requirements. For currency-type applications, consider using a BCD (Binary-Coded Decimal, used in COBOL) package. See below for references. For integer-type applications, there are a few "large" int packages. For scientific applications, there are a number of packages for large-number processing.
************ BCD references ********************
https://web.archive.org/web/20081102170717/http://webster.cs.ucr.edu/AoA/Windows/HTML/AdvancedArithmetica6.html#1000255
https://handwiki.org/wiki/Binary-coded\_decimal#EBCDIC\_zoned\_decimal\_conversion\_table
Notes:
1) BCD numbers can be packed (2 digits/byte) or unpacked (1 digit per byte)
2) The low-order byte (rightmost) of packed is nnnnssss, where nnnn is the low-order digit and ssss is the sign (0x0D for negative, 0x0F for positive)
3) The spec is (www,ddd), where www is the total bytes and ddd is the digits to the right of the decimal point. E.g.: 5,2 is a 5-digit number with 2 digits to the right of the decimal point--"123.45". This field would require 3 bytes packed, 6 bytes unpacked.
4) From IBM: For a field or array element of length N, if the PACKEVEN keyword is not specified, the number of digits is 2N - 1; if the PACKEVEN keyword is specified, the number of digits is 2(N-1).
5) Some documentation refers to BCD as DECIMAL, but others use DECIMAL to refer to floating point.
********************* For large int ********************
Microsoft SafeInt package: SafeInt Class | Microsoft Learn[^]
The decNumber package can handle decimal integer numbers of user-defined precision: GitHub - dnotq/decNumber: Decimal Floating Point decNumber C Library by IBM Fellow Mike Cowlishaw[^]
(I have not yet used or investigated the CRAN project.) CRAN - Package VeryLargeIntegers[^]
******************** For Floating Point ********************
Floating point gets very complex and confusing because there has never been a really good, consistent standard
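On the large-int side specifically, modern JavaScript doesn't even need a package: BigInt is built in (a quick sketch):

```javascript
// BigInt: arbitrary-precision integers, exact at any size.
const big = 12345n * 10n ** 20n;
console.log(big + 1n);  // 1234500000000000000000001n
console.log(2n ** 64n); // 18446744073709551616n

// Note: BigInt and Number don't mix implicitly; convert explicitly.
console.log(Number(10n ** 3n)); // 1000
```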
-
There is a remove method for the last single element. It's called [Array.prototype.pop](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global\_Objects/Array/pop). If you want to remove chunks of an array at a time, as you mentioned there's [Array.prototype.splice](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global\_Objects/Array/splice). Not sure why the no bueno, just because it's called `splice` rather than `remove`.
Jeremy Falcon
Those aren't remove functions. You forgot shift, which removes the first element. However, I very rarely need to remove the first or last element specifically. If I have a reference to an element I don't even know its index, I just want to be able to remove it. It's just JavaScript's way of saying "it's an array, but you can abuse it as a stack or queue." Splice is also something different entirely. You need an index and the number of items you want to remove starting at that index. I can never remember it:
array.splice(array.indexOf(something), 1);
It can also be used to add new elements at the designated index, so clearly not a remove. The slice method just returns a portion of the array between the specified indexes, and it sounds too much like splice to be able to remember clearly which is which. Someone new to JavaScript would never guess what it does or how to use it. I just want to say
array.remove(something);
and it should remove something. A remove function is easy to remember, easy to use and clearly conveys your intent. I don't care if it just does
array.splice(array.indexOf(something), 1);
internally, I just want to be rid of that awful syntax. Everyone who says "splice is JavaScript's remove function" is dead wrong.
Best, Sander
Azure DevOps Succinctly (free eBook) Azure Serverless Succinctly (free eBook) Migrating Apps to the Cloud with Azure arrgh.js - Bringing LINQ to JavaScript
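For what it's worth, the wished-for helper is a few lines on top of splice (a hypothetical sketch, not a built-in):

```javascript
// Remove the first occurrence of an item, in place.
function remove(array, item) {
  const i = array.indexOf(item);
  if (i !== -1) array.splice(i, 1);
  return array;
}

console.log(remove([1, 2, 3, 2], 2)); // [1, 3, 2]
console.log(remove([1, 2, 3], 9));    // [1, 2, 3] (no-op when absent)
```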
-
Those aren't remove functions. You forgot shift, which removes the first element. However, I very rarely need to remove the first or last element specifically. If I have a reference to an element I don't even know its index, I just want to be able to remove it. It's just JavaScript's way of saying "it's an array, but you can abuse it as a stack or queue." Splice is also something different entirely. You need an index and the number of items you want to remove starting at that index. I can never remember it:
array.splice(array.indexOf(something), 1);
It can also be used to add new elements at the designated index, so clearly not a remove. The slice method just returns a portion of the array between the specified indexes, and it sounds too much like splice to be able to remember clearly which is which. Someone new to JavaScript would never guess what it does or how to use it. I just want to say
array.remove(something);
and it should remove something. A remove function is easy to remember, easy to use and clearly conveys your intent. I don't care if it just does
array.splice(array.indexOf(something), 1);
internally, I just want to be rid of that awful syntax. Everyone who says "splice is JavaScript's remove function" is dead wrong.
Best, Sander
Azure DevOps Succinctly (free eBook) Azure Serverless Succinctly (free eBook) Migrating Apps to the Cloud with Azure arrgh.js - Bringing LINQ to JavaScript
Sander Rossel wrote:
You forgot shift, which removes the first element.
Didn't forget it man. It's an Internet post... not a book. I was giving a few examples. Do better than the kiddie crap on CP.
Sander Rossel wrote:
I just want to say array.remove(something); and it should remove something.
Well that's enough reason to hate an entire language. #sarcasm
Sander Rossel wrote:
Everyone who says "splice is JavaScripts remove function" is dead wrong.
Well, what I find with programmers is that they lack enough maturity to not be overly emotional about crap. And they love to hate to feel intelligent or superior (usually the opposite). I'm going to give you three examples that don't require much thought. You'll probably not change your mind at all, but that doesn't mean I'm wrong... bias and hate by non-experts is bias and hate, after all...
// given this
const data = [1, 2, 3, 4, 5];

// method 1, this is where you complain it takes two calls
delete data[2];
console.log(data.filter(x => x));

// method 2, this is where you complain it's not called "remove"
console.log(data.filter((x, i) => x !== 2));

// original method you didn't even bother to try, this mutates
// let me guess, never read the documentation on it?
data.splice(2, 1);
console.log(data);

Also, keep in mind, C# doesn't have a strong sense of immutability, like JavaScript does. Not that you can't mutate, as given in the examples... I suggest you try running that code before perpetuating the unfounded hate.
Jeremy Falcon
-
Those aren't remove functions. You forgot shift, which removes the first element. However, I very rarely need to remove the first or last element specifically. If I have a reference to an element I don't even know its index, I just want to be able to remove it. It's just JavaScript's way of saying "it's an array, but you can abuse it as a stack or queue." Splice is also something different entirely. You need an index and the number of items you want to remove starting at that index. I can never remember it:
array.splice(array.indexOf(something), 1);
It can also be used to add new elements at the designated index, so clearly not a remove. The slice method just returns a portion of the array between the specified indexes, and it sounds too much like splice to be able to remember clearly which is which. Someone new to JavaScript would never guess what it does or how to use it. I just want to say
array.remove(something);
and it should remove something. A remove function is easy to remember, easy to use and clearly conveys your intent. I don't care if it just does
array.splice(array.indexOf(something), 1);
internally, I just want to be rid of that awful syntax. Everyone who says "splice is JavaScript's remove function" is dead wrong.
Best, Sander
Azure DevOps Succinctly (free eBook) Azure Serverless Succinctly (free eBook) Migrating Apps to the Cloud with Azure arrgh.js - Bringing LINQ to JavaScript
Also, this is why I post less and less on CP. I can't post anything without someone coming along who knows little about the language and gives their hateful opinion on JavaScript. It's a waste of time, man, to repeat the same conversation over and over again for years.
Jeremy Falcon
-
When seeking consistency, I would wrap all of that in a utility class. This way I know it's consistent and works the same everywhere. If I need something a little different, then I either overload or add a default parameter as the occasion requires. I agree that storing the numerator and denominator would be the best way to prevent most headaches. In C#, I would use a Fraction struct for this (home grown if one doesn't exist already). Only collapse the fraction to a primitive type as necessary. This also has the benefit of letting you use money with a decimal value, so you don't have to do the extra math to get the cents back. Speaking of money, you only have to store 4 decimals with money to be accurate for accounting purposes. I deal with fiduciary escrow accounts for my job, and that's all we've ever used. Never had a problem being out of balance by a penny in 24 years. However, we don't do multiplication and division on the money. I don't think that would change much though as long as you kept the division and rounding to the end of the math problem.
Bond Keep all things as simple as possible, but no simpler. -said someone, somewhere
Matt Bond wrote:
When seeking consistency, I would wrap all of that in a utility class. This way I know it's consistent and works the same everywhere. If I need something a little different, then I either overload or add a default parameter as the occasion requires.
Ultimately, that's what I did. Except they were utility functions because I'm more functional than oop. Same concept though.
Matt Bond wrote:
I agree that storing the numerator and denominator would be the best way to prevent most headaches. In C#, I would use a Fraction struct for this (home grown if one doesn't exist already). Only collapse the fraction to a primitive type as necessary. This also has the benefit of letting you use money with a decimal value, so you don't have to do the extra math to get the cents back.
Yeah, it's an awesome idea. A pretty cool piece of code was posted earlier for rational numbers. My only concern with a language like JavaScript is the speed of that. In something like C/C++ I wouldn't think twice about using it.
Matt Bond wrote:
Speaking of money, you only have to store 4 decimals with money to be accurate for accounting purposes.
Whoops. I was storing 2. Thanks for this. :laugh:
Matt Bond wrote:
However, we don't do multiplication and division on the money.
How do you do arithmetic on it then? Like, to calculate interest? Thanks for the reply btw.
Jeremy Falcon
-
One thing to consider: there is a worldwide standard that most operating systems follow, IEEE 754 64-bit. To me it wouldn't be unreasonable to follow that. Any situation that requires more would be highly specialized.
I'll check it out. Thanks.
Jeremy Falcon
-
Financial apps as far as I know use four decimal digits during calculations to avoid problems with only two, then round at the end.
Yeah, someone else just posted this. I'm gonna do the same then. Thanks man.
Jeremy Falcon
-
"project that requires complete accuracy on numbers" A couple of thoughts. Is the above requirement not impossible on a binary system? By the very definition, you are going to lose precision, be it float, double, double double.... how far do you want to go? For me, I work a lot in machine HMIs. Some users want metric, others want English. I've always had a requirement to allow the user to switch between units while maintaining what is displayed. For example, 1" is 25.4 mm. If I switch between metric and English, the value must be consistent. As for complete accuracy - this for me has always fallen into fixed-point arithmetic to avoid rounding errors. COBOL has been mentioned. I've done COBOL - a very long time ago, but as I recall, it did fixed-point arithmetic very well. Or I might be missing something... Please elaborate on what you mean by "complete accuracy"? This sounds like a requirement from someone who really does not understand their request - sort of like a rare steak, but the temp should be 175F....
Charlie Gilley “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759 Has never been more appropriate.
You're overthinking it man. The numbers need to be correct, as verifiable by secondary or tertiary means.
Jeremy Falcon
-
you could do this:
decimal result = 0.1m + 0.2m; // answer is "0.3"
or this:
double result = Math.Round(x + y, 5); // 5 indicates the number of decimal places (with x = 0.1, y = 0.2, the answer is "0.3")
".45 ACP - because shooting twice is just silly" - JSOP, 2010
-----
You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010
-----
When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
Rounding with every calculation is what I was doing. I decided to move to just using integers and cents. Thanks for the reply though.
Jeremy Falcon
-
It will largely depend on your application and requirements. For currency-type applications, consider using a BCD (Binary-Coded Decimal, used in COBOL) package. See below for references. For integer-type applications, there are a few "large" int packages. For scientific applications, there are a number of packages for large-number processing.
************ BCD references ********************
https://web.archive.org/web/20081102170717/http://webster.cs.ucr.edu/AoA/Windows/HTML/AdvancedArithmetica6.html#1000255
https://handwiki.org/wiki/Binary-coded\_decimal#EBCDIC\_zoned\_decimal\_conversion\_table
Notes:
1) BCD numbers can be packed (2 digits/byte) or unpacked (1 digit per byte)
2) The low-order byte (rightmost) of packed is nnnnssss, where nnnn is the low-order digit and ssss is the sign (0x0D for negative, 0x0F for positive)
3) The spec is (www,ddd), where www is the total bytes and ddd is the digits to the right of the decimal point. E.g.: 5,2 is a 5-digit number with 2 digits to the right of the decimal point--"123.45". This field would require 3 bytes packed, 6 bytes unpacked.
4) From IBM: For a field or array element of length N, if the PACKEVEN keyword is not specified, the number of digits is 2N - 1; if the PACKEVEN keyword is specified, the number of digits is 2(N-1).
5) Some documentation refers to BCD as DECIMAL, but others use DECIMAL to refer to floating point.
********************* For large int ********************
Microsoft SafeInt package: SafeInt Class | Microsoft Learn[^]
The decNumber package can handle decimal integer numbers of user-defined precision: GitHub - dnotq/decNumber: Decimal Floating Point decNumber C Library by IBM Fellow Mike Cowlishaw[^]
(I have not yet used or investigated the CRAN project.) CRAN - Package VeryLargeIntegers[^]
******************** For Floating Point ********************
Floating point gets very complex and confusing because there has never been a really good, consistent standard
Thanks for this. I should probably say, for my use case in particular, I'm in a Node project. But, it's cool to know these libs exist. Granted, I could make a C/C++ module and use that within Node, but for this project at least I'm trying to keep it zippy, since JavaScript isn't as fast as C/C++.
Jeremy Falcon
-
.toFixed(x) should do the trick. And maybe even
+(0.1 + 0.2).toFixed(2)
, which displays 0.3 just fine X| A user never wants to see more than three digits anyway. But ultimately, I do all my calculations in C#, which has a decent decimal type. I once had a customer who wanted to calculate VAT for each sales order row and then got mad the total didn't add up due to rounding errors :sigh:
Best, Sander
Azure DevOps Succinctly (free eBook) Azure Serverless Succinctly (free eBook) Migrating Apps to the Cloud with Azure arrgh.js - Bringing LINQ to JavaScript
Had a similar problem before… It was a percentage breakout, like VAT, where they wanted it to always total 100%. The customer demand was: sort by original value (smallest to largest), calculate the percentage for the first n-1 line items, and force-set the last/biggest nth item to 100 - (sum of the n-1 line-item percentages). That way, when you exported the values to Excel, it all balanced. Just hope the customer doesn't add formulas back in and notice that the biggest row is off by +/-0.01 in the calculation.
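A sketch of that force-the-largest-row workaround in JavaScript (the function and its name are my own invention, not the poster's code):

```javascript
// Round the first n-1 percentages normally; the largest line absorbs
// whatever rounding error remains so the column totals 100.00.
function breakout(values) {
  const total = values.reduce((a, b) => a + b, 0);
  const sorted = [...values].sort((a, b) => a - b);
  const pcts = sorted
    .slice(0, -1)
    .map((v) => Math.round((v / total) * 10000) / 100);
  const used = pcts.reduce((a, b) => a + b, 0);
  pcts.push(Math.round((100 - used) * 100) / 100);
  return pcts;
}

console.log(breakout([1, 1, 1])); // [33.33, 33.33, 33.34]
```

The last entry is exactly the "off by +/-0.01" row the customer might notice if they recompute it in Excel.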
-
Not sure if this counts as a programming question, since I'm not asking for code but rather preference. I'm in a project that requires complete accuracy on numbers. So, given the following... We all know the famous examples of stuff like this:
0.1 + 0.2 // 0.30000000000000004
Up until now, I've been content with rounding off any operations after the fact and calling it a day, as close enough was good enough. For applications, say that deal with currency, the age old trick is to just use integers based on a cent value. So, a `$1.23` would be stored as `123` in a variable. Sweet, but, consider this:
// $123.45 / $2.25
12345 / 225 // 54.86666666666667
If I move along powers of the base, I never run into issues. But for your typical run-of-the-mill calculations, even with integers, you still have to deal with fractional floating points in the arithmetic. So, I've been using integers _and_ rounding off any calculations to their nearest integer value. Maybe sometimes I'll `floor` or `ceil` depending on context, but that's been my current solution, which is a lot more accurate but not 100% accurate. But, good enough-ish. Soooo.... 1) You guys prefer using a library to handle stuff like this? IMO I don't use one for arithmetic because most libraries for this (at least in JavaScript) are clunky and slow and don't really do a better job anyway. 2) You think integers and rounding is also the way to go? Keeps crap simple and all that, despite needing to remember to always round after division calculations or calculations against fractional types. 3) Never do arithmetic? Tell the user to go home.
Jeremy Falcon
After decades of writing software for industrial, medical, financial and LoB (Line of Business) applications, I found that the following guidelines work:

1) Financial and money: I always use the decimal type for currency, and the smallest-sized type that affords me the precision I need. So why would I use any other type in a currency/money app? Simple example: I'm writing a trading application, where the strike price will be stored in a decimal type, and the number of shares will be stored in a float type. Why not use a decimal type for the number of shares? Because there's no guarantee that it will be 3 places to the right of the decimal (that's typical, but not a hard-and-fast rule). I chose float because it's the smallest type that offers the precision I seek. By smallest I mean that a double is typically twice the size of a float. For those tempted to respond that floats are 64 bits and doubles are 128 bits: not necessarily. That's a very PC-centric view. Note: These guidelines typically, but not always, apply to LoB.

2) For medical and industrial, which usually require floating point precision to store values that may not match the formatting on the display, I use floats and doubles, again choosing the smallest type that affords the precision required by the application under development. What do I mean by the smallest type and precision? The size of the type refers to how large the floating point type has to be in order to maintain the level of precision (the number of places after the decimal point) without appreciable loss to rounding and implicit conversions (more on that below).

Caveats: There are several other considerations when choosing and writing floating point code.

A) Rounding loss: This refers to how precise a resulting value is after some operation is performed on it. This is not limited to mathematical operations (multiplication, division); it also applies to any library call used to generate a new value, e.g. sqrt(...).

B) Conversions: Be very, very careful about mixing types, i.e. decimal, float and double. When a smaller type is promoted to a larger type, it may introduce random "precision" that actually makes the new value deviate farther from the mean, i.e. the new value strays farther from representing the true value. For example:

float pi = 3.1415927f;
float radius = 5.2f;
double circumference = 2.0f * pi * radius; // float-precision result, silently widened to double
-
Thanks for the reply, Stacy. These are all great points. For this project, I'm in JavaScript/TypeScript and dealing with money, so there is no decimal type. But after this chat I decided to just add two extra decimal places of resolution. So, I'll store a currency amount as 1.1234 and only round it off to 2 decimal places during reporting.
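A sketch of that extra-resolution approach (the helper names and the choice of four internal places are my own):

```typescript
// Keep four decimal places of resolution internally; round down to two
// (cents) only at the reporting boundary, so intermediate arithmetic
// error stays below the displayed precision.
const INTERNAL_PLACES = 4;
const SCALE = 10 ** INTERNAL_PLACES;

// snap an intermediate result to four decimal places
function toInternal(amount: number): number {
  return Math.round(amount * SCALE) / SCALE;
}

// format for a report: round to cents only here
function toReport(amount: number): string {
  return amount.toFixed(2);
}
```

So `toReport(toInternal(123.45 / 2.25))` carries 54.8667 internally and reports "54.87".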
Stacy Dudovitz wrote:
Conversions: Be very very careful about mixing types i.e. decimal, float and double.
Tru dat. Not sure about C#, but in JavaScript/TypeScript I only have one level of precision from a data type. As a bonus though, there is a cool way to help avoid mixing faux types.
// the __TYPE__ member is never used at runtime, it's only there to mark the type as distinct
export type Distinct<T, DistinctName> = T & {
__TYPE__: DistinctName
};
// you cannot mix these two without explicit conversion
export type NumericTypeOne = Distinct<number, 'NumericTypeOne'>;
export type NumericTypeTwo = Distinct<number, 'NumericTypeTwo'>;Stacy Dudovitz wrote:
Implicit Operators in C#: How To Simplify Type Conversions
If I'm ever in C/C++ land again I'll check it out. Thanks.
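A runnable sketch of that branded-type pattern (I've renamed it `Branded` and invented the `Dollars`/`Euros` names and conversion helpers for illustration):

```typescript
// Branded ("distinct") numeric types: identical at runtime, but the
// compiler rejects mixing them without an explicit conversion.
type Branded<T, Name extends string> = T & { __TYPE__: Name };

type Dollars = Branded<number, 'Dollars'>;
type Euros = Branded<number, 'Euros'>;

// explicit constructors are the only sanctioned way in
const dollars = (n: number): Dollars => n as Dollars;
const euros = (n: number): Euros => n as Euros;

function addDollars(a: Dollars, b: Dollars): Dollars {
  return dollars(a + b);
}

const total = addDollars(dollars(1), dollars(2)); // fine
// addDollars(dollars(1), euros(2)); // compile-time error: Euros is not Dollars
```

The brand exists only in the type system; at runtime `total` is a plain number.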
Jeremy Falcon
-
Rounding with every calculation is what I was doing. I decided to move to just using integers and cents. Thanks for the reply though.
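The integers-and-cents approach with rounding after division might look like this (a sketch; the helper names are my own):

```typescript
// Hold all amounts as integer cents; round back to an integer
// immediately after any division so fractional error never accumulates.
type Cents = number;

function divideCents(amount: Cents, divisor: number): Cents {
  return Math.round(amount / divisor);
}

// $123.45 / 2.25 -> 5487 cents, i.e. $54.87
const result = divideCents(12345, 2.25);
```

Swap `Math.round` for `Math.floor` or `Math.ceil` where the business rule demands rounding in one direction.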
Jeremy Falcon
If you're using C# you can use decimal types and cast if/when you need to go back to floats/doubles.
".45 ACP - because shooting twice is just silly" - JSOP, 2010
-----
You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010
-----
When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013