Double Trouble
-
I came across this bug today in a report output handler. The application processes property tax data and frequently outputs to delimited text or XLS files for processing by other applications. It was reported that in one particular output file a dollar amount was one penny less than expected.

In the code below, "dr" is a DataRow passed to the method and "COL_Check_Amount" is a string constant holding the column name in the row. The bug surfaced when the field in the DataRow held the value 4111.86:

((int)(double.Parse(dr[COL_Check_Amount].ToString()) * 100)).ToString().PadLeft(11, '0')

If you put this code in a test app and replace dr[COL_Check_Amount] with "4111.86", you find that the resulting int is 411185. The problem is a small loss of precision when the double is multiplied by 100: the product comes out as 411185.99999999994, and the cast to int truncates it. Oddly, using float does not produce the same problem. We resolved the issue by using decimal rather than double, as is our common practice with monetary values, but I'm not sure why double could not maintain precision in what appeared to be a simple calculation.
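The failure is easy to reproduce outside .NET. Here is a minimal sketch in Python, whose float is the same IEEE 754 binary64 type as C#'s double, and whose decimal module stands in for System.Decimal; the input value and the zero-padding width are taken from the report above:

```python
from decimal import Decimal

s = "4111.86"

# double (binary64): 4111.86 has no exact binary representation, so the
# stored value is slightly below 4111.86 and multiplying by 100 lands
# just under 411186.
d = float(s)
print(d * 100)       # 411185.99999999994
print(int(d * 100))  # 411185 -- the cast truncates toward zero

# decimal: base-10 arithmetic represents 4111.86 exactly
cents = int(Decimal(s) * 100)
print(cents)                      # 411186
print(str(cents).rjust(11, "0"))  # 00000411186 -- the PadLeft(11, '0') step
```

The fix is the same in either language: parse the string into a base-10 decimal type and only then scale to cents.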
-
Floating-point numbers are stored in base 2, not base 10, which means they have a different set of infinitely repeating fractions: 0.1, for example, repeats forever in base 2, just as 1/3 does in base 10. If float didn't show the problem, you probably got lucky: the single-precision value happened to round up, so the error truncated away. Decimal is a base-10 representation, and consequently much slower than the native floating-point types, but it behaves exactly as a naive user expects.