0.2 + 0.1 <> 0.3 in a For loop
-
I'm writing a program that runs a math algorithm, so it should be very accurate. However, I'm facing this problem: 0.2 + 0.1 <> 0.3! The code:

For i = 0 To 100 Step 0.1
    Stop
Next

At the fourth Stop, i should be 0.3, but it's 0.30000000000000004! Of course the difference is not big, but my program should be as accurate as possible. I have no idea what could cause this. :confused: Any explanation would be welcome.
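For reference, a minimal self-contained sketch of the behaviour described above; the Console.WriteLine call and the shortened loop bound are added here for illustration and are not part of the original code:

Imports System

Module RoundingRepro
    Sub Main()
        ' Accumulating a Double step of 0.1 picks up rounding error,
        ' because 0.1 has no exact binary representation.
        For i As Double = 0 To 1 Step 0.1
            Console.WriteLine(i.ToString("R"))  ' the fourth line prints 0.30000000000000004
        Next
    End Sub
End Module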
The more you know the more you know how little you know, you know?
-
The real $M@ wrote:
Any explanation would be welcome.
Short version: decimal numbers <> binary numbers.
Longer version: the Wikipedia page on floating point numbers.
This is a common type of problem that is well known and documented all over the web. In your case, you may be able to iterate from 0 to 1000 instead and divide by 10 inside the loop.
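A minimal sketch of that workaround, assuming the loop body only needs the current value (the module and variable names here are illustrative):

Imports System

Module IntegerCounterDemo
    Sub Main()
        ' Drive the loop with an exact integer counter and derive the
        ' fractional value from it, so rounding error does not accumulate
        ' from one iteration to the next.
        For n As Integer = 0 To 1000
            Dim i As Double = n / 10.0   ' 0.0, 0.1, 0.2, ... 100.0
            ' use i here; each value carries at most one rounding error
        Next
    End Sub
End Module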
-
Computers represent everything in binary, not decimal, and everything has to be crammed into 32, 64, or 128 bits (depending on the precision). In binary, the digits after the decimal point correspond to 1/2, 1/4, 1/8, 1/16, etc., instead of 1/10, 1/100, 1/1000. So representing 0.3 in binary is kind of like, say, representing 10/3 or 22/7 in decimal... you can write it out to a lot of digits, but you can't get it exactly right in that form. Normally this approximation stays hidden, but when you do arithmetic the errors compound until they become visible. There are thousands of articles on the web about this... Here's one: http://docs.sun.com/source/806-3568/ncg_goldberg.html
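A short sketch, assuming .NET, of two common ways to live with this: compare Doubles against a tolerance rather than for exact equality, or switch to Decimal when exact base-10 arithmetic matters more than speed:

Imports System

Module FloatingPointDemo
    Sub Main()
        Dim sum As Double = 0.1 + 0.2
        Console.WriteLine(sum.ToString("R"))                 ' 0.30000000000000004

        ' 1) Compare with a tolerance instead of exact equality.
        Dim tolerance As Double = 0.000000001
        Console.WriteLine(Math.Abs(sum - 0.3) < tolerance)   ' True

        ' 2) Use Decimal (exact base-10) when accuracy matters more than speed.
        Console.WriteLine(0.1D + 0.2D = 0.3D)                ' True
    End Sub
End Module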
Proud to have finally moved to the A-Ark. Which one are you in? Developer, Author (Guardians of Xen)
-
Thanks for both answers. I understand now ;)
The more you know the more you know how little you know, you know?