
0.2+0.1<>0.3 in For

Visual Basic
Tags: algorithms, help, question, learning
The real M
#1

I'm writing a program that runs a math algorithm, so it should be very accurate. However, I'm facing this problem: 0.2+0.1<>0.3! The code:

    For i = 0 To 100 Step 0.1
        Stop
    Next

After the 4th Stop, i should be 0.3, but it's 0.30000000000000004! Of course the difference is not big, but my program should be as accurate as possible. I have no idea what can cause this. :confused: Any explanation would be welcome.

The more you know the more you know how little you know, you know?
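A minimal sketch that reproduces the comparison outside the loop (assuming VB.NET Doubles; ToString("R") is the round-trip format that shows the full value):

    Dim sum As Double = 0.2 + 0.1
    Console.WriteLine(sum = 0.3)         ' False: the Double sum is not exactly 0.3
    Console.WriteLine(sum.ToString("R")) ' 0.30000000000000004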

Gideon Engelberth
#2

The real M wrote:

Any explanation would be welcome.

Short version: decimal numbers <> binary numbers.

Longer version: the Wikipedia page on floating point numbers. This is a common type of problem that is well known and documented all over the web. In your case, you may be able to iterate from 0 to 1000 instead and divide by 10 inside the loop, as sketched below.
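A minimal sketch of that workaround (assuming VB.NET; the variable x is just an illustrative name): the loop counter stays an exact Integer, and each fractional value is computed in a single step instead of accumulating a thousand rounding errors.

    For i As Integer = 0 To 1000
        Dim x As Double = i / 10.0 ' one rounding per value, no running error
        ' ... work with x = 0.0, 0.1, 0.2, ..., 100.0 here ...
    Next

Note that i / 10.0 still can't represent 0.1 exactly in binary; it just keeps each value as close to the true decimal as a Double allows, instead of drifting further with every repeated addition of 0.1.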

Ian Shlasko
#3

Computers represent everything in binary, not decimal, and everything has to be crammed into 32, 64, or 128 bits (depending on the precision). In binary, the digits after the decimal point correspond to 1/2, 1/4, 1/8, 1/16, etc., instead of 1/10, 1/100, 1/1000. So representing 0.3 in binary is kind of like representing 10/3 or 22/7 in decimal... you can write it out to a lot of digits, but you can't get it exactly right in that form.

Normally this approximation gets hidden, but when you do arithmetic, the errors compound until they become visible. There are thousands of articles on the web about this... here's one: http://docs.sun.com/source/806-3568/ncg_goldberg.html

Proud to have finally moved to the A-Ark. Which one are you in? Developer, Author (Guardians of Xen)
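To watch the errors compound (a minimal sketch, assuming VB.NET), add 0.1 three times and compare against the literal 0.3:

    Dim sum As Double = 0.0
    For n As Integer = 1 To 3
        sum += 0.1 ' each addition rounds, and the tiny errors accumulate
    Next
    Console.WriteLine(sum.ToString("R"))               ' 0.30000000000000004
    Console.WriteLine(sum = 0.3)                       ' False
    Console.WriteLine(Math.Abs(sum - 0.3) < 0.0000001) ' True: comparing with a tolerance works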

The real M
#4

Thanks for both answers. I understand now ;)

The more you know the more you know how little you know, you know?
