Lane Tracking and Distancing

Tags: csharp, database, iot, algorithms, beta-testing
sebogawa (#1):

Hello everyone! I am working on a robot for the Intelligent Ground Vehicle Competition (IGVC 2010), which will be held in June at Lawrence University. Our robot uses a SICK LMS 220 laser range finder as the object sensor, two webcams to track the obstacle course's boundary lines, and a GPS system. My job is to get the webcams to detect the white lines and track each line's distance to the robot, in order to keep the robot within the lines (which are 10 feet apart and spray-painted on grass).

I have everything working in terms of "detecting" the white lines: I use color filtering to select the RGB value of the white lines, based on a reference image captured from the course, so the lines are displayed as white pixels on a black background. This works very well. An edge detection algorithm was also added and does the job. I then implemented a projected plane transformation matrix to calculate the distances of points in the webcam's video feed, calibrated using four points at known distances from the robot. This works surprisingly well too.

Now I have arrived at a bit of a dilemma: what part of the line should I use to determine the distance of the line to the robot? Should I get the distances of every white pixel on the plane? Should I pick a few points that trigger an event when scanning horizontal lines of the image? The closest pixel to the robot? And what about the (rare) noise pixels that differ from true line-edge pixels? Any feedback would help; this seems like a very fun topic which I know will tickle a few people's fancy. :) Enjoy!

Quick notes:
- I am using the AForge.NET library for image processing.
- I am using the C# Matrix Library (CSML.dll), taken from an article on this site, for the plane transformation.
- There are two cameras on the robot, one on each side.
- The cameras I am using are Microsoft LifeCam Cinema (2.0 MP widescreen HD) units.
- The image plane is 320x240 in 1bpp grayscale.

P.S. Once I finish, I will write an article about the robot for the website for everyone to enjoy!
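For concreteness, the four-point calibration described above amounts to the standard direct linear transform for a 3x3 homography with h22 fixed at 1. A minimal sketch in plain C# (no CSML dependency; this is an illustration of the technique, not the code the robot actually runs):

```csharp
using System;

// Four-point ground-plane calibration: solve for the 3x3 homography H
// mapping image pixels (u,v) to ground coordinates (x,y), with h22 = 1.
// Assumes no three calibration points are collinear.
static class GroundPlane
{
    // Solve A * h = b by Gaussian elimination with partial pivoting.
    static double[] Solve(double[,] A, double[] b)
    {
        int n = b.Length;
        for (int col = 0; col < n; col++)
        {
            int pivot = col;
            for (int r = col + 1; r < n; r++)
                if (Math.Abs(A[r, col]) > Math.Abs(A[pivot, col])) pivot = r;
            for (int c = 0; c < n; c++) { double t = A[col, c]; A[col, c] = A[pivot, c]; A[pivot, c] = t; }
            double tb = b[col]; b[col] = b[pivot]; b[pivot] = tb;

            for (int r = col + 1; r < n; r++)
            {
                double f = A[r, col] / A[col, col];
                for (int c = col; c < n; c++) A[r, c] -= f * A[col, c];
                b[r] -= f * b[col];
            }
        }
        var h = new double[n];
        for (int r = n - 1; r >= 0; r--)
        {
            double s = b[r];
            for (int c = r + 1; c < n; c++) s -= A[r, c] * h[c];
            h[r] = s / A[r, r];
        }
        return h;
    }

    // Build the homography from 4 image points (u,v) and their known
    // ground positions (x,y), e.g. in feet relative to the robot.
    public static double[] Calibrate(double[][] img, double[][] gnd)
    {
        var A = new double[8, 8];
        var b = new double[8];
        for (int i = 0; i < 4; i++)
        {
            double u = img[i][0], v = img[i][1], x = gnd[i][0], y = gnd[i][1];
            int r = 2 * i;
            A[r, 0] = u; A[r, 1] = v; A[r, 2] = 1; A[r, 6] = -u * x; A[r, 7] = -v * x; b[r] = x;
            A[r + 1, 3] = u; A[r + 1, 4] = v; A[r + 1, 5] = 1; A[r + 1, 6] = -u * y; A[r + 1, 7] = -v * y; b[r + 1] = y;
        }
        return Solve(A, b); // h00..h21, with h22 fixed at 1
    }

    // Map an image pixel to ground coordinates.
    public static (double x, double y) ToGround(double[] h, double u, double v)
    {
        double w = h[6] * u + h[7] * v + 1.0;
        return ((h[0] * u + h[1] * v + h[2]) / w,
                (h[3] * u + h[4] * v + h[5]) / w);
    }
}
```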

Keith Barrow (#2):

Hi, I answered your question (in a hand-waving manner) a while back in Q&A: http://www.codeproject.com/answers/41166/Vehicle-Lane-Tracking-Algorithm.aspx I have a couple of questions: how quickly will the robot go, and what is the track layout (is it tarmac, is it curved/oval)?

    Dalek Dave: There are many words that some find offensive, Homosexuality, Alcoholism, Religion, Visual Basic, Manchester United, Butter. Pete o'Hanlon: If it wasn't insulting tools, I'd say you were dumber than a bag of spanners.


Som Shekhar (#3), replying to sebogawa (#1):

One very important aspect that is missing here is the angle at which these webcams will measure the points. There needs to be a level sensor on the vehicle that measures the angle of the webcam from the horizon (which cannot be assumed constant, since the vehicle will be moving). Once you have that angle, you can take the distance between those white lines at a particular angle from the horizon and use it to normalize all distances. That distance will then always be correct.
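The pitch correction Som describes is plain trigonometry. A rough sketch, where every constant below (mounting height, field of view) is a made-up assumption, not a measurement from the robot:

```csharp
using System;

// Hypothetical pitch compensation for a downward-tilted camera.
static class PitchCompensation
{
    const double CameraHeightFeet = 2.5;  // assumed mounting height
    const double VerticalFovDeg   = 43.0; // assumed lens field of view
    const int    ImageHeight      = 240;

    // Ground distance of an image row, given the camera's current pitch
    // below the horizon (e.g. from an inclinometer/level sensor).
    public static double RowDistance(int row, double pitchBelowHorizonDeg)
    {
        // Angle of this row relative to the optical axis (+ = below centre).
        double rowAngleDeg = (row - ImageHeight / 2.0) * (VerticalFovDeg / ImageHeight);
        double totalDeg = pitchBelowHorizonDeg + rowAngleDeg;
        if (totalDeg <= 0) return double.PositiveInfinity; // row at/above horizon
        return CameraHeightFeet / Math.Tan(totalDeg * Math.PI / 180.0);
    }
}
```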


Luc Pattyn (#4), replying to sebogawa (#1):

Hi, I haven't done these things myself; however, I would not worry about distances too much. Instead, I would replace the two white lines by a single line; a simple interpolation should yield that. Then aim for a point on the imaginary line at a reasonable (but not very important) distance in front of you, and update your aiming point while moving. :)

        Luc Pattyn [Forum Guidelines] [Why QA sucks] [My Articles]


        Getting an article published on CodeProject now is hard and not sufficiently rewarded.
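Luc's suggestion boils down to a midpoint and a bearing. A minimal sketch, assuming the line positions have already been mapped to ground coordinates; leftLineAt/rightLineAt are hypothetical lookups, not anything from the thread:

```csharp
using System;

// Aim at the midpoint of the two boundary lines at a fixed lookahead.
static class CentreLineAim
{
    // Ground coordinates: x = lateral (right positive), y = forward.
    public static double SteeringBearing(
        Func<double, (double x, double y)> leftLineAt,
        Func<double, (double x, double y)> rightLineAt,
        double lookaheadFeet)
    {
        var l = leftLineAt(lookaheadFeet);
        var r = rightLineAt(lookaheadFeet);
        double midX = (l.x + r.x) / 2.0;
        double midY = (l.y + r.y) / 2.0;
        // Bearing of the aim point relative to straight ahead, in degrees.
        return Math.Atan2(midX, midY) * 180.0 / Math.PI;
    }
}
```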



sebogawa (#5), replying to Som Shekhar (#3):

The cameras are set at the edges of the front of the robot's chassis, at 45 degrees on the x-axis and 45 degrees on the negative z-axis. The matrix calculates the distances regardless of the exact angle, by comparing the four known points against the camera's field of view. My issue is not the distancing itself, but accurately knowing the distance and angle of the closest point of each line.


sebogawa (#6), replying to Keith Barrow (#2):

The only issue would be how to avoid an obstacle while keeping between the lines. The trapezoid will not give accurate readings of how small the space is in which the robot can turn (at its blind spot) without going over a white line or hitting an obstacle. My plan was to project the distances of each white point in the camera's view onto the field of view of the lidar (which otherwise can NOT detect painted white lines) and use that to calculate the possible approach. I just think distancing and angling every white point in the image would be very time-consuming, processing-wise.
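That projection is essentially a Cartesian-to-polar conversion into the lidar's scan frame. A sketch under assumed parameters: the 1-degree bins over 180 degrees are an assumption based on typical SICK LMS scans, not the robot's actual configuration:

```csharp
using System;
using System.Collections.Generic;

// Fold camera-detected line points into the lidar's polar scan,
// so the planner sees painted lines as ranged obstacles.
static class LineToLidar
{
    const int BearingBins = 181; // assumed: 1-degree bins covering -90..+90

    public static double[] FuseLinePoints(
        double[] lidarRanges,                      // one range per bearing bin
        IEnumerable<(double x, double y)> linePts) // ground coords, robot frame
    {
        var fused = (double[])lidarRanges.Clone();
        foreach (var p in linePts)
        {
            double range = Math.Sqrt(p.x * p.x + p.y * p.y);
            double bearingDeg = Math.Atan2(p.x, p.y) * 180.0 / Math.PI; // 0 = ahead
            int bin = (int)Math.Round(bearingDeg) + 90;                 // -90..+90 -> 0..180
            if (bin < 0 || bin >= BearingBins) continue;
            fused[bin] = Math.Min(fused[bin], range); // keep nearest obstruction
        }
        return fused;
    }
}
```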



sebogawa (#7), replying to Luc Pattyn (#4):

The interpolation method would be fine if there were no obstacles, but obstacles can take up 80% of the track width. I would need to allow the robot to deviate from the imaginary interpolated line almost to the point of crossing the real lines, and the cameras cannot see the white lines up close, where an obstacle may be two feet away from a white line and squeezing through is the only way of progressing through the course. I saw a few robots fail on exactly this point last year: the cameras interpolated the lines, but when a robot got to a close pinch between obstacle and line, it simply ran through the white line and got back on course. The rules state you can not pass the lines... ever...


Luc Pattyn (#8), replying to sebogawa (#7):

                I see, thanks for that. I'll be looking forward to your article. Is there a web site on the whole event? :)





sebogawa (#9), replying to Luc Pattyn (#8):

Yes, www.IGVC.org. We have 72 days until the competition! Wish us luck! :-D


Luc Pattyn (#10), replying to sebogawa (#9):

Thanks. Of course I wish you the best of luck. Have you asked CodeProject for sponsorship and/or publicity? I'm not sure they do that, but they just might. :)




Keith Barrow (#11), replying to sebogawa (#6):

I'd assumed you'd need something more like this: http://cmm.ensmp.fr/~beucher/prom_sta.html As you can see, the lanes on the road form "trapezoids" which can be detected. That doesn't help you, though, as you have a very different problem.

                      sebogawa wrote:

I just think distancing and angling every white point in the image would be very time-consuming, processing-wise.

Yes, but complicated tasks such as these are heavy-duty! One thing that strikes me immediately is that you should look at your camera/image resolution; it doesn't need to be very high, and lowering it will drastically cut down your processing costs.

Assuming the surface is flattish and the camera is at a constant height and angle, you can calculate the distance of every pixel in the image when the rig is calibrated (i.e. once), as each pixel will always represent the same distance while the geometry stays constant. Again at calibration, you then create a dictionary of pixel coordinates ordered by distance. When you start in earnest, it is just a matter of rattling through the image to find the nearest white pixel. Triaging the areas of the image into "don't worry - too far away", "need to start planning", and "urgent" could again reduce your costs.

Things that could complicate this are: a) obstacles being mistaken for lines (if they are white); b) the robot heading off in the wrong direction! Keep us informed, as this sort of thing is interesting!
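A sketch of that calibrate-once lookup; distanceOf stands in for the plane-transformation mapping discussed earlier in the thread, and the triage thresholds are made-up numbers:

```csharp
using System;
using System.Linq;

static class NearestWhitePixel
{
    static (int x, int y, double dist)[] byDistance;

    // Run once at calibration: distanceOf maps a pixel to its ground
    // distance, which never changes while the camera height and angle
    // stay fixed, so the sort cost is paid only once.
    public static void Calibrate(int width, int height, Func<int, int, double> distanceOf)
    {
        byDistance =
            (from y in Enumerable.Range(0, height)
             from x in Enumerable.Range(0, width)
             select (x: x, y: y, dist: distanceOf(x, y)))
            .OrderBy(p => p.dist)
            .ToArray();
    }

    // pixels[y * width + x] != 0 means "white" in the binarized image.
    // Walk pixels in order of increasing distance; the first white one
    // is the nearest line point, and its distance picks the triage zone.
    public static string Triage(byte[] pixels, int width)
    {
        foreach (var p in byDistance)
        {
            if (pixels[p.y * width + p.x] == 0) continue;
            if (p.dist < 3.0) return "urgent";                 // thresholds in feet,
            if (p.dist < 8.0) return "need to start planning"; // purely illustrative
            return "don't worry - too far away";
        }
        return "no line in view";
    }
}
```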
