Extrapolating lines to fill in missing sections
-
I am trying to write some code to reconstruct a small, irregularly shaped "hole" in an image where the image data has been lost. My hope is that by analysing the surrounding image I can make intelligent guesses as to what the missing part might look like. Using standard edge recognition techniques I have managed to identify strong contrast edges in the surrounding image. The next step is to identify which of these edge lines cross the hole and to extrapolate them across.

The image at http://www.bracknellbridge.com/images/Thinned1.bmp shows where I'm at. The dark surround represents the part of the image for which data is intact, and the light grey centre is the area I am trying to reconstruct. The white line segments above and below the light grey area represent a high contrast edge. To the human eye it looks as though this line probably crosses the light grey area. What I need is a general algorithm to recognise this situation and extrapolate the line across the hole.

It has been suggested that the Hough transform might be the way to go, but I'm struggling with how to apply it in a practical sense. Is the idea to consider every possible pair of points, calculate the parametric equation of the line passing through them, and then see which values are most common? It sounds simple in theory, but I can see practical problems. If points A and B are on line 1, which is very close, but not identical, to line 2 joining C and D, how can I ensure they are interpreted as two votes for "nearly" the same line? How close is close enough?

If anyone has practical advice or ideas I should be very interested. A practical code example of the Hough transform would be especially useful.

Keith
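For reference, here is a minimal sketch of the classic rho-theta form of the Hough transform (NumPy assumed; the function and parameter names are illustrative, not from any particular library). Rather than pairing points, each point votes once per angle for the line rho = x*cos(theta) + y*sin(theta) passing through it, and the discretisation of the accumulator is what answers the "how close is close enough" question: two nearly identical lines land in the same (rho, theta) bin.

```python
import numpy as np

def hough_lines(points, img_diag, n_theta=180, rho_res=1.0):
    """Classic rho-theta Hough transform over a set of edge points.

    Each point (x, y) votes once per theta for the line
    rho = x*cos(theta) + y*sin(theta).  Collinear points pile their
    votes into the same (rho, theta) accumulator cell, and the cell
    size (rho_res by pi/n_theta) decides how close two lines must be
    to count as "nearly" the same line.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    n_rho = int(2 * img_diag / rho_res) + 1   # rho in [-img_diag, +img_diag]
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    for x, y in points:
        rhos = x * cos_t + y * sin_t          # one rho value per theta
        rho_idx = np.round((rhos + img_diag) / rho_res).astype(int)
        acc[rho_idx, np.arange(n_theta)] += 1  # one vote per theta column
    return acc, thetas

# 20 points on the line y = x; the strongest accumulator cell recovers
# a line with rho of about 0 and theta of about 3*pi/4.
points = [(i, i) for i in range(20)]
acc, thetas = hough_lines(points, img_diag=30)
rho_i, theta_i = np.unravel_index(acc.argmax(), acc.shape)
rho = rho_i * 1.0 - 30                        # undo the index shift
```

Widening rho_res or lowering n_theta merges more "nearly equal" lines into one bin, at the cost of localisation; the right values depend on how noisy the edge data is.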
-
Indirectly you do consider every possible pair of points, but algorithmically it's a bit simpler. Start with an array representing the unknown region, initialized to all zeroes. Then go through each known point in the original image and, if it's white, trace out line segments in your array that pass through this point (at different angles), incrementing the array elements that each line intersects. You'll get a lot of "noise", but the highest values in the array will lie along the missing segment, because all the known white points will combine their "votes" along this line.
-
Alan, thanks for your response. I'm still floundering a little. When you say to start with an array representing the unknown region, I presume you mean one array element per (unknown) pixel?

It's the next bit I'm having real trouble with: trace out line segments at different angles. How many different angles? It's always possible to construct a line which passes through any two given points, so for every white image pixel and every pixel in the unknown area it will always be possible to construct a line, at some angle, which intersects both. It seems to me, therefore, that if you try enough angles then, for any one white image pixel, every pixel in the unknown area will get some votes, with the pixels nearest to the white image pixel getting the most, because a wider range of angles will result in a line which passes through them due to rounding errors. I suspect I may have misunderstood something in your explanation.

Keith
-
Yes, one array element per unknown pixel. You'll have to experiment with different numbers of angles. Too many, and every point will cover the whole unknown region; too few, and you'll miss reinforcing the pixels over the true line. You'll also want to limit the length of the segments you trace from each white point, so that irrelevant distant pixels don't influence your analysis. You want to select values such that the pixels in the unknown region that are collinear with the white lines get the most reinforcement.

After you trace from the white pixels, you need to threshold the unknown region, i.e. consider every pixel with a sum above some value to be white, and all others black. This will take some experimentation to determine the best value.
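Putting the suggestions in this thread together, here is one possible sketch of the trace-and-threshold scheme in Python with NumPy (names and parameter values are illustrative and, as noted above, would need tuning for real images). Each white pixel traces a limited-length line at each of a fixed set of angles; unknown-region cells crossed by a line collect one vote per (pixel, angle) pair, and the final threshold keeps only the heavily reinforced cells.

```python
import numpy as np

def vote_and_threshold(white_pts, unknown_mask, n_angles=64,
                       max_len=40, threshold=3):
    """Cast votes into the unknown region and threshold the result.

    From each known white pixel, trace a line of limited length at each
    of n_angles orientations; every unknown-region cell the line crosses
    gets one vote from that (pixel, angle) pair.  Cells collinear with
    several white pixels collect the most votes and survive the
    threshold; the rest are discarded as noise.
    """
    h, w = unknown_mask.shape
    votes = np.zeros((h, w), dtype=np.int32)
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    for x0, y0 in white_pts:
        for a in angles:
            dx, dy = np.cos(a), np.sin(a)
            hit = set()                    # one vote per cell per ray
            for t in range(-max_len, max_len + 1):
                x = int(round(x0 + t * dx))
                y = int(round(y0 + t * dy))
                if 0 <= x < w and 0 <= y < h and unknown_mask[y, x]:
                    hit.add((y, x))
            for y, x in hit:
                votes[y, x] += 1
    return votes >= threshold

# A vertical white edge (x = 10) interrupted by an unknown band in
# rows 5..15: the surviving cells bridge the gap along x = 10.
mask = np.zeros((21, 21), dtype=bool)
mask[5:16, :] = True                       # the "hole"
white = [(10, y) for y in (0, 1, 2, 18, 19, 20)]
bridge = vote_and_threshold(white, mask, n_angles=4, max_len=25, threshold=6)
```

The three knobs are exactly the ones discussed above: n_angles and max_len control how far each white pixel's influence spreads, and threshold separates the reinforced line from the background noise.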