Thu Jun 26 15:18:38 CEST 2008

## The Hough Transform

Scan an image and collect histogram information for the parameters
that define the lines in an image.

The parametric equation for a line is the set of points

{p0 + a p0' | a in |R}

where p0' = (-sin t0, cos t0) is the unit vector perpendicular to p0
(p0/r0 rotated by +90 degrees). With p0 = r0 (cos t0, sin t0) this is
the set

{ (r0 cos t0 - a sin t0, r0 sin t0 + a cos t0) | a in |R}

Eliminating a from these two scalar equations (multiply the first by
cos t0 and the second by sin t0, then add) gives the equation of the
points (x,y) in terms of the parameters (r0,t0)

LINE = { (x,y) | x cos t0 + y sin t0 = r0 }

Turning this around: given a point (x0,y0), all the lines that pass
through it are parameterized by the equation

BUNDLE = { (r,t) | x0 cos t + y0 sin t = r }
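
As a quick sanity check (an illustrative sketch, not part of the
original), every point on the line with parameters (r0,t0) has a
bundle curve that passes through (r0,t0):

```python
import math

# Check: any point on the line (r0, t0) satisfies its own bundle
# equation at t = t0, i.e. x cos t0 + y sin t0 = r0.
r0, t0 = 2.0, 0.6
for a in (-3.0, 0.0, 5.0):
    x = r0 * math.cos(t0) - a * math.sin(t0)
    y = r0 * math.sin(t0) + a * math.cos(t0)
    assert abs(x * math.cos(t0) + y * math.sin(t0) - r0) < 1e-12
```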

Given this, how to find lines? We let the points vote for bundles,
then find intersections (hot spots) in bundle parameter space.

For each point in an image, record its brightness and use that as a
vote for all the lines through that point. This can be done by
dividing the (r,t) bundle parameter space into a rectangular grid and
letting each point (x0,y0) vote for the grid cells that lie on the
bundle it defines.

If the number of angle bins is of the same order as the number of
input point bins along one dimension (N), this algorithm casts N
votes per point (x0,y0), giving cubic complexity O(N^3) for an N x N
image.

Simplifications:

* Since this is a voting algorithm, knowing where NOT to look can
reduce the search space.

* Thresholding: don't let dark points vote. If a point doesn't lie on
an edge, it doesn't need to update the array either. If possible,
reduce the input to a binary image.
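
The thresholding step could look like this (an illustrative sketch;
the function name, the threshold value, and the flat row-major image
layout are assumptions):

```python
def white_points(image, w, h, threshold=128):
    """Reduce a grayscale image (flat, row-major list of w*h pixels)
    to the list of bright points that are allowed to vote."""
    return [(x, y) for y in range(h) for x in range(w)
            if image[y * w + x] >= threshold]
```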

Subdivision:

As long as the number of angle bins is kept constant, the total
amount of work stays the same when the input is subdivided into
tiles. The quality of the estimates probably goes down, as there are
fewer discriminating votes per tile. In our case (radial lens
distortion) subdivision is probably necessary.

Algorithm:

For all points (x0,y0): if white, update the accumulator along the
curve r(t) = x0 cos t + y0 sin t.
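
A minimal sketch of this accumulation loop (the function name, the
binning of signed r, and representing the binary image as a list of
white-point coordinates are illustrative assumptions, not from the
original):

```python
import math

def hough_lines(points, n_theta, n_r, r_max):
    """Naive Hough accumulator: each white point (x, y) votes for the
    sinusoid r(t) = x cos t + y sin t in (r, t) parameter space."""
    accu = [[0] * n_theta for _ in range(n_r)]
    for x, y in points:
        for it in range(n_theta):
            t = math.pi * it / n_theta                  # t in [0, pi)
            r = x * math.cos(t) + y * math.sin(t)
            ir = int((r + r_max) / (2 * r_max) * n_r)   # bin signed r
            if 0 <= ir < n_r:
                accu[ir][it] += 1
    return accu
```

A straight line of input points then shows up as a single hot bin,
found by scanning the accumulator for its maximum.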

Implementation:

Loop order: working per scanline y keeps the term (y sin t) constant,
so it needs to be computed only once per scanline, after the
accumulated intensity for that scanline is known. Since the same
trick is not possible for x, this only halves the number of updates,
so it is probably not worth it: it is easier to transform (x,y) into
amplitude + phase, which also halves the update cost.
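
The amplitude + phase form rests on the identity
x cos t + y sin t = R cos(t - phi) with R = sqrt(x^2 + y^2) and
phi = atan2(y, x), so each point needs only one cosine evaluation
plus a phase shift per angle bin (an illustrative check, not from the
original):

```python
import math

# x cos t + y sin t == R cos(t - phi), with (R, phi) the polar form of (x, y).
x, y = 3.0, 4.0
R, phi = math.hypot(x, y), math.atan2(y, x)
for t in (0.0, 0.3, 1.0, 2.5):
    assert abs((x * math.cos(t) + y * math.sin(t)) - R * math.cos(t - phi)) < 1e-12
```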

Memory access: the input can be streamed, but the state update jumps
all over the place. Since the input, if binary, takes very little
memory, it might be more interesting to perform random access on the
input image and stream the accumulators. This requires, for each bin
in the 2D histogram, assembling the list of points that contribute to
it. The locus of this list of points is not a single line but a small
line bundle (otherwise the Bresenham algorithm might have been used).
This does however raise an interesting question: can an approximate
algorithm be used that scans lines like that? Can the image be
preprocessed so lines get a bit 'fatter'?
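
The reversed access pattern could be sketched like this (hypothetical
code: nearest-pixel sampling stands in for a proper Bresenham walk,
the image is assumed to be a flat row-major array, and the sampled
range only approximately covers the image diagonal):

```python
import math

def radon_bin(image, w, h, r, t, step=1.0):
    """Reversed access pattern: sum the pixels of the (flat, row-major)
    binary image along the line x cos t + y sin t = r.  This streams
    the accumulator bins and random-accesses the image instead of the
    other way around."""
    px, py = r * math.cos(t), r * math.sin(t)   # point of the line closest to origin
    dx, dy = -math.sin(t), math.cos(t)          # unit direction along the line
    total = 0
    a = -float(max(w, h))
    while a <= max(w, h):
        x, y = int(round(px + a * dx)), int(round(py + a * dy))
        if 0 <= x < w and 0 <= y < h:
            total += image[y * w + x]
        a += step
    return total
```

Streaming over all (r,t) bins and calling this per bin reads the
image once per bin, which only pays off if the binary image fits in
cache.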

Summarized:

* Staying with the naive implementation, the iteration goes
  column-wise over the input points, and for each point a sinusoid is
  added to the histogram bins.

* Going the experimental route: the algorithm might be reversed so
  that each (r,t) parameter can be directly computed using a
  Bresenham-based function evaluation kernel. This makes sense if the
  image data fits in cache, or can be made to use prefetching (lines
  are quite predictable access patterns). Probably a numerical
  gradient can be used to do this iteratively: this would allow for
  non-histogram based incremental approximation. (However, the space
  around the optimal points in the picture here:
  http://en.wikipedia.org/wiki/Hough_transform is not really smooth.
  In fact, its shape is quite odd. Is that an artifact of chopping
  things into bins?)

Note that the reversed formulation in the last comment is what is
called the Radon transform.