I was thinking about replying personally, but since this is a public article
I am posting to the associated mailing list instead.
It was somewhat nice and amusing to see "The Scientific Method" applied
to your Bresenham question, but the "test" performed is not just unfair for
the reasons already mentioned; it is also *wrong*.
Ignoring the fact that a line-drawing device that can operate without a
floating-point unit is far cheaper, the Bresenham algorithm has one BIG
plus over your slope version: it doesn't accumulate errors resulting
from the finite precision of floating-point numbers. The slope computed
by your version is quite likely not exactly representable, so on average
it already carries a rounding error of about a quarter of a unit in the
last place (ULP). By adding the slope in every iteration, that error
accumulates, and there are quite likely cases (even without huge line
endpoints) where the total error of the drawn line is bigger than in
Bresenham's algorithm (that is, the line is off from the correct one by
at least one pixel).
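To make the accumulation concrete, here is a small sketch (my own illustration, not the article's code; the function name and endpoints are made up) that adds a single-precision slope repeatedly and measures how far the incremental y drifts from the exactly recomputed y:

```cpp
#include <cmath>

// Returns the largest deviation (in pixels) between the incrementally
// updated y and the exactly recomputed y, for a line stepped in x.
// Single precision is used for the increment to make the drift visible.
// Assumes x1 < x2 (x-major line).
float max_accumulated_error(int x1, int y1, int x2, int y2) {
    float slope = (float)(y2 - y1) / (float)(x2 - x1);
    float y = (float)y1;
    float max_err = 0.0f;
    for (int x = x1; x <= x2; ++x) {
        // Reference value, recomputed from scratch in double precision.
        double exact = y1 + (double)(y2 - y1) * (x - x1) / (double)(x2 - x1);
        float err = (float)std::fabs((double)y - exact);
        if (err > max_err) max_err = err;
        y += slope;  // the rounding error of this addition accumulates
    }
    return max_err;
}
```

On a typical IEEE-754 system, calling max_accumulated_error(0, 0, 100000, 73333) returns a strictly positive error, while the recompute-from-scratch reference stays exact by construction.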
So the correct way would be to recompute the dependent coordinate in every
iteration, e.g. y = y1 + (y2-y1)*x/(x2-x1). That way you still have very
few instructions per iteration, but the division hurts badly (it's slow!).
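A sketch of that recompute-every-iteration approach (my own illustration; it collects pixels into a vector rather than plotting, and assumes an x-major line with 0 <= y2-y1 <= x2-x1):

```cpp
#include <vector>
#include <utility>

// Exact per-iteration recomputation: no error accumulates, at the cost
// of one multiply and one (slow) divide per pixel.
// Assumes x1 < x2 and 0 <= y2 - y1 <= x2 - x1 (first octant).
std::vector<std::pair<int,int>> line_exact(int x1, int y1, int x2, int y2) {
    std::vector<std::pair<int,int>> pixels;
    int dx = x2 - x1, dy = y2 - y1;
    for (int x = x1; x <= x2; ++x) {
        // Integer division rounded to nearest: add half the divisor
        // before dividing (valid here because all terms are non-negative).
        int y = y1 + (2 * dy * (x - x1) + dx) / (2 * dx);
        pixels.push_back({x, y});
    }
    return pixels;
}
```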
Now looking at the Bresenham algorithm again, it is a very fine and fast
one. (The only drawback I can think of is the comparison inside the
loop; but then again, you can get rid of that too with some tricks.)
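For reference, here is a minimal integer-only sketch of Bresenham's algorithm for the first octant (again my own illustration, collecting pixels instead of plotting). The integer error term replaces any floating-point slope, so nothing can drift:

```cpp
#include <vector>
#include <utility>

// Bresenham's line algorithm, first octant only (0 <= dy <= dx).
// All arithmetic is integer, so there is no accumulated rounding error.
std::vector<std::pair<int,int>> bresenham(int x1, int y1, int x2, int y2) {
    std::vector<std::pair<int,int>> pixels;
    int dx = x2 - x1, dy = y2 - y1;
    int err = 2 * dy - dx;        // decision variable
    int y = y1;
    for (int x = x1; x <= x2; ++x) {
        pixels.push_back({x, y});
        if (err > 0) {            // the inner comparison mentioned above
            ++y;
            err -= 2 * dx;
        }
        err += 2 * dy;
    }
    return pixels;
}
```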
Some final comments on the C++ Bresenham code: doing conditional jumps
within a loop is also a *bad* thing, for several reasons: at least once
the branch prediction unit will guess wrong, which costs a long latency,
which means slow; and the 8 conditions for the different octants take at
least log2(8) = 3 comparisons per iteration, which slows it down further.
So this, too, makes the test unfair.
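One common way to avoid per-octant branching inside the loop is to fold the eight cases into sign steps chosen once before the loop, as in this well-known all-octant variant (a sketch of the general technique, not the article's code):

```cpp
#include <cstdlib>
#include <vector>
#include <utility>

// All-octant Bresenham variant: the octant decision is hoisted out of
// the loop into the sign steps sx, sy, so the loop body is the same
// for every direction.
std::vector<std::pair<int,int>> line_all_octants(int x1, int y1,
                                                 int x2, int y2) {
    std::vector<std::pair<int,int>> pixels;
    int dx = std::abs(x2 - x1), sx = x1 < x2 ? 1 : -1;
    int dy = std::abs(y2 - y1), sy = y1 < y2 ? 1 : -1;
    int err = (dx > dy ? dx : -dy) / 2;
    int x = x1, y = y1;
    for (;;) {
        pixels.push_back({x, y});
        if (x == x2 && y == y2) break;
        int e = err;
        if (e > -dx) { err -= dy; x += sx; }  // step along x
        if (e <  dy) { err += dx; y += sy; }  // step along y
    }
    return pixels;
}
```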
Michael Noisternig
P.S.: BTW, I hope this is taken as constructive criticism. ;-)