[openbeos] Re: The Scientific Method aka Bresenham Newsletter Article

  • From: "Michael Phipps" <mphipps1@xxxxxxxxxxxxxxxx>
  • To: openbeos@xxxxxxxxxxxxx
  • Date: Mon, 09 Jun 2003 19:01:31 -0400

>I was thinking about replying personally, but since it is a public
>article I am posting this to the associated mailing list.

Cool!

>It was nice and amusing to see "The Scientific Method" applied to
>your Bresenham question, but the "test" done is not just unfair for
>the reasons already mentioned - it is also *wrong*.

!!!

>Ignoring the fact that a line drawing device that can operate without a 
>floating point unit is far cheaper, the Bresenham algorithm has a BIG 

Cheaper in terms of time? Or in terms of the hardware required to run it (a big 
issue for embedded work)?
I set out to find out whether #1 was the case. #2 is not an issue for us. ;-) In 
fact, IIRC, Be chose a floating point gfx system because the Hobbit and the PPC 
had (relatively) fast FPUs.
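
For the record, the integer-only loop he is talking about looks something like 
this (first octant only; setPixel() is a hypothetical stand-in for whatever the 
real output routine would be):

// Hypothetical stand-in for the real output routine.
void setPixel(int x, int y);

// Bresenham's inner loop for the first octant (0 <= dy <= dx):
// nothing but integer adds, subtracts, and compares - no FPU needed.
void lineOctant0(int x1, int y1, int x2, int y2)
{
    int dx = x2 - x1;
    int dy = y2 - y1;
    int error = 2 * dy - dx;    // decision variable, scaled by 2 to stay integral

    for (int x = x1, y = y1; x <= x2; x++) {
        setPixel(x, y);
        if (error > 0) {
            y++;                // step up and pay off the accumulated error
            error -= 2 * dx;
        }
        error += 2 * dy;
    }
}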

>plus compared to your slope version - it doesn't accumulate errors
>resulting from the finite precision of floating point numbers. The
>slope computed by your slope version is quite likely to have an
>infinite number of digits, so on average you already start with an
>error of about a quarter of the least significant bit. By adding the
>slope in every iteration that error accumulates, and there is quite
>likely to be a case (even without huge line endpoints) where the
>total error of the line drawn is bigger than in Bresenham's
>algorithm (that means the line is off from the correct one by at
>least 1 pixel).

I didn't consider this. Given the 80-bit nature of the FPU and the (fairly) 
small numbers that we are using, it doesn't seem like it could ever amount to 
more than a 1-pixel error. I did see differences between the two algorithms, but 
I really had (and have) no good way to quantify the error. I guess it would be 
visual? Or by running the points through some other algorithm. Since it is 
really more of a rounding error (AFAICT), that would be an interesting call.
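
Thinking out loud, one mechanical way to measure it: step the line with the 
accumulated slope, recompute y exactly from the endpoints at each x, and count 
where the two round to different pixels. A rough sketch (single precision on 
purpose, to make the drift easy to provoke; with doubles or the 80-bit 
registers you would need much longer lines, and the endpoints below are made 
up):

#include <cmath>
#include <cstdio>

// Walk the line once with the accumulated slope and once recomputing
// y exactly from the endpoints; count the pixels where the two round
// differently. Assumes x2 > x1 and a shallow (first-octant) line.
int countMismatches(int x1, int y1, int x2, int y2)
{
    float slope = float(y2 - y1) / float(x2 - x1);
    float y = float(y1);
    int mismatches = 0;

    for (int x = x1; x <= x2; x++) {
        // Non-accumulating value for the same x.
        float exact = y1 + float(y2 - y1) * float(x - x1) / float(x2 - x1);
        if (int(std::floor(y + 0.5f)) != int(std::floor(exact + 0.5f)))
            mismatches++;
        y += slope;    // the accumulating step under suspicion
    }
    return mismatches;
}

int main()
{
    std::printf("mismatched pixels: %d\n", countMismatches(0, 0, 100000, 33333));
    return 0;
}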

>So the correct way would be to re-compute the other coordinate in
>every iteration, e.g. y = y1 + (y2-y1)*(x-x1)/(x2-x1). That way you
>also have very few instructions per iteration, but the division
>hurts badly! (It's slow!)
>Now looking at the Bresenham algorithm again, it is a very fine and
>fast one. (The only drawback I can think of is the inner comparison
>in the loop; but then again, you can get rid of that too with some
>tricks.)

Yes. It looked OK to me.
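
For comparison, the recomputing version he describes would look something like 
this in integer arithmetic (first octant and x2 > x1 assumed, setPixel() 
hypothetical as before). It is correct at every pixel, but the division sits 
right in the loop:

// Hypothetical stand-in for the real output routine.
void setPixel(int x, int y);

// Per-iteration recomputation: every pixel is rounded from the exact
// value, so no error accumulates - but the division inside the loop
// is exactly the cost being complained about.
void lineRecompute(int x1, int y1, int x2, int y2)
{
    int dx = x2 - x1;
    int dy = y2 - y1;

    for (int x = x1; x <= x2; x++) {
        // y = y1 + dy * (x - x1) / dx, rounded to nearest
        int y = y1 + (2 * dy * (x - x1) + dx) / (2 * dx);
        setPixel(x, y);
    }
}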

>Some final comments on the C++ Bresenham code: Doing conditional
>jumps within a loop is also a *bad* thing (for several reasons: at
>least once the branch prediction unit will be wrong -> long latency
>-> slow; and those 8 conditions for the different octants take at
>least log2(8) = 3 comparisons, which also makes it slower), so this
>too makes the test unfair.

Is that how the compiled code worked out?
That is certainly not the way I would have written it in ASM (I would have used 
a jump table).
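
Something like this is what I mean - classify the octant once, then jump 
through a table of specialized loops so the pixel loop itself contains no 
octant tests. Only octant 0 is written out; the other seven stubs follow the 
same pattern with the roles of x and y and the step directions changed, and the 
index encoding here is just one possible choice. (A good compiler may well turn 
a dense switch into the same thing, but writing it this way guarantees it.)

#include <cstdlib>

void setPixel(int x, int y);    // hypothetical output routine

// One specialized loop per octant.
static void octant0(int x1, int y1, int x2, int y2)
{
    int dx = x2 - x1, dy = y2 - y1;
    int error = 2 * dy - dx;
    for (int x = x1, y = y1; x <= x2; x++) {
        setPixel(x, y);
        if (error > 0) { y++; error -= 2 * dx; }
        error += 2 * dy;
    }
}
static void octant1(int, int, int, int) { /* dy > dx >= 0 */ }
static void octant2(int, int, int, int) { /* ... */ }
static void octant3(int, int, int, int) { /* ... */ }
static void octant4(int, int, int, int) { /* ... */ }
static void octant5(int, int, int, int) { /* ... */ }
static void octant6(int, int, int, int) { /* ... */ }
static void octant7(int, int, int, int) { /* ... */ }

typedef void (*OctantFunc)(int, int, int, int);

static const OctantFunc kOctantTable[8] = {
    octant0, octant1, octant2, octant3,
    octant4, octant5, octant6, octant7
};

// Classify once, jump once: three sign/magnitude tests per line
// instead of per pixel.
void line(int x1, int y1, int x2, int y2)
{
    int dx = x2 - x1, dy = y2 - y1;
    int index = ((dy < 0) << 2) | ((dx < 0) << 1) | (std::abs(dy) > std::abs(dx));
    kOctantTable[index](x1, y1, x2, y2);
}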

>Michael Noisternig
>
>P.S.: BTW, I hope this is taken as constructive criticism. ;-)

Absolutely 100% constructive. Very interesting.

