[openbeos] Re: The Scientific Method aka Bresenham NewsletterArticle

  • From: Michael Noisternig <michael.noisternig@xxxxxxxxx>
  • To: openbeos@xxxxxxxxxxxxx
  • Date: Tue, 10 Jun 2003 02:20:12 +0200

Martin Krastev wrote:

"Michael Noisternig" <michael.noisternig@xxxxxxxxx> wrote:


Whether it's a _serious_ problem is probably subjective. Fact is that the slope code doesn't draw the line entirely correctly.


hmm, i'd say exactly the opposite - it's a fact that one of the fundamental problems of fractional arithmetic is LSB errors,

I totally agree with you. However, calling the line drawing problem serious seemed a bit drastic to me ;-).


whereas whether the slope code draws correctly is rather subjective :)

[snip]

I'll do it explicitly for your linear function y = ax + b:
a = 1/10  <-- stored a < 1/10 due to the finite number of digits
y = a*5 + 0  <-- y < 0.5 --> round(y) = 0


yes, if the only place you use round() is for the float->int conversion

As I already said, if you round 'a' to the LSB then it becomes > 1/10 and when you draw the line backwards 'y' evaluates to 0 again.

Actually you shouldn't start with 10, because the value of the LSB of 10 is greater than that of 0.1, so when you add the small number 0.1 to 10, 0.1 itself gets rounded and the error is compensated. Start with 0 instead (or find one of the many other combinations that work).
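By the way, here is the kind of brute-force check I've been using to hunt for such pixels. It's only a sketch - it assumes single-precision floats and the usual round-to-nearest mode, and whether (and where) the three variants actually disagree depends on exactly those settings. It walks a line from (0,0) to (1000,100) and prints every x where the division, linear-function and slope-addition variants don't land on the same pixel:

#include <stdio.h>
#include <math.h>

int main()
{
    // line from (0,0) to (dx,dy), shallow slope 1/10 as in the example above
    const int dx = 1000, dy = 100;
    float slope = (float)dy / (float)dx;   // rounded once, to the LSB
    float y_acc = 0.0f;                    // running value for the slope-addition variant

    for (int x = 0; x <= dx; x++) {
        int y_div = (int)floor((float)(x * dy) / (float)dx + 0.5f);  // division variant
        int y_mul = (int)floor(slope * (float)x + 0.5f);             // linear-function variant
        int y_add = (int)floor(y_acc + 0.5f);                        // slope-addition variant
        if (y_div != y_mul || y_div != y_add)
            printf("x = %d: div = %d, mul = %d, add = %d\n", x, y_div, y_mul, y_add);
        y_acc += slope;
    }
    return 0;
}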




see my comments at the end of this post.


Evaluating this point by the division variant or by Bresenham's algorithm gives you the right answer 1!

hmm, would you elaborate why the division variant would produce 1?

as above: (5*1)/10 + 0 = 0.5 (exactly!), round(0.5) = 1 or reverse: (-5*1)/10 + 1 = 0.5


ok, i formulated my question erroneously (as the division variant does indeed produce .5). what i actually meant to say was: why do you think that in general the division variant is more accurate than the slope variant? but if all you meant is that it handles the '.5-transient' cases consistently, then yes - it does. but it's prone to the finite representation problem of fractionals just as well (i.e. (2*1)/10 is as erroneously represented in fp as 2 * 1/10 is)

This is not true. If a, b, c are whole numbers then (a*b)/c always gets rounded right: the product a*b is exact (as long as it fits into the mantissa), so the single division rounds the true result only once. If b/c has infinitely many digits in binary representation, then a*(b/c) doesn't necessarily get rounded right, because the already-rounded b/c gets multiplied.


Note: I agree that the error in the linear-function case is usually far smaller than the one from the slope-addition variant. However, here is a great example that demonstrates that only the division code is right:

(500*1001)/1000 = 500500/1000 = 500.5, round -> 501
500*(1001/1000) = 500*1.000999... = 500.4999..., round -> 500
1001/1000 + 1001/1000 + ... (500 times) = 500.4999..., round -> 500
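If anybody wants to try it, here is a quick C++ check (assuming IEEE doubles and the default round-to-nearest mode - what the last two lines print may differ on other setups):

#include <stdio.h>
#include <math.h>

int main()
{
    double div_first = (500.0 * 1001.0) / 1000.0;  // 500*1001 is exact, only the division rounds
    double mul_after = 500.0 * (1001.0 / 1000.0);  // 1001/1000 is already rounded, the error gets multiplied by 500
    double sum = 0.0;
    int i;

    for (i = 0; i < 500; i++)
        sum += 1001.0 / 1000.0;                    // the rounding error accumulates over the additions

    printf("%.17g -> %d\n", div_first, (int)floor(div_first + 0.5));
    printf("%.17g -> %d\n", mul_after, (int)floor(mul_after + 0.5));
    printf("%.17g -> %d\n", sum,       (int)floor(sum + 0.5));
    return 0;
}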


but let's get back to the order invariance question. and the fact is the question is pretty much moot. why? because line plotters often, and in some particular cases _must_, draw lines in one direction only, regardless of the order the vertices were passed in. i.e. lines are treated not as vectors but as non-directional primitives, and as such line plotters are free to decide in what order to draw them. so, what finally matters w/ line plotters/edge evaluators is numerical precision, which i believe we both agree is well satisfied.
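Just to put that into code for the archives: a minimal sketch with made-up names, not the actual app_server plotter, and only for the |slope| <= 1 octants.

#include <stdio.h>

// a sketch of the non-directional plotter described above (made-up names,
// not the actual app_server code); only the |slope| <= 1 octants are
// handled, to keep it short
static void set_pixel(int x, int y)
{
    printf("(%d,%d)\n", x, y);
}

static void plot_line(int x0, int y0, int x1, int y1)
{
    // canonicalise: always step left to right, whatever order the
    // endpoints came in, so the same pixels come out either way
    if (x1 < x0) {
        int t;
        t = x0; x0 = x1; x1 = t;
        t = y0; y0 = y1; y1 = t;
    }

    // plain integer Bresenham from here on - no fractions, no LSB error
    int dx = x1 - x0;
    int dy = (y1 >= y0) ? (y1 - y0) : (y0 - y1);
    int ystep = (y1 >= y0) ? 1 : -1;
    int err = dx / 2;

    for (int x = x0, y = y0; x <= x1; x++) {
        set_pixel(x, y);
        err -= dy;
        if (err < 0) {
            y += ystep;
            err += dx;
        }
    }
}

int main()
{
    // both calls produce exactly the same pixels
    plot_line(0, 0, 10, 1);
    plot_line(10, 1, 0, 0);
    return 0;
}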

-blu

ps:

"Michael Noisternig" <michael.noisternig@xxxxxxxxx> wrote:

Actually it makes no difference whether you do it Mr. Phipps' way or your linear function way, because the _absolute_ error is the same in both cases. The multiplication in your linear function also multiplies the error in the LSB, which is then the same as the error accumulated by the subsequent additions.


the absolute error would be the same if we can guarantee that the slope is always a _rational_ number.

You are right. See above.


PS: Too tired now after 2 hours of finding a good example. Rest comes tomorrow.

