[openbeos] Re: The Scientific Method aka BresenhamNewsletter Article

  • From: "Martin Krastev" <blue@xxxxxxxxx>
  • To: openbeos@xxxxxxxxxxxxx
  • Date: Sat, 7 Jun 2003 23:07:25 +0300

Michael Noisternig <michael.noisternig@xxxxxxxxx> wrote:

> Martin Krastev wrote:
> > to keep the error within the precisional error margins of the media used
> > (i.e. not to have any additional cumulative error) all you need to do is
> > to use a non-incremental evaluator, and you end up with 1x multiplication
> > and 1x addition per iteration, no unit-errors whatsoever*, and last but
> > not least you end up with no conditions in the inner loop.
> 
> That's not true at all!
> 
> I'll give you an example:
> Draw a line from (0,0) to (10,1).
> With the slope function (and that includes your linear function) you'll 
> get a slope of 1/10 = 0.1 in base 10 which is 0.199999999.... in base 
> 16. Since you only have limited precision (and it doesn't matter how 
> much precision you have at _all_ as long as there is a _finite_ number 
> of digits) you'll always end up with a number that is smaller than 1/10

to overcome this indeed _serious_ problem, floating-point arithmetic units
(be they fpu's or software implementations) utilize rounding modes - in our
case that would be 'nearest'*. but in the end we need a lesser-precision
number (say, a whole number), so the fact that we lose precision _in_the_LSB_
of our binary fraction number doesn't bother us in the least, since we avoid
_accumulation_ errors.

* which, of course, just reverses the problem - the nearest
finitely-representable binary fraction to our original decimal fraction is in
this case greater (in magnitude) than the original number (so the problem is
actually the inverse of what you described).

> (e.g. by using IEEE 32 bit floats you'll get 0,099999904632568359375).

that would be 'chop' rounding mode. let's re-check that using 'nearest'
rounding mode:

0.10000000149011612
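
(a small check of the two rounding directions, assuming IEEE-754 single
precision: the default result for 1/10 is the to-nearest one, and its
next-lower neighbour is what truncation - 'chop' toward zero - would give:)

#include <cmath>
#include <cstdio>

int main()
{
    float nearest = 0.1f;                           // rounded to nearest
    float chopped = std::nextafterf(nearest, 0.0f); // next float below 1/10
    std::printf("nearest: %.25f\n", nearest);       // 0.1000000014901161...
    std::printf("chopped: %.25f\n", chopped);       // 0.0999999940395355...
    return 0;
}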

> Now if you evaluate the pixel with x-position 5 using the slope code 
> (and it is completely irrelevant whether you take the incremental 
> variant or your linear function variant) you'll get a y-position of 
> always smaller than 0.5 (for 32 bit floats you'll get 
> 0,499999523162841796875) which after rounding becomes 0!

what makes you think that at x-position = 5 y should be = 1?
the line is of length 11 pixels, x-pos 5 is the middle pixel.
use Bresenham to draw it from (0, 0) to (10,1) and then from
(10, 1) to (0, 0) - you may be surprised.
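
here's a minimal sketch to try it with (shallow-slope case only; the names
and the particular tie-handling are mine, not the newsletter's code):

#include <cstdio>
#include <cstdlib>

// assumes a shallow slope, i.e. abs(y1 - y0) <= abs(x1 - x0)
static void bresenham(int x0, int y0, int x1, int y1)
{
    int dx = std::abs(x1 - x0), dy = std::abs(y1 - y0);
    int sx = (x1 >= x0) ? 1 : -1;
    int sy = (y1 >= y0) ? 1 : -1;
    int err = 2 * dy - dx;          // classic integer decision variable
    for (int i = 0, x = x0, y = y0; i <= dx; ++i) {
        std::printf("(%d,%d) ", x, y);
        if (err > 0) { y += sy; err -= 2 * dx; }
        err += 2 * dy;
        x += sx;
    }
    std::printf("\n");
}

int main()
{
    bresenham(0, 0, 10, 1);    // forward
    bresenham(10, 1, 0, 0);    // and back - compare the middle pixel
    return 0;
}

with this particular convention the forward pass lights (5,0) and the reverse
pass lights (5,1) - exactly the asymmetry hinted at above.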

> I'll do it explicitely for your linear function y = ax + b:
> a = 1/10  <-- a < 1/10 due to finite number of digits
> y = a*5 + 0  <-- y < 0.5 --> round(y) = 0

yes, if the only place you use round() is for the float->int conversion
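
for reference, a quick check of that evaluation, assuming IEEE-754 single
precision with to-nearest rounding for the arithmetic and lroundf() for the
final float->int conversion (both choices are assumptions, and they decide
the outcome):

#include <cmath>
#include <cstdio>

int main()
{
    float a = 0.1f;               // nearest float to 1/10, slightly above it
    float y = a * 5.0f + 0.0f;    // y = a*x + b at x = 5
    long r = std::lroundf(y);     // rounds halfway cases away from zero

    std::printf("y = %.20f  ->  %ld\n", y, r);
    return 0;
}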

> Evaluating this point by the division variant or by Bresenham's 
> algorithm gives you the right answer 1!

hmm, would you elaborate why the division variant would produce 1?
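
the answer would hinge on what exactly "the division variant" computes - here
is one plausible reading in plain integer arithmetic (the names and the
rounding choice are my guesses, not your code):

#include <cstdio>

int main()
{
    const int dx = 10, dy = 1, x = 5;

    int y_trunc   = (x * dy) / dx;            // truncating division -> 0
    int y_rounded = (x * dy + dx / 2) / dx;   // division with rounding -> 1

    std::printf("truncating: %d  rounded: %d\n", y_trunc, y_rounded);
    return 0;
}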

-blu


