[gameprogrammer] Re: Fixed point vs floating point in modern computers

  • From: Alan Wolfe <alan.wolfe@xxxxxxxxx>
  • To: gameprogrammer@xxxxxxxxxxxxx
  • Date: Fri, 24 Feb 2012 11:29:27 -0800

"The trouble is that floating point implementations are a mess."

That's what I'm hitting, and it makes me sad, hehe

"1) be very careful about the stability of the numerical methods you use.
That is the best way to avoid problems cause by variations in precision."

Agreed... I've investigated this a little, but so far no luck finding the
problem.  I'm also trying to be as "smart" as possible about the math.
For instance, as the ray walks through my world grid, I don't check
whether a hit is between the begin and end times (the interval during
which the ray is inside the cell); I only check that it's less than the
end time.  If a hit lands close enough to the edge of a cell, and the
start and end times come from slightly different equations, the two
values can disagree and the intersection can fall between cells and get
lost :P
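
Roughly, the idea looks like this (a simplified sketch with made-up
names, not my actual traversal code):

    /* tHit and tExit are parametric times along the ray; tExit is when
       the ray leaves the current grid cell. */
    int HitAcceptedInCell(double tHit, double tExit)
    {
        /* Deliberately no "tHit >= tEnter" test: if a hit lands right on
           a cell boundary, the previous cell's exit time and this cell's
           enter time can come from slightly different equations, round
           differently, and let the intersection fall between cells. */
        return tHit < tExit;
    }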

"So, maybe you should look at how to do all this in the GPU, rather that in
the CPU."

That would be really awesome.  I would love to do that, but Google
Native Client at least doesn't expose a way to get at the GPU, which is
kind of sad.  I think you are right though; it would be really awesome
to get it running on the GPU with OpenCL or something, just to see how
much faster it is (hopefully WAY faster).
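
(For reference, the kind of fixed point math I was asking about in my
original mail quoted below would look something like this 16.16 sketch;
made-up names, nothing from my actual tracer:)

    #include <stdint.h>

    /* 16.16 fixed point: 16 integer bits, 16 fraction bits. */
    typedef int32_t fixed;
    #define FIXED_ONE (1 << 16)

    static fixed fixed_from_float(float f) { return (fixed)(f * FIXED_ONE); }
    static float fixed_to_float(fixed a)   { return (float)a / FIXED_ONE; }

    /* Widen to 64 bits so the 32x32 product can't overflow, then shift
       back down to 16.16. */
    static fixed fixed_mul(fixed a, fixed b)
    {
        return (fixed)(((int64_t)a * b) >> 16);
    }

    static fixed fixed_div(fixed a, fixed b)
    {
        return (fixed)(((int64_t)a << 16) / b);
    }
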
On Fri, Feb 24, 2012 at 11:08 AM, Bob Pendleton <bob@xxxxxxxxxxxxx> wrote:

> Yeah.... I've done a lot of fixed point code back in the bad old days
> when there was no question that integer operations were faster than
> floating point and if you knew what you were doing you could always do
> better with fixed point. That hasn't been true for a long time now.
> The trouble is that floating point implementations are a mess.
>
> For modern desktops/laptops you are talking about x86 and x86_64,
> except when you are talking about ARM, right? OK, on x86 you have two
> pretty much different versions of floating point: x86 floating point
> and x86_64 floating point. Sorta kinda what? Aren't they supposed to
> be IEEE floating point?
>
> The x86 has the 80 bit wide FPU with an internal stack. It gets used
> for 32 bit and 64 bit floating point arithmetic. Whenever possible,
> intermediate results are kept on the stack. That means that even 32
> bit floating point computations often get done in 80 bit precision,
> and so do 64 bit computations. On the x86_64 you have all the SSE*
> instructions and the xmm* registers, which let you do a lot more a lot
> faster. But when you use those instructions, 32 bit operations are
> done with 32 bit intermediate results and 64 bit operations are done
> with 64 bit intermediate results.
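>
> A toy example of the effect (an untested sketch, assuming gcc; compile
> it once for x87 and once for SSE and compare the output):
>
>   /* 1e16 + 1.0 is exactly representable in the x87's 80 bit format,
>      but not in a 64 bit double, so the small term survives in one
>      case and is rounded away in the other.
>      Try: gcc -O0 -mfpmath=387 demo.c
>      vs:  gcc -O0 -msse2 -mfpmath=sse demo.c */
>   #include <stdio.h>
>
>   int main(void)
>   {
>       volatile double big = 1e16;  /* volatile keeps the compiler from */
>       volatile double one = 1.0;   /* folding the expression away      */
>       double result = big + one - big;
>       printf("%g\n", result);  /* 1 with 80 bit intermediates, 0 without */
>       return 0;
>   }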
>
> The result... is that you get a lot more accumulated round-off error
> when you don't have 80 bit intermediate results. You can see it even
> in porting pure double precision code between x86 and x86_64.
>
> You can hunt for an option to force the compiler to use only FPU
> instructions in your code, which will slow your code on all machines
> with SSE instructions. But that will not change the way code in the
> libraries works; you will still not always see the same results across
> architectures.
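>
> (On gcc the relevant knobs are -mfpmath=387 / -mfpmath=sse and
> -ffloat-store. You can also force the rounding by hand in a hot spot,
> something like this sketch:)
>
>   /* Pushing each intermediate through a volatile double rounds the
>      stored value to 64 bits even when the FPU works at 80 bits
>      internally. Slower, and double rounding can still bite in rare
>      cases, but it tames most of the drift. */
>   double dot3(const double a[3], const double b[3])
>   {
>       volatile double t = a[0] * b[0];
>       t += a[1] * b[1];
>       t += a[2] * b[2];
>       return t;
>   }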
>
> My suggestion would be to 1) be very careful about the stability of
> the numerical methods you use. That is the best way to avoid problems
> caused by variations in precision. And 2) learn to live with the
> problem. Have at least two testing machines so you can make sure the
> errors are not horrible.
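>
> As one concrete example of choosing a stable method: compensated
> (Kahan) summation carries a correction term for the bits each add
> loses, so long sums depend much less on intermediate precision. A
> standard sketch:
>
>   /* Kahan (compensated) summation. */
>   double kahan_sum(const double *x, int n)
>   {
>       double sum = 0.0;
>       double c = 0.0;              /* running compensation          */
>       for (int i = 0; i < n; ++i)
>       {
>           double y = x[i] - c;     /* subtract what we lost before  */
>           double t = sum + y;
>           c = (t - sum) - y;       /* the low bits lost by this add */
>           sum = t;
>       }
>       return sum;
>   }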
>
> The other thing to think about is that the most recent AMD CPUs have
> cut down the number of FPUs and reduced the total floating point
> performance of the CPU. Why would they do that? Well, they tested and
> measured and saw that end users don't do much floating point. Most
> floating point is done in the GPU, and AMD has put GPUs on the same
> die as the CPU, so they have lots of floating point power sitting over
> in the GPU. So, maybe you should look at how to do all this in the
> GPU, rather than in the CPU.
>
> Bob Pendleton
>
>
> On Sun, Feb 19, 2012 at 3:34 PM, Alan Wolfe <alan.wolfe@xxxxxxxxx> wrote:
> > Hey Guys,
> >
> > I was curious, does anyone know any metrics about how fast fixed
> > point math operations are compared to floating point math operations
> > on modern desktops/laptops?
> >
> > There's an annoying issue I've been trying to solve in my Google
> > Native Client app where some systems have different levels of
> > floating point math precision (internal float registers being 80
> > bits sometimes and sometimes not, different machines spilling
> > calculations onto the stack for temporaries, etc.), and while I've
> > been talking to the Google NaCl devs on the mailing list, I'm
> > wondering if going to fixed point may be a decent solution.
> >
> > My worry though is that by going to fixed point I'd lose out on any
> > SSE operations, and that even well written fixed point math may not
> > be as fast as hardware floating point math.
> >
> > But of course, what would be great is that whatever I see on my
> > machine is what other people would see on theirs, which is really
> > attractive.
> >
> > This is for a raytracer, so it's very heavy on floating point
> > computation, and apparently pretty sensitive to floating point
> > precision issues.
> >
> > What do you guys think, anyone have any insight or input?
>
>
>
> --
> +-----------------------------------------------------------
> + Bob Pendleton: writer and programmer
> + email: Bob@xxxxxxxxxxxxx
> + web: www.TheGrumpyProgrammer.com
>
> ---------------------
> To unsubscribe go to http://gameprogrammer.com/mailinglist.html
>
>
>
