On 2/19/2012 3:34 PM, Alan Wolfe wrote:
Hey Guys,

I was curious, does anyone know any metrics about how fast fixed point math operations are compared to floating point math operations on modern desktops/laptops?

There's an annoying issue I've been trying to solve in my Google Native Client app where some systems have different levels of floating point math precision (internal float registers being 80 bits sometimes, and sometimes not; different machines spilling calculations onto the stack for temporaries, etc.), and while I've been talking to the Google NaCl devs on the mailing list, I'm wondering if going to fixed point may be a decent solution.

My worry, though, is that by going to fixed point I'd lose out on any SSE operations, and also that even well-written fixed point math may not be as fast as hardware floating point math. But of course it would be great if whatever I see on my machine is what other people would see on theirs, which is really attractive.

This is for a raytracer, so it is very heavy on floating point computation, and apparently pretty sensitive to floating point precision issues.

What do you guys think? Anyone have any insight or input?
You wouldn't gain much precision-wise by using fixed point unless you've been using 32-bit floats (floats generally have only 23 bits of mantissa, while doubles have 52), and you would very likely lose performance. Compilers are good at interleaving floating point instructions with the rest of the program, as they usually execute in parallel (at least on modern x86 CPUs). I'd say, rather than jumping to fixed point math just yet, try to identify why the algorithm is sensitive to floating point imprecision.
Usually there are two strategies you can use to deal with floating point imprecision:
The first one is to reorganize the formulas. For example, you're making a ray tracer, so I assume one of the things you would want to do is solve a quadratic equation (used in ray-sphere intersection). We all know how to solve quadratic equations:
x = (-b ± sqrt(b^2 - 4*a*c)) / (2*a)

However, when the term 4*a*c is small compared to b^2, sqrt(b^2 - 4*a*c) is nearly equal to |b|, so for one of the two roots the numerator becomes the difference of two nearly equal numbers: with the fixed precision of floats, that root can vanish to 0. To avoid that, it's better to first obtain the root that doesn't cancel (pick the sign of the square root so that it matches the sign of -b), and then find the other root from the product of the roots: x2 = c / (a * x1).
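A minimal sketch of that stable form in C++ (the function name and interface are mine, not from the post):

```cpp
#include <cmath>
#include <utility>

// Numerically stable solver for a*x^2 + b*x + c = 0 (a != 0).
// Returns the number of real roots (0, 1, or 2); roots come out in x1 <= x2.
int solveQuadraticStable(double a, double b, double c, double& x1, double& x2)
{
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0)
        return 0; // complex roots only

    double sqrtDisc = std::sqrt(disc);
    // Pick the sign of the square root to match -b, so the sum in q
    // never cancels: q = -(b + sign(b)*sqrt(disc)) / 2.
    double q = (b >= 0.0) ? -0.5 * (b + sqrtDisc)
                          : -0.5 * (b - sqrtDisc);
    x1 = q / a;  // root from the stable form
    x2 = c / q;  // other root via the identity x1*x2 = c/a
    if (x1 > x2) std::swap(x1, x2);
    return (disc == 0.0) ? 1 : 2;
}
```

With a = 1, b = -1e8, c = 1 the naive formula computes the small root as the difference of two nearly equal 1e8-sized values, while this version recovers it as c/q with full relative precision.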
As a rule of thumb, you want to avoid terms cancelling each other.

The other strategy is to use a fast algorithm to solve most of the cases, and a slower but more precise one just for the special cases.
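A hypothetical illustration of that second strategy, applied to the same discriminant (the function name and the cancellation threshold below are made up for this sketch): stay in fast single precision for the common case, and only fall back to double precision when the two terms nearly cancel.

```cpp
#include <cmath>

// Fast-path / precise-fallback sketch: compute the quadratic
// discriminant in float, and redo it in double only when b*b and
// 4*a*c nearly cancel (threshold chosen arbitrarily for illustration).
float discriminantWithFallback(float a, float b, float c)
{
    float bb   = b * b;
    float ac4  = 4.0f * a * c;
    float disc = bb - ac4;              // fast single-precision path

    // If bb and ac4 agree to within ~4 decimal digits, most of the
    // float result's significant digits are gone: use the slow path.
    if (std::fabs(disc) < 1e-4f * std::fmax(bb, ac4)) {
        double dd = (double)b * b - 4.0 * (double)a * c;
        disc = (float)dd;
    }
    return disc;
}
```

The design point is that the expensive path runs only for the rare near-tangent rays where precision actually matters, so the average cost stays close to the fast path.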