Re: best practice

  • From: Laurent Deniau <Laurent.Deniau@xxxxxxx>
  • To: "<luajit@xxxxxxxxxxxxx>" <luajit@xxxxxxxxxxxxx>
  • Date: Sat, 27 Jun 2015 08:36:57 +0000

On Jun 26, 2015, at 9:18 AM, Javier Guerra Giraldez <javier@xxxxxxxxxxx> wrote:

On Fri, Jun 26, 2015 at 1:37 AM, Laurent Deniau <Laurent.Deniau@xxxxxxx> wrote:
Micro-benchmarks show that the performance is the same in all cases

always be suspicious of "same" benchmarks. often that means you're
benchmarking some other thing, and your target is either optimized
away or lost in the noise.

The mul functions are carefully crafted in an order that makes more and more
assumptions on the JIT, to see how the optimiser propagates explicit and
implicit constants to minimise the number of operations. The benchmark cancels
(or scans all of) these potential optimisations to get real-case measurements,
by flipping on each iteration from real to imaginary to complex and back to
real in a short cycle. I also wanted to measure the numerical stability of
such computation compared to the same code written in C99.

with LuaJIT, be sure to call your code many times (many thousands, at
least), with no non-compiled code in the loop. test with the profiler
to see that your loop stays "100% compiled"

1e9 loops of the code are enough; profiling is useless, disassembling can be

The C version is to compare speed and precision across platforms.

The result can also be compared to a^n, but that is less portable (it relies
on external libc implementations) and less accurate.
