Re: [ANN] LuaJIT Roadmap 2012/2013

  • From: Adam Strzelecki <ono@xxxxxxx>
  • To: luajit@xxxxxxxxxxxxx
  • Date: Wed, 13 Jun 2012 23:45:14 +0200

> Allocation sinking doesn't work that way. The used escape analysis
> checks for escaping pointers, not escaping values. Together with
> store-to-load forwarding, only the allocation and the related
> stores remain. If the stores can be sunk, so can the allocation,
> provided there are no escaping pointers. In SSA terms: there's no
> other 'use' for the allocation return value.

Thanks for clarifying that. I recall the discussion from Feb this year:
http://lua-users.org/lists/lua-l/2012-02/msg00207.html
where you said that allocation sinking, together with native vector (& SIMD 
matrix) types, may be a solution for avoiding temporary allocations in 
matrix/vector expressions.

But now it seems that isn't the case, or at least not the goal of the 
allocation sinking planned here.

Could you please give more details on the main goal of the upcoming 
allocation sinking? (A sample would help.)
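For instance, is the canonical case a pattern like this? (My own guess at a 
sketch, with hypothetical names, using a plain table as the allocation.)

```lua
-- Hypothetical sketch: the table allocated in norm2 never escapes.
-- With store-to-load forwarding, the loads of v.x/v.y are replaced by
-- the stored values, so the only remaining 'use' of the allocation's
-- return value disappears and the allocation itself can be sunk.
local function norm2(x, y)
  local v = { x = x, y = y }     -- allocation with no escaping pointer
  return v.x * v.x + v.y * v.y   -- loads forwarded from the stores above
end

local s = 0
for i = 1, 100 do
  s = s + norm2(i, i + 1)        -- ideally no allocation per iteration
end
```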

My own concern is using Lua together with OpenGL, where preparing the various 
modelView matrices is supposed to be done on the CPU, but this is too slow in 
pure LuaJIT+FFI: depending on the scene I get a 2-5x lower framerate than with 
similar code in C++, just because of GC.
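To show where the garbage comes from, here is a scaled-down 2x2 sketch (names 
hypothetical, plain tables standing in for my FFI structs of 16 doubles): every 
metamethod-based multiply allocates a fresh result object, once per frame and 
per matrix.

```lua
-- Minimal 2x2 matrix type with a metatable-based operator.
local mat2 = {}
mat2.__index = mat2

function mat2.new(a, b, c, d)
  return setmetatable({ a, b, c, d }, mat2)
end

function mat2.__mul(m, n)   -- every '*' allocates a new result matrix
  return mat2.new(m[1]*n[1] + m[2]*n[3], m[1]*n[2] + m[2]*n[4],
                  m[3]*n[1] + m[4]*n[3], m[3]*n[2] + m[4]*n[4])
end

-- Per-frame work: the product is a temporary that becomes garbage as
-- soon as it has been uploaded to the GPU.
local model = mat2.new(1, 2, 3, 4)
local view  = mat2.new(5, 6, 7, 8)
local mv = model * view
```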

> Sure, I might add that optimization in the future. But: a) it won't
> work on regular table objects and b) it's an extension and not a
> replacement for allocation sinking. Stack allocation alone is less
> efficient. You still need SRA to move all computations to registers
> and then (maybe) eliminate the allocation.

Well, I believe SRA would work like a charm for me. I've made an mmul function 
taking 32 arguments and returning 16 values (so this is a kind of manual SRA) 
which does a 4x4 matrix multiplication, and this solution was much more robust 
than my "glua" FFI solution using FFI C structs of 16 doubles and 
metatable-based operators.
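The idea, scaled down to 2x2 for brevity (a sketch, not my actual 32-in/16-out 
function): all matrix elements travel as scalar arguments and return values, so 
no aggregate is ever allocated and everything can stay in registers.

```lua
-- "Manual SRA": a 2x2 matrix multiply over bare scalars. Row-major
-- element order: a11, a12, a21, a22.
local function mmul2(a11, a12, a21, a22, b11, b12, b21, b22)
  return a11*b11 + a12*b21, a11*b12 + a12*b22,
         a21*b11 + a22*b21, a21*b12 + a22*b22
end

-- [[1,2],[3,4]] * [[5,6],[7,8]]
local c11, c12, c21, c22 = mmul2(1, 2, 3, 4, 5, 6, 7, 8)
```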

Regards,
-- 
Adam Strzelecki | nanoant.com | twitter.com/nanoant
