Re: Problems while switching C stacks (fibers)

  • From: Konstantin Olkhovskiy <lupus@xxxxxxxxxx>
  • To: luajit@xxxxxxxxxxxxx
  • Date: Sun, 8 Sep 2013 11:40:09 +0400

Hey guys,

Sorry for bumping this old thread... But I need a piece of advice here.

A little bit of background to make reading easier. We are trying to use
LuaJIT as a high-performance scripting language for our application server,
which uses fibers. ``Fibers'' here means that C stacks are switched back
and forth to mimic the behaviour of real threads without their overhead.
The design implies intermixing C fibers with Lua fibers. Fibers may
communicate over some sort of pipes or via direct access to shared
objects.

As I wrote in the last message, we've used jit.off() for all functions that
switch stacks. But this turned out to add a lot of overhead on top of the
context switch itself, so we decided to try creating a separate VM for each
fiber. This made context switches two orders of magnitude faster.
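
For reference, here is a minimal sketch of what one of those stack-switching
wrappers looks like in the single-VM approach (the coro_transfer binding is
illustrative; the real declarations in our code differ, and it assumes
libcoro is linked into the host binary):

    local ffi = require("ffi")
    local jit = require("jit")

    -- illustrative libcoro-style declaration; assumes the symbol is
    -- visible in the global namespace (otherwise use ffi.load)
    ffi.cdef[[
    typedef struct coro_context coro_context;
    void coro_transfer(coro_context *prev, coro_context *next);
    ]]
    local C = ffi.C

    -- one-line wrapper around the FFI call that switches C stacks
    local function fiber_transfer(cur, nxt)
      C.coro_transfer(cur, nxt)
    end

    -- keep the wrapper itself out of compiled traces
    jit.off(fiber_transfer)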

But we have no clear understanding of how to organize shared state between
Lua fibers. In the previous design it was just a set of global tables. Now
the only thing we can share is a pointer to a service-wide C context
structure. We need some sort of containers (e.g. hash table, linked list,
search tree) implemented in C and exposed to Lua via FFI to get any shared
state at all. There is [1], which implements all of those in a small and
efficient fashion, but it's all macro based, which would require generating
a wrapper for every type... Not really convenient, as that would mean
shipping a shared library with container wrappers for an otherwise
``pure Lua'' service implementation.
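
To make the shared-cdata idea concrete, here is a minimal sketch of the
direction I mean (the shared_ctx layout and the shared_ctx_get helper are
made up for illustration; the real context lives on the C side of the
server):

    local ffi = require("ffi")

    -- every per-fiber VM runs the same cdef, so the layout must match
    ffi.cdef[[
    typedef struct {
      int    counter;
      double last_seen;
    } shared_ctx;

    /* hypothetical C helper returning the process-wide context pointer */
    shared_ctx *shared_ctx_get(void);
    ]]

    -- each VM obtains the same pointer, so this is plain shared memory;
    -- we rely on the fiber scheduler for synchronization (a fiber is
    -- never preempted in the middle of a mutation)
    local ctx = ffi.C.shared_ctx_get()
    ctx.counter = ctx.counter + 1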

Is there an easier way of implementing this? Maybe I'm missing something?


[1] https://github.com/attractivechaos/klib

2013/5/7 Konstantin Olkhovskiy <lupus@xxxxxxxxxx>

> Problem resolved by adding ``jit.off(true)'' in functions that call FFI
> functions which switch stacks. Usually these are one-line functions that
> wrap the C function and return its value, so the overhead should be
> minimal, if I understand correctly.
> I've measured the performance roughly on my current echo server and do not
> see any difference.
>
> Shared cdata with separate lua states would feel unnatural for fibers (to
> our taste), though theoretically should run faster.
>
> Thanks guys! Appreciate your help!
>
>
> 2013/5/6 Coda Highland <chighland@xxxxxxxxx>
>
>> On Mon, May 6, 2013 at 8:38 AM, Konstantin Olkhovskiy <lupus@xxxxxxxxxx>
>> wrote:
>> > 2013/5/6 Mike Pall <mike-1305@xxxxxxxxxx>
>> >>
>> >> Ok, I see. If I understood this right, you're switching C stacks
>> >> inside an FFI call. And the calling Lua code is JIT-compiled. That
>> >> leaves the VM in an incomplete internal state. Which is fine, if
>> >> you don't touch any lua_State belonging to the same VM and you
>> >> switch back later on.
>> >
>> >
>> > Yes, you are right, yield/transfer methods that switch fiber stacks are
>> > FFI functions and calling Lua code may get JIT-compiled eventually.
>> > May I ask what is considered "the same VM"? Currently I use
>> > lua_newthread.
>> > Should I consider switching to luaL_newstate?
>>
>> Yes, switch. lua_newthread is basically another coroutine on the same
>> VM instance.
>>
>> > What would be the performance impact if I use non-optimized Lua
>> > wrappers? Unfortunately most of the fiber library methods may
>> > eventually yield execution to some other fiber, which means that most
>> > of the API calls will be non-optimized. Is that overhead comparable to
>> > the overhead of the libcoro context switch itself?
>>
>> If you can use FFI calls to manage this instead of the Lua/C API,
>> you'll be much better off, as you get full performance out of that.
>> Calling the Lua/C API kills performance in LuaJIT.
>>
>> >> Or use separate VM instances and communicate only via cdata.
>> >
>> > Can you elaborate a little bit on this? How can separate VMs
>> > communicate? Via lua_xmove?
>>
>> cdata is a handle to, well, C data. If you hand the cdata to two
>> separate VMs, they can treat it as shared memory (be careful with
>> synchronization of course).
>>
>> /s/ Adam
>>
>>
>
>
> --
> Regards,
> Konstantin
>



-- 
Regards,
Konstantin
