On 20 July 2012 20:57, Luke Gorrie <lukego@xxxxxxxxx> wrote:
> Howdy!
>
> I'm interested in using LuaJIT in a realtime application. In particular I'd
> like to use LuaJIT code instead of iptables-style patterns in a network
> forwarding engine, and I'd like to be able to establish some upper-bounds on
> processing time for my own peace of mind. For example, to be able to be
> confident that rules of a certain basic complexity level would never take
> more than (say) 50us to execute.
>
> Is this a realistic notion?
>
> I'm guessing that GC is the main issue to be concerned about. I wonder if
> for example I could simply detect when a GC is needed and in that case
> simply allocate a fresh new VM instead and discard the old one. (Perhaps I'm
> dreaming up a solution to a problem that doesn't exist -- I'll stop here.)

LuaJIT has an incremental GC, so (soft) realtime is possible in principle.[*]
I'm not sure your bounds are achievable, though. AFAIK, incremental GC
pauses are usually in the millisecond range rather than microseconds -- I
could be wrong, though.

You can play around with the GC parameters:

  http://www.lua.org/manual/5.1/manual.html#lua_gc

They are supported by LuaJIT.

[*]: You probably know that hard realtime is not possible on most hardware
due to virtual memory, OS schedulers, and caches.

Of course, the best way to avoid GC pauses is not to allocate memory at
all. For example, you could preallocate memory from C (or via the ffi) and
use that as scratch memory. It sounds like that could work quite well for
your application.

I think the JIT compiler itself usually doesn't take longer than 50us. One
degenerate case where that might happen is when it is recording a long
trace and then fails to connect it to anything else. You could lower the
"maxrecord" parameter of the JIT (the default is 4000), though that may
cost you some performance if you end up excluding valid traces. Mike
probably has a better idea of the JIT overhead.

/ Thomas
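A minimal sketch of the suggestions above, assuming LuaJIT with its built-in ffi library. The names (`scratch`, `process_packet`) and the buffer size are illustrative, not from the original message; the GC knobs are the standard Lua 5.1 `collectgarbage` options, which correspond to the `lua_gc` parameters linked above.

```lua
local ffi = require("ffi")

-- Preallocate a fixed scratch buffer once, at startup, and reuse it for
-- every packet, so the per-packet hot path creates no new GC objects.
local SCRATCH_SIZE = 65536
local scratch = ffi.new("uint8_t[?]", SCRATCH_SIZE)

local function process_packet(ptr, len)
  -- Copy into the preallocated scratch area; no allocation per call.
  ffi.copy(scratch, ptr, math.min(len, SCRATCH_SIZE))
  -- ... run the filtering rules against `scratch` here ...
end

-- Tune the incremental collector (same knobs as lua_gc from the C API):
collectgarbage("setpause", 110)    -- restart collection soon after a cycle ends
collectgarbage("setstepmul", 400)  -- do more work per step, shortening each cycle

-- Lower the trace recorder's instruction limit mentioned above
-- (default is maxrecord=4000):
require("jit.opt").start("maxrecord=2000")
```

Whether these particular values help is workload-dependent; the point is only that all three knobs are settable from plain Lua, without touching the C API.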