Re: Is LuaJIT a good design choice for my use case

  • From: Mike Pall <mike-1411@xxxxxxxxxx>
  • To: luajit@xxxxxxxxxxxxx
  • Date: Thu, 13 Nov 2014 11:37:00 +0100

Andrew Groh wrote:
> So my idea for building this is a combination of nginx, C
> (and/or C++), and luaJIT.
> 
> Nginx would handle the http requests, parameter parsing, C would
> be glue code/bookkeeping/utilities to be really fast.
> 
> We would then GENERATE lua code from our web app that would sent
> to the bidder where it would be JIT’d and really fast (I hope).

On a tangent:

I don't know how complex your scripts are, but it sounds like
they aren't very complex. In that case your throughput is more
likely limited by all of the layers around them: the kernel, the
SSL/TLS layer, the HTTP engine and how it interfaces with your
scripts.

Mind you, nginx is a fine choice and has an excellent HTTP parser.
But, by design, it has to be completely generic and handle
everything in the universe that speaks some dialect of HTTP. That
has a cost.

Likewise, OpenSSL adds considerable layering costs. And I'm not
talking about the encryption itself, which nowadays easily runs
faster than the CPU can fill the caches.

Simply calling into the LuaJIT VM for every request may be as
costly as (or even more costly than) the script processing itself.

If you can make some assumptions about the requests the exchange
sends to the bidder, you'll be able to implement a much leaner
stack with much higher throughput. But, of course, it's a tradeoff
between engineering resources and hardware buying power. Benchmark
often & early and choose wisely.

[
In a past life, back then when HTTP/1.0 reigned, I wrote such a
high-speed HTTP stack. Handling a raw stream of network packets in
user mode (yikes). 99.99% of all requests fit into the first data
packet and were well-formed. E.g. check that it ends with two
CRLFs, has proper URLs, has the right set of headers, etc. Packets
that fit all of the assumptions went through the fast path.
Anything else was forwarded to a full-blown application stack.

This approach gave tremendous speedups. And it was the only way to
handle the giant amount of traffic for that specific application,
back then.

Nowadays, it doesn't matter that much anymore, since CPUs have
become so fast and cheap. Simply throwing hardware at the problem
may be more affordable than investing in engineering and maintenance
of a specialty solution.
]

--Mike
