Re: Are trace compilers suited for static languages?

  • From: Robin Heggelund Hansen <skinneyz89@xxxxxxxxx>
  • To: luajit@xxxxxxxxxxxxx
  • Date: Fri, 29 Nov 2013 16:28:06 +0100

Once again, great answer, thank you :)


2013/11/29 Mike Pall <mike-1311@xxxxxxxxxx>

> Robin Heggelund Hansen wrote:
> > I remember reading a quote from Mike Pall, that you receive the best
> > performance by using a trace compiler for dynamic languages (or something
> > like that). Will a trace compiler give better results than a hotspot
> > compiler for static languages as well, like, say, Java?
>
> [There's no such thing as a 'hotspot compiler' (well, ok, it's the
> nickname for Oracle's JVM). Hotspot detection is a common approach
> for region selection and is used by most JIT compilers.
>
> What you're probably asking is whether the advantages of a trace
> compiler over traditional method-at-a-time compilers apply to
> static languages, too.]
>
> Dynamic languages impose special needs on compilers. A just-in-time
> trace compiler happens to be a sweet spot for that. Static languages
> of similar semantic complexity are strictly less demanding.
>
> Corollary #1: One doesn't need a JIT compiler or a trace compiler to
> get acceptable performance for static languages. Historically, this
> has influenced language preferences and the direction that compiler
> research took in the past decades.
>
> Corollary #2: Trace compilers _are_ well suited to static languages,
> since their needs are a subset of the needs of dynamic languages.
> It's just that few people have tried and even fewer have had the
> perseverance to bring a trace compiler for a static language to
> production level.
>
> My personal view on that: traditional method-at-a-time compilers for
> static languages have grown to complexity levels that dwarf any
> difficulties a trace compiler faces compared to a method-at-a-time
> compiler. The challenges of optimal region selection are largely
> outweighed by the simplicity of applying advanced optimizations to
> traces and the ease of applying specialization.
>
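> For instance, a trace records one concrete control-flow path along with
> the concrete types it observed, so a generic Lua '+' can be compiled
> down to a plain numeric add behind a type guard. A minimal source-level
> sketch (not LuaJIT's actual IR, just the idea):
>
>   local sum = 0
>   for i = 1, 1e6 do
>     -- on the trace, '+' is specialized to number + number; a guard
>     -- exits the trace if a non-number ever shows up here
>     sum = sum + i
>   end
>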
> It's not an accident that e.g. LuaJIT beats (say) the JVM or the CLR
> on allocation elimination (*). Traditional escape analysis is
> ineffective for dynamic languages, so one has to turn to allocation
> sinking. This is an advanced and very powerful algorithm, but tricky
> to implement. It was not that hard to add it on top of LuaJIT's trace
> compiler, but IMHO adding it to the code base of any state-of-the-art
> method-at-a-time compiler would be a tough job.
>
> (*) http://wiki.luajit.org/Allocation-Sinking-Optimization
>
> http://www.reddit.com/r/programming/comments/1rneb7/net_struct_performance_c_c_java_and_javascript/
>
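> To make that concrete, here's a toy loop in the spirit of the wiki page
> above; with allocation sinking the hot part runs without allocating a
> table per iteration:
>
>   local p = {1, 2}
>   for i = 1, 1e6 do
>     -- each iteration nominally allocates a fresh table, but the
>     -- allocation is sunk: it is only materialized if the trace exits
>     p = {p[1] + 3, p[2] + 5}
>   end
>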
> Ultimately, trace compilers offer much better code generation and a
> much simpler compiler architecture. But it takes a lot of research and
> a lot of work to be fully competitive with top-of-the-line compilers
> for static languages, simply because you have to catch up with the
> decades of research and all of the manpower that have been put into
> making them generate fast machine code.
>
> It's time for a change.
>
> --Mike
>
>
