
"LuaJIT's interpreter (!) beats V8's JIT compiler in 6 out of 8 benchmarks and is not too far off in another one"

http://lambda-the-ultimate.org/node/3851?a=1#comment-57761




This was back in 2010, when V8 did not really have an optimizing compiler - its "compiler" was a baseline one that essentially glued together the individual interpretation patterns for each operation.
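For intuition, here is a toy sketch (nothing like V8's actual code, just an illustration of the idea) of the difference between a classic dispatch-loop interpreter and a "baseline" compiler that merely glues per-operation handlers together ahead of time, for a hypothetical little stack machine:

```python
def interpret(program, stack):
    # Classic interpreter: decode every opcode on every execution.
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]

def baseline_compile(program):
    # "Baseline" compilation: resolve the opcode dispatch once, up front,
    # producing a pre-bound sequence of handlers to run in order.
    steps = []
    for op, arg in program:
        if op == "push":
            steps.append(lambda s, v=arg: s.append(v))
        elif op == "add":
            def add(s):
                b, a = s.pop(), s.pop()
                s.append(a + b)
            steps.append(add)
        elif op == "mul":
            def mul(s):
                b, a = s.pop(), s.pop()
                s.append(a * b)
            steps.append(mul)

    def run(stack):
        for step in steps:  # no per-opcode decoding at run time
            step(stack)
        return stack[-1]
    return run

prog = [("push", 2), ("push", 3), ("add", None), ("push", 4), ("mul", None)]
# Both compute (2 + 3) * 4:
print(interpret(prog, []))         # 20
print(baseline_compile(prog)([]))  # 20
```

The "compiled" version only removes dispatch overhead; the per-operation work is identical, which is why such baseline compilers beat interpreters far less than an optimizing JIT would.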

Also, any cross-language comparison should be treated with great care - we are talking about different language semantics and different benchmark implementations.


Well, the point wasn't "an interpreter is always faster than a JIT", but "a good interpreter can get a large portion of the gains you'd get from a compiler".

If you prefer apples to apples, quoting Mike Pall again[0]

"the LJ1 JIT compiler is not much faster than the LJ2 interpreter, sometimes it's worse".

[0]: http://lambda-the-ultimate.org/node/3851#comment-57646


If improvements in A give you gains, while some other B yields even more gains, then you can always say that the A gains are "a portion of" the B gains.

A compiler that only beats interpretation by 2:1 is either a poor compiler, or something else is going on - for example, most of the work is actually being done by subroutines whose performance is not affected by the compilation (because, for instance, they are written in C and intrinsic to the language runtime).

There do not have to be explicit calls to such functions. For instance, compiled arithmetic that is heavy on large bignums will probably not be much faster than its interpreted version, because the cycles are actually spent processing the arrays of bignum digits (or "limbs"), which happens inside bignum library code. The code being compiled looks innocuous - it just has formulas like (a+b)*c - but these turn into bignum library calls. Since the bignum library is written in C and compiled, those calls run equally fast whether invoked from interpreted or compiled code. That's where most of the time is spent, so compiling the interpreted code makes little difference overall, even if 90% or more of the time spent in the interpreted code itself is knocked out by the compiler. (Amdahl's Law.)
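To make the Amdahl's Law point concrete, here is a minimal sketch with hypothetical numbers for the bignum scenario (the 10% figure is an assumption for illustration, not a measurement):

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the runtime is sped up
    by a factor s; the remaining (1 - p) is unaffected."""
    return 1.0 / ((1.0 - p) + p / s)

# Suppose only 10% of the time is in the interpreted "glue" code
# (the (a+b)*c dispatch), and 90% is inside the C bignum library.
# Even if the compiler makes the glue 10x faster:
print(amdahl_speedup(0.10, 10.0))  # ~1.10 - barely 10% overall

# Eliminating the glue almost entirely still caps out near 1.11x:
print(amdahl_speedup(0.10, 1e9))
```

So a compiler can knock out 90% of the interpretation overhead and still look unimpressive on such a benchmark.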


The LuaJIT 1 compiler was also pretty basic in the way it worked - it did not do any major speculative optimizations, AFAIK - so comparing the LJ2 interpreter against it is not of much interest.

Now check out the performance graphs of LuaJIT 2 in compiler+interpreter vs. interpreter-only mode[1].

Anything remotely computationally expensive is 2x faster with the compiler, and you can see up to 28x for integer number crunching.

I do believe the original point - "a good interpreter can get a large portion of the gains you'd get from a compiler" - can't be correct, simply because it is too broad and ill-defined. What gains do you expect from the compiler? How is "large portion" defined? All of this really depends on many things, from the language itself to concrete design decisions in the compiler and interpreter.

[1] http://luajit.org/performance_x86.html



