
Building the fastest Lua interpreter automatically

571 points | hr0m | 3 years ago | sillycross.github.io

109 comments

[+] haberman|3 years ago|reply
> With the tail-call approach, each bytecode now gets its own function, and the pathological case for the C/C++ compiler is gone. And as shown by the experience of the Google protobuf developers, the tail-call approach can indeed be used to build very good interpreters. But can it push to the limit of hand-written assembly interpreters? Unfortunately, the answer is still no, at least at its current state.

> The main blockade to the tail-call approach is the callee-saved registers. Since each bytecode function is still a function, it is required to abide to the calling convention, specifically, every callee-saved register must retain its old value at function exit.

This is correct: the waste of callee-saved registers is a shortcoming of the approach I published for protobuf parsing (linked from the first paragraph above). More recently I have been experimenting with a new calling convention that uses no callee-saved registers to work around this, but the results so far are inconclusive. The new calling convention would use all registers for arguments, but allocate registers in the opposite order of normal functions, to reduce the chance of overlap. I have been calling this calling convention "reverse_cc".

I need to spend some time reading this article in more detail, to more fully understand this new work. I would like to know if a new calling convention in Clang would have the same performance benefits, or if Deegen is able to perform optimizations that go beyond this. Inline caching seems like a higher-level technique that operates above the level of individual opcode dispatch, and therefore somewhat orthogonal.

[+] sillycross|3 years ago|reply
> More recently I have been experimenting with a new calling convention that uses no callee-saved registers to work around this, but the results so far are inconclusive. The new calling convention would use all registers for arguments, but allocate registers in the opposite order of normal functions, to reduce the chance of overlap. I have been calling this calling convention "reverse_cc".

As explained in the article, LLVM already has exactly the calling convention you are looking for: the GHC convention (cc 10). You can use it to "pin" registers by passing arguments in the right spots. If you pin your argument in a register that is callee-saved under the C calling convention, it won't get clobbered when you make a C call.

[+] chc4|3 years ago|reply
Have you read the Copy-and-patch compilation paper? It seems broadly similar to what you're describing, and to Deegen. They use GHC's calling convention for its snippets, which has only caller-saved registers and passes all arguments in registers, in order to optimize stitching the snippets together (via JIT templating in their case, but with tail calls in yours it would probably be the same design space).
[+] saagarjha|3 years ago|reply
I’ve been working on an early design of a high-performance dynamic binary translator that cannot JIT, and have reached a very similar conclusion as the author. We have an existing threaded interpreter but it’s a mess of hard-to-maintain assembly for two architectures, and we run into funny issues all the time where the two diverge. Plus, being handwritten by people who are not scheduling experts, there is probably some performance left on the table because of our poor choices and the design making it difficult to write complex-but-more-performant code. Nobody wants to write an efficient hash for TLB lookups in a software MMU using GAS macros.

The core point I’ve identified is that existing compilers are pretty good at converting high-level descriptions of operations into architecture-specific code (at least, better than we are, given the number of instructions we have to implement) but absolutely awful at doing register selection or dealing with the open control flow that is important for an interpreter. Writing everything in assembly lets you do these two things, but you miss out on all the nice processor knowledge that LLVM has encoded into TableGen.

Anyways, the current plan is that we’re going to generate LLVM IR for each case and run it through a custom calling convention to take that load off the compiler, similar to what the author did here. There’s a lot more that I’m handwaving over that’s still going to be work, like whether we can automate the process of translating the semantics for each instruction into code, how we plan to pin registers, and how we plan to perform further optimizations on top of what the compiler spits out, but I think this is going to be the new way that people write interpreters. Nobody needs another bespoke macro assembler for every interpreter :)

[+] haberman|3 years ago|reply
Great summary, this matches my experience. For straight-line code, modern C compilers can't be beat. But when it comes to register allocation, they constantly make decisions that are real head-scratchers.

One of the biggest problems is when cold paths compromise the efficiency of hot paths. You would hope that __builtin_expect() would help, but from what I can tell __builtin_expect() has no direct impact on register allocation. I wish the compiler would use this information to make sure that cold paths can never compromise the register allocation of the hot paths, but I constantly see register shuffles or spills on hot paths that are only for the benefit of cold paths.

Is there anywhere I can follow your work? I am very interested in keeping track of the state of the art.

[+] qikInNdOutReply|3 years ago|reply
I work on a game that mostly uses Lua as logic code, sitting on top of a C/C++ engine. One of the engine developers implemented LuaJIT years ago and found that for actual performance the interpreter/JIT was negligible; the most expensive thing was the actual switch between Lua code and C code, which with a large API is a constant ugly background performance loss.

The Lua code by itself did not run long enough to actually profit much from optimizations.

So, back then we discussed possible solutions, and one idea was basically to notice an upcoming C call in the bytecode ahead of execution and detect the stability of the arguments ahead of time. A background thread extracts the values, performs the call's argument processing, and pushes the return values onto the stack, finally setting a "valid" bit to unblock the C call (which by then is actually no longer a call). Both sides never suffer a complete cache eviction and live happily ever after. Unfortunately I have a game dev addiction, so nothing ever came of it.

But similar minds might have pushed similar ideas.. so asking the hive for this jive. Anyone ever seen something similar in the wild?

[+] kzrdude|3 years ago|reply
Super interesting.

I think using the name LuaJIT Remake steps on the toes of the existing LuaJIT and I would advise using a more original name.

Anything distinctive would be better. Lua DeeJIT comes to mind, since the generator is called Deegen.

[+] Diggsey|3 years ago|reply
Yeah... Especially as it's not actually a JIT compiler (yet).
[+] gatane|3 years ago|reply
>More importantly, it is the world’s fastest Lua interpreter to date, outperforming LuaJIT’s interpreter by 28% and the official Lua interpreter by 171% on average on a variety of benchmarks

Wait what the hell

[+] hinkley|3 years ago|reply
Faster than LuaJIT's interpreter.

People who focus on JIT often focus on the JIT and not the interpreter. Which is a shame because if you make the uncommon paths cheaper then you can tune your code for the hot paths a bit more aggressively. You get fewer situations where you are sacrificing Best Case for Average Case performance.

[+] mraleph|3 years ago|reply
It would be interesting to see a 1-1 comparison with the LuaJIT interpreter _without corresponding runtime changes_. That would give a meaningful baseline for how much you potentially lose/gain using this approach.

The current measurement presents a comparison of two wildly different implementation strategies: LuaJIT Remake uses ICs and hidden classes while LuaJIT proper does not. It's a well-known fact that ICs + hidden classes can lead to major performance improvements. That insight originated in the Smalltalk world and was brought to the dynamic language mainstream by V8† - but unfortunately it muddles the comparison here.

† V8 was open sourced on September 2, 2008... Coincidentally (though probably not :)) JSC landed their first IC prototype on September 1, 2008.

[+] sillycross|3 years ago|reply
I agree with your point, but I want to point out that Deegen also provided APIs to hide all the details of inline caching. The bytecode only specifies the body lambda and the effect lambdas, and Deegen lowers it to an efficient implementation automatically, including exotic optimizations that fuses the cached IC effect into the opcode so one indirect branch can be avoided.

If the LuaJIT interpreter were to employ IC, it would have to undergo a major rewrite (due to its assembly nature) to reach code as efficient as LJR's (which we generate automatically). This is one advantage of our meta-compiler approach.

Finally, although no experiment is made, my subjective opinion is already in the article:

> LuaJIT’s hand-written assembly interpreter is highly optimized already. Our interpreter generated by Deegen is also highly optimized, and in many cases, slightly better-optimized than LuaJIT. However, the gain from those low-level optimizations are simply not enough to beat LuaJIT by a significant margin, especially on a modern CPU with very good instruction-level parallelism, where having a few more instructions, a few longer instructions, or even a few more L1-hitting loads have negligible impact on performance. The support of inline caching is one of the most important high-level optimizations we employed that contributes to our performance advantage over LuaJIT.

That is, if you compare the assembly between the LJR and LuaJIT interpreters, I believe LJR's assembly is slightly better (though I would anticipate only a marginal performance difference). But that's also because we employed a different boxing scheme (again...). If you force LJR to use LuaJIT's boxing scheme, I guess the assembly code would be similar, since LuaJIT's hand-written assembly is already optimal or at least very close to optimal.

[+] abecedarius|3 years ago|reply
There's some prior art on DSLs for efficient interpreters, like Anton Ertl's vmgen. https://www.complang.tuwien.ac.at/anton/vmgen/
[+] touisteur|3 years ago|reply
Nice, do you know whether this DSL was ever used successfully for other endeavours (abstract interpretation, lowering to Why3, any kind of symbolic execution, or a language server?). I've been looking for a 'flex/bison with complete semantics' and it might be a piece of the puzzle.
[+] UncleEntity|3 years ago|reply
The main difference I see is this project seems to be using C++ (plus C macros?) directly instead of a DSL. And some llvm magic thrown in for good measure.
[+] tromp|3 years ago|reply
Nice to see Haskell make an appearance in an article about Lua and C/C++:

> For example, at LLVM IR level, it is trivial to make a function use GHC calling convention (a convention with no callee-saved registers)

This refers to the following section of [1]

> “cc 10” - GHC convention: This calling convention has been implemented specifically for use by the Glasgow Haskell Compiler (GHC). It passes everything in registers, going to extremes to achieve this by disabling callee save registers. This calling convention should not be used lightly but only for specific situations such as an alternative to the register pinning performance technique often used when implementing functional programming languages. At the moment only X86 supports this convention and it has the following limitations:

> On X86-32 only supports up to 4 bit type parameters. No floating-point types are supported. On X86-64 only supports up to 10 bit type parameters and 6 floating-point parameters. This calling convention supports tail call optimization but requires both the caller and callee are using it.

[1] https://llvm.org/docs/LangRef.html#calling-conventions

[+] skissane|3 years ago|reply
I didn't realise LLVM supported such a wide variety of calling conventions.

It would be cool if they supported OpenVMS/x86-64 calling convention–even not on OpenVMS. Why? Well, OpenVMS calling convention has this cool little feature which nobody else seems to have – the parameter count to a function is passed in a register. What this means–people always ask "how can a C variadic function know how many parameters it was passed?" And the usual answer is – you need to add an argument to explicitly pass an argument count (the computation of which can be automated using preprocessor voodoo), or you need to pass some kind of printf-style format argument, or a sentinel value (terminal NULL), or whatever. "Why can't we just have a va_count() which tells us how many arguments we were passed?" "Not possible, per the calling convention, the caller doesn't tell us."

However, for OpenVMS, there actually is such a thing as va_count() – it pulls it from that extra register (the "Argument Information Register"–which also encodes some info on their types, so you can know whether each parameter is passed in an integer register or a floating point one.) Anyway, if LLVM/Clang/etc supported OpenVMS calling convention on other platforms, they could enjoy va_count() too, you'd just have to declare that as the calling convention for your function.

[+] bradrn|3 years ago|reply
Relatedly, it’s worth noting that there are really good bindings between Lua and Haskell [https://hslua.org/], used in e.g. Pandoc for plugins.
[+] gary_0|3 years ago|reply
I think there's a typo where the Add() example code goes `lhs.As<tDouble>() && rhs.As<tDouble>()`. I'm assuming that should be a `+`?
[+] LoganDark|3 years ago|reply
If there was such a typo, it doesn't exist anymore!
[+] jonathanyc|3 years ago|reply
Awesome that the result is backed by benchmarks. The writing style is very clear and having actual code snippets is great. Favorited!

My tl;dr understanding is that:

- writing a byte code interpreter in C++ is slow, and a big reason is callee-saved registers / the compiler doesn't optimize well when multiple operations are implemented in the same "giant switch table" function

- but it's annoying to write everything in assembly

- so the author made a compiler that glues together C++ implementations of byte code instructions (compiled to LLVM IR) into an interpreter, avoiding callee-saved registers (and performs other optimizations)

It'd be interesting to see ablation testing for the other optimizations, like fusing indices into opcodes. My bet would be that avoiding having to restore registers dominated the performance impact--that was the case for the compiler I implemented with friends in college.

[+] fasteo|3 years ago|reply
It is my understanding that the benchmarks are run against the LuaJIT interpreter (-j off), so here are some JIT numbers for reference, using the array3d.lua benchmark [1]:

  time luajit -j off array3d.lua 
  
  real 0m1,968s
  user 0m1,915s
  sys 0m0,052s
  
  time luajit -j on array3d.lua 
  
  real 0m0,128s
  user 0m0,068s
  sys 0m0,060s

[1] https://github.com/luajit-remake/luajit-remake/blob/master/l...
[+] ufo|3 years ago|reply
One caveat to pay attention to... Ever since LuaJIT and PUC-Lua diverged, PUC lua has made some significant changes to the garbage collector. I wonder if that might affect the comparison vs LuaJIT for memory intensive benchmarks such as binarytrees and tablesort.
[+] LoganDark|3 years ago|reply
As detailed in the article, the garbage collectors for all interpreters tested were disabled for all benchmarks because this interpreter does not yet have garbage collection.
[+] fwsgonzo|3 years ago|reply
I have been working on this too, and while I am not at all interested in embedding LLVM, the lack of custom calling conventions and the inability to make a direct jump are unfortunate.

My biggest pet peeve though is the inability to push unlikely stuff completely to the back of the program. I mean just put it away where nobody can see it. I also want to align code blocks to 1 byte, not 16 bytes. Not a single compiler lets me realign the instruction handlers in a way that compresses them down.

I also want to know what BOLT does that improves my interpreter by around 30%.

[+] sillycross|3 years ago|reply
Clang/LLVM accepts (little-known but documented) flags that let you align code blocks using a custom alignment, though it only works at file level. See [1].

However, I'm not sure if doing so is useful or necessary. Interpreter performance is sensitive to code layout (which affects hardware branch predictor accuracy), but I don't think there is a general way to optimize the code layout to make the branch predictor as happy as possible.

So if you changed your code alignment and saw a perf change, it's more likely caused by the random perf gain/loss due to code layout change, not because that 1-byte-alignment is better than 16-byte-alignment or vice versa.

[1] https://easyperf.net/blog/2018/01/25/Code_alignment_options_...

[+] chubot|3 years ago|reply
This seems like an awesome way of writing faster interpreters – i.e. not in assembly, but in C++ snippets you stitch together with a tool.

I did peek at the deegen tool a bit, and it seems quite large? https://github.com/luajit-remake/luajit-remake/tree/master/d...

I would be interested in an overview of all the analysis it has to do, which as I understand is basically “automated Mike Pall”

FWIW I think this is the hand-written equivalent with LuaJIT’s dynasm tool: https://github.com/LuaJIT/LuaJIT/blob/v2.1/src/vm_x64.dasc (just under 5000 lines)

Also there are several of these files with no apparent sharing, as you would get with deegen:

https://github.com/LuaJIT/LuaJIT/blob/v2.1/src/vm_x86.dasc

https://github.com/LuaJIT/LuaJIT/blob/v2.1/src/vm_ppc.dasc

[+] sigzero|3 years ago|reply
I wonder why the target was 5.1 since 5.4 has been out for a while now?
[+] scythe|3 years ago|reply
Lua 5.2 introduced a new scoping rule (_ENV) that impacts the ability to implement efficient interpreters, which is why LuaJIT diverged from the main implementation. Insofar as Lua is notoriously the "fastest scripting language", Lua 5.1 is the "fastest Lua". I personally hope that the author eventually targets "LuaJIT flavored Lua", which incorporates some additional features from newer Lua without compromising the performance.

Edit: see replies

[+] interpol_p|3 years ago|reply
Probably because that's where PUC Lua and LuaJIT diverged, would be interesting to see it on 5.4
[+] tomcam|3 years ago|reply
What’s a stack spill?
[+] toast0|3 years ago|reply
A stack spill is when a value is stored on the stack instead of kept in a register. For parameter passing it's unavoidable when the number of parameters exceeds the number of registers available for passing them (the extra parameters spill over onto the stack), or when the parameters don't fit into registers; more generally, the compiler spills whenever it has more live values than registers to hold them.
[+] krick|3 years ago|reply
I wonder what makes it so much worse on those 2 specific tasks, especially the k-nucleotide one.
[+] flyingsky|3 years ago|reply
> More importantly, it is the world’s fastest Lua interpreter to date, outperforming LuaJIT’s interpreter by 28% and the official Lua interpreter by 171% on average on a variety of benchmarks

Really huge, if true!

[+] JZL003|3 years ago|reply
wow this is a great article, so readable