Super-Flat ASTs

100 points | mmphosis | 2 months ago | jhwlr.io

26 comments

mitchellh|2 months ago

For a good example of this sort of pattern in the real world, take a look at the Zig compiler source code. I'm sure others might do it but Zig definitely does. I have a now very outdated series on some of the Zig internals: https://mitchellh.com/zig/parser And Andrew's old DoD talk is very good and relevant to this: https://vimeo.com/649009599

More generally, I believe it's fair to call this a form of handle-based design: https://en.wikipedia.org/wiki/Handle_(computing) Handles are EXTREMELY useful for a variety of reasons and imo woefully underused above the lowest system level.

sestep|2 months ago

My hypothesis is that handles are underused because programming languages make it very easy to dereference a pointer (you just need the pointer) whereas "dereferencing" a handle requires also having the lookup table in hand at the same time, and that little bit of extra friction is too much for most people. It's not that pointers don't require extra machinery to be dereferenced, it's just that that machinery (virtual memory) is managed by the operating system, and so it's invisible in the language.

My current research is about how to make handles just as convenient to use as pointers are, via a form of context: like a souped-up version of context in Odin or Jai if one is familiar with those, or like a souped-up version of coeffects if one has a more academic background.

pedrozieg|2 months ago

What I like about this writeup is that it surfaces a tension most “let’s build a compiler” tutorials skip: the AST is both a data structure and a UX boundary. Super-flat layouts are fantastic for cache and memory, but they’re hostile to all the things humans care about (debuggable shapes, easy instrumentation, ad-hoc traversals, “just print this node and its children” in a debugger). A lot of production compilers quietly solve that by having two tiers: a nice, inefficient tree for diagnostics and early passes, and increasingly flattened / interned / arena-allocated forms as you move toward optimization and codegen.

The interesting question for me is where the crossover is now that IDEs and incremental compilation dominate the workload. If your front-end is effectively a long-running service, it might be worth keeping a friendlier AST around and only using a super-flat representation for hot paths like analysis passes or bulk refactors. Otherwise you risk saving a few hundred MB while spending engineer-months making every new pass fight the layout.

loeg|2 months ago

What about this representation is hostile to humans and ad-hoc traversals? Don't convenience "getters" basically solve usability?

munificent|2 months ago

It looks like, overall, this design gets the parser about twice as fast as a simple one that creates tree-like ASTs.

That's not nothing. But a parser is rarely the most time-intensive part of a production compiler. And the parser does get iterated on a lot in languages that are evolving and adding new syntax.

Given that, I'd be inclined to take the performance hit and stick with a simpler AST representation if that yields a more hackable, maintainable compiler front end.

benhoyt|2 months ago

That's a good caution. However, traversing a flat AST (iterating a "struct of arrays" rather than a pointer-based tree) is also going to be faster. So the next steps of the compiler, say type checking and code emitting, will also be faster. But how much, or whether it's worth it even then, I'm not sure.

exyi|2 months ago

Usually yes, but it's still a neat trick to be aware of. For interpreted scripting languages, parsing can actually be a significant slowdown. Even more so when we start going into text-based network protocols, which also need a parser (is CSS a programming language or a network protocol? :) )

uaksom|2 months ago

(author here) I agree that it's a lot of complexity, and I acknowledge this in the article: You can get quite far with just a bump allocator.

I didn't go into this at all, but the main benefit of this design is how well it interacts with CPU cache. This has almost no effect on the parser, because you're typically just writing the AST, not reading it. I believe that subsequent stages benefit much more from faster traversal.

(By the way, I am a huge fan of your work. Crafting interpreters was my introduction to programming languages!)

mediumdeviation|2 months ago

For anyone confused about why the text says performance improves between graphs while the lines don't seem to show it: both the color assigned to each key and the scale change between graphs.

shoo|2 months ago

It'd also have been interesting to see some overall profiling data of the initial program & some discussion of which optimisations to investigate based on that profiling data.

When investigating performance issues it's often very helpful to run with profiling instrumentation enabled and start by looking at some top-down "cumulative sum" profiler output to get a big-picture view of which functions/phases are consuming most of the running time, to see where it may be worth spending some effort.

Getting familiar with Linux's perf [1] tool is also helpful, both in terms of interpreting summary statistics from perf stat (instructions per cycle, page faults, cache misses, etc.) that can give clues about what to focus on, and in being able to use it to annotate source line by line with time spent.

I'm not familiar with Rust, but e.g. the rustc compiler dev guide has a tutorial on how to profile rustc using perf [2]

[1] Brendan Gregg's Linux perf examples is an excellent place to start https://www.brendangregg.com/perf.html [2] https://rustc-dev-guide.rust-lang.org/profiling/with_perf.ht...

vatsachak|2 months ago

We need to flatten everything. Thanks for the great write up

MrNet32823|2 months ago

Why hasn't Mike Acton's data-oriented programming caught on outside game dev and niche languages?

torginus|2 months ago

Personally I think this is a neat trick for organizing memory, but don't these kinds of objects packed together in flat buffers bypass Rust's entire lifetime and safety mechanism?

I mean, if you make an off-by-one error on the indices, you're essentially reading another node's data.

loeg|2 months ago

This is a common argument about Rust. But unlike pointer confusion, index confusion still gets bounds checking from the containing collection, and you also avoid type confusion (the element at the wrong index still has the same type as the object you intended to access). So there are still some benefits.