> Rather than getting stuck in front-end minutiae, the tutorial goes straight to generating working assembly code, from very early on.
I think this is important, and for a more sophisticated compiler design I find Ghuloum's approach very appealing [1]: build a very simple subset of the language from top to bottom, then grow the meat gradually.
A really great book following this approach that I discovered recently is [2]. Although I don't think C and x86 are the best targets for your first compiler, it's still a very good book for writing one.
[1] http://scheme2006.cs.uchicago.edu/11-ghuloum.pdf
[2] https://norasandler.com/2024/08/20/The-Book-Is-Here.html
Yeah, I think this is one of the (few, rare) cases where the "official" academic way of teaching the subject is actually baggage and not really aligned with what's practically useful.
Compiler courses are structured like that because parsing really was the most important part, but I'd say in the "modern" world once you have a clear idea of how parsing actually works, it's more important to understand how compilers implement language features.
Even if you want to implement a compiler yourself, "Claude, please generate a recursive descent parser for this grammar" is close to working one-shot.
This is something I've noticed about academic vs. "practicing" coders. Academics tend to build in layers (though not always), while "practicing" coders tend to build in pipes, give or take. The layers approach might give you buildable code, but it's hard to exercise and run. Both approaches can work, though, especially if you build in executable chunks, but you have to focus on the smallest chunk you can actually run.
This article sums it up perfectly. I was interested in building a compiler long before going to college and this was the most accessible body of work.
Building a recursive descent parser from scratch was an eye opener to 17yo me on how a seemingly very complex problem that I had no idea how to approach can be made simple by breaking it down into the right primitives.
https://en.wikipedia.org/wiki/Niklaus_Wirth
From the Publications section of that Wikipedia page:
>The April 1971 Communications of the ACM article "Program Development by Stepwise Refinement",[22][23] concerning the teaching of programming, is considered to be a classic text in software engineering.[24] The paper is considered to be the earliest work to formally outline the top-down method for designing programs.[25][26] The article was discussed by Fred Brooks in his influential book The Mythical Man-Month and was described as "seminal" in the ACM's brief biography of Wirth published in connection to his Turing Award.[27][28]
"breaking things down into the right primitives" is the real key to programming. There are many books and web pages about algorithms, but I wish there were more searchable and browsable resources for how to approach problems through primitives.
That's a good point - recursive descent as a general lesson in program design, in addition to being a good way to write a parser.
Table-driven parsers (using yacc etc.) used to be emphasized in old compiler-writing books such as Aho & Ullman's famous "dragon book" (named for its front cover). I'm not sure why - maybe partly efficiency for the slower computers of the day, and partly because in the infancy of computing a more theoretical/algorithmic approach seemed more sophisticated and preferable (the canonical table-driven parser construction algorithm, LR parsing, is one of Knuth's).
Nowadays it seems that recursive descent is the preferred approach for compilers because it's ultimately more practical and flexible. Table driven can still be a good option for small DSLs and simple parsing tasks, but recursive descent is so easy that it's hard to justify anything else, and LLM code generation now makes that truer than ever!
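To make that concrete, here is a minimal recursive-descent sketch (my own illustrative C++, not Crenshaw's Pascal): one function per grammar rule / precedence level, with a hypothetical expr/term/factor grammar and no whitespace handling:

```cpp
// Hypothetical grammar:
//   expr   := term (('+' | '-') term)*
//   term   := factor (('*' | '/') factor)*
//   factor := NUMBER | '(' expr ')'
#include <cctype>
#include <iostream>
#include <stdexcept>
#include <string>

struct Parser {
    std::string src;
    size_t pos = 0;

    char peek() { return pos < src.size() ? src[pos] : '\0'; }
    char get()  { return pos < src.size() ? src[pos++] : '\0'; }

    long factor() {
        if (std::isdigit(peek())) {                 // NUMBER
            long v = 0;
            while (std::isdigit(peek())) v = v * 10 + (get() - '0');
            return v;
        }
        if (peek() == '(') {                        // '(' expr ')'
            get();
            long v = expr();
            if (get() != ')') throw std::runtime_error("expected ')'");
            return v;
        }
        throw std::runtime_error("expected number or '('");
    }

    long term() {                                   // one function per precedence level
        long v = factor();
        while (peek() == '*' || peek() == '/') {
            char op = get();
            long rhs = factor();
            v = (op == '*') ? v * rhs : v / rhs;
        }
        return v;
    }

    long expr() {
        long v = term();
        while (peek() == '+' || peek() == '-') {
            char op = get();
            long rhs = term();
            v = (op == '+') ? v + rhs : v - rhs;
        }
        return v;
    }
};

int main() {
    Parser p{"2*(3+4)-5"};
    std::cout << p.expr() << "\n";                  // prints 9
}
```

Swapping the computed values for emitted instructions (or AST nodes) is a local change to each function, which is what makes recursive descent such a flexible starting point.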
There is a huge difference in complexity between building a full-blown commercial quality optimizing compiler and a toy one built as a learning exercise. Using something like LLVM as a starting point for a learning exercise doesn't seem very useful (unless your goal is to build real compilers) since it's doing all the heavy lifting for you.
I guess you can argue about how much can be cut out of a toy compiler for it still to be a useful learning exercise in both compilers and tackling complex problems, but I don't see any harm in going straight from parsing to code generation, cutting out AST building and of course any IR and optimization. The problems this direct approach causes for code generation, and optimization, can be a learning lesson for why a non-toy compiler uses those!
A fun approach I used at work once, wanting to support a pretty major C subset as the language supported by a programmable regression test tool, was even simpler ... Rather than having the recursive descent parser generate code, I just had it generate executable data structures - subclasses of Statement and Expression base classes, with virtual Execute() and Value() methods respectively, so that the parsed program could be run by calling program->Execute() on the top level object. The recursive descent functions just returned these statement or expression values directly. To give a flavor of it, the ForLoopStatement subclass held the initialization, test and increment expression class pointers, and then the ForLoopStatement::Execute() method could just call testExpression->Value() etc.
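A rough sketch of that "executable data structures" idea (the class and member names here are mine, not the original tool's): the recursive-descent functions would return these objects directly, and running the parsed program is just calling Execute() on the top-level statement:

```cpp
#include <iostream>
#include <memory>

struct Env { int vars[26] = {}; };    // toy environment: one slot per variable a..z

struct Expression {                   // the recursive-descent functions return these
    virtual ~Expression() = default;
    virtual int Value(Env&) const = 0;
};
struct Statement {
    virtual ~Statement() = default;
    virtual void Execute(Env&) const = 0;
};
using ExprPtr = std::unique_ptr<Expression>;
using StmtPtr = std::unique_ptr<Statement>;

struct Constant : Expression {
    int v;
    explicit Constant(int v) : v(v) {}
    int Value(Env&) const override { return v; }
};
struct Variable : Expression {
    char name;
    explicit Variable(char n) : name(n) {}
    int Value(Env& e) const override { return e.vars[name - 'a']; }
};
struct Binary : Expression {          // one node type covering '+', '-', '<'
    char op; ExprPtr lhs, rhs;
    Binary(char op, ExprPtr l, ExprPtr r) : op(op), lhs(std::move(l)), rhs(std::move(r)) {}
    int Value(Env& e) const override {
        int a = lhs->Value(e), b = rhs->Value(e);
        return op == '+' ? a + b : op == '-' ? a - b : a < b;
    }
};
struct Assign : Statement {
    char name; ExprPtr value;
    Assign(char n, ExprPtr v) : name(n), value(std::move(v)) {}
    void Execute(Env& e) const override { e.vars[name - 'a'] = value->Value(e); }
};
struct Print : Statement {
    ExprPtr value;
    explicit Print(ExprPtr v) : value(std::move(v)) {}
    void Execute(Env& e) const override { std::cout << value->Value(e) << "\n"; }
};
struct ForLoopStatement : Statement { // holds the init/test/increment parts directly
    StmtPtr init, increment; ExprPtr test; StmtPtr body;
    void Execute(Env& e) const override {
        for (init->Execute(e); test->Value(e); increment->Execute(e))
            body->Execute(e);
    }
};

int main() {
    // What the parser might build for: for (i = 0; i < 3; i = i + 1) print(i);
    auto loop = std::make_unique<ForLoopStatement>();
    loop->init = std::make_unique<Assign>('i', std::make_unique<Constant>(0));
    loop->test = std::make_unique<Binary>('<', std::make_unique<Variable>('i'),
                                          std::make_unique<Constant>(3));
    loop->increment = std::make_unique<Assign>('i',
        std::make_unique<Binary>('+', std::make_unique<Variable>('i'),
                                 std::make_unique<Constant>(1)));
    loop->body = std::make_unique<Print>(std::make_unique<Variable>('i'));
    Env env;
    loop->Execute(env);               // prints 0, 1, 2
}
```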
> Jack Crenshaw's tutorial takes the syntax-directed translation approach, where code is emitted while parsing, without having to divide the compiler into explicit phases with IRs.
Is "syntax-directed translation" just another term for a single-pass compiler, e.g. as used by Lua (albeit to generate bytecode instead of assembly / machine code)? Or is it something more specific?
> in the latter parts of the tutorial it starts showing its limitations. Especially once we get to types [...] it's easy to generate working code; it's just not easy to generate optimal code
So, using a single-pass compiler for a statically-typed language makes it difficult to apply type-based optimizations. (Of course, Lua sidesteps this problem because the language is dynamically typed.)
Are there any other downsides? Does single-pass compilation also restrict the level of type checking that can be performed?
As long as your target language has a strict define-before-use rule and no advanced inference is required, you will know the types of expressions and can perform type-based optimizations. You can also do constant folding and (very rudimentary) inlining. But the best optimizations are done on IRs, which you don't have access to in an old-school single-pass design. LICM, CSE, GVN, DCE (loop-invariant code motion, common subexpression elimination, global value numbering, dead code elimination) and all the countless loop opts are not available to you. You'll also spill to memory a lot, because you can't run a decent register allocator in a single pass.
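To illustrate what a single pass can still buy you, here is a small sketch (my own, not from Crenshaw's tutorial) of syntax-directed translation to a made-up stack machine: each parse function either emits code immediately or returns a literal that the caller can fold before anything is emitted:

```cpp
#include <cctype>
#include <iostream>
#include <optional>
#include <string>

struct OnePass {
    std::string src;
    size_t pos = 0;

    char peek() { return pos < src.size() ? src[pos] : '\0'; }
    char get()  { return pos < src.size() ? src[pos++] : '\0'; }

    // A sub-expression is either a compile-time constant (nothing emitted yet)
    // or a value already sitting on the runtime stack.
    struct Operand { std::optional<long> constant; };

    void load(const Operand& o) {                   // force a value onto the stack
        if (o.constant) std::cout << "  push " << *o.constant << "\n";
    }

    Operand primary() {
        if (std::isdigit(peek())) {
            long v = 0;
            while (std::isdigit(peek())) v = v * 10 + (get() - '0');
            return {v};                             // stays foldable for now
        }
        char name = get();                          // single-letter variable
        std::cout << "  load " << name << "\n";
        return {std::nullopt};
    }

    Operand expr() {                                // expr := primary (('+'|'-') primary)*
        Operand lhs = primary();
        while (peek() == '+' || peek() == '-') {
            char op = get();
            if (lhs.constant && std::isdigit(peek())) {
                Operand rhs = primary();            // literal op literal: fold, emit nothing
                lhs.constant = (op == '+') ? *lhs.constant + *rhs.constant
                                           : *lhs.constant - *rhs.constant;
            } else {
                load(lhs);                          // flush lhs before rhs can emit code
                lhs.constant.reset();
                Operand rhs = primary();
                load(rhs);
                std::cout << ((op == '+') ? "  add\n" : "  sub\n");
            }
        }
        return lhs;
    }
};

int main() {
    OnePass p{"1+2+x+4"};
    OnePass::Operand result = p.expr();
    p.load(result);                                 // flush a trailing constant, if any
    // Emits: push 3 / load x / add / push 4 / add  ("1+2" was folded at compile time)
}
```

Anything beyond this kind of peephole-level cleverness - LICM, CSE, a real register allocator - wants an IR you can revisit, which is exactly the limitation described above.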
I'm actually a big fan of function-by-function dual-pass compilation. You generate IR from the parser in one pass, and do codegen right after. Most intermediate state is thrown out (including the AST, for non-polymorphic functions) and you move on to the next function. This gives you an extremely fast, data-oriented baseline compiler with reasonable codegen (much better than something like tcc).
Pros:
* uses Python and recursive descent parsing
* separates front and backend via an IR
* generates ELF binaries (either x86 or ARM)
* meant for real world use
Cons:
* more complex
* not written in a tutorial style
LLVM makes it so much easier to build a compiler - it's not even funny. Whenever I use it, I feel like I'm just arranging some rocks on top of a pyramid.
Using LLVM is an indirect approach that will limit the quality of your compiler.
When one looks at languages that use LLVM as a backend, there is one consistent property: slow compilation. Because of how widespread LLVM is, we often seem to accept this as a fact of life and that we are forced to make a choice between fast runtime code and a fast compiler. This is a false choice.
Look at two somewhat recent languages that use LLVM as a backend: Zig and Rust. The former has acknowledged that LLVM is an albatross and is in the process of writing its own backends to escape its limitations. The latter is burdened with ridiculous compilation times that will never get meaningfully better so long as it avoids writing its own backend.
Personally, I find LLVM a quite disempowering technology. It creates the impression that its complexity is necessary for quality and performance and makes people dependent on it instead of developing their own skills. This is not entirely dissimilar to another hot technology with almost the same initials.
Crenshaw's tutorial doesn't quite reach the point where it becomes useful, but precedence-climbing is a rather straightforward refactoring of recursive descent that becomes extremely useful when there are many precedence levels.
Naturally, the author has also written an article about it, albeit without the explanation of the refactoring that links it to RD:
https://eli.thegreenplace.net/2012/08/02/parsing-expressions...
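For reference, a minimal precedence-climbing sketch (illustrative code, not taken from the linked article): the expr/term/factor chain collapses into one parse_expr(min_prec) plus an operator-precedence table, so adding a precedence level is a table entry rather than a new function:

```cpp
#include <cctype>
#include <iostream>
#include <map>
#include <stdexcept>
#include <string>

struct Climber {
    std::string src;
    size_t pos = 0;
    std::map<char, int> prec{{'+', 1}, {'-', 1}, {'*', 2}, {'/', 2}};

    char peek() { return pos < src.size() ? src[pos] : '\0'; }
    char get()  { return pos < src.size() ? src[pos++] : '\0'; }

    long parse_atom() {
        if (peek() == '(') {
            get();
            long v = parse_expr(1);
            if (get() != ')') throw std::runtime_error("expected ')'");
            return v;
        }
        long v = 0;
        while (std::isdigit(peek())) v = v * 10 + (get() - '0');
        return v;
    }

    long parse_expr(int min_prec) {
        long lhs = parse_atom();
        // keep consuming operators at or above the minimum precedence
        while (prec.count(peek()) && prec[peek()] >= min_prec) {
            char op = get();
            // left-associative: the right side must bind strictly tighter
            long rhs = parse_expr(prec[op] + 1);
            switch (op) {
                case '+': lhs += rhs; break;
                case '-': lhs -= rhs; break;
                case '*': lhs *= rhs; break;
                case '/': lhs /= rhs; break;
            }
        }
        return lhs;
    }
};

int main() {
    Climber c{"2*(3+4)-6/2"};
    std::cout << c.parse_expr(1) << "\n";   // prints 11
}
```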
Following similar reasoning, I wound up writing a tiny optimizing-compiler tutorial that only explains how to write the middle and back end of a compiler: https://github.com/bollu/tiny-optimising-compiler
I printed this out (so I could have it with me everywhere) and read it when I was younger. It was so cool to see it come together so quickly. Some of these works (this one, Beej's guides, etc.) are among the best CS documentation we have and don't get nearly the credit they deserve.
> Rather than getting stuck in front-end minutiae, the tutorial goes straight to generating working assembly code, from very early on
Good summary.
I had no background in compilers or related theory but read Jack Crenshaw's Let's Build a Compiler tutorials some time ago. My main takeaway from reading half a dozen or so of them was that building a simple compiler for a toy language was a small project well within my grasp and ability, not a huge undertaking that required mastery of esoteric prerequisites or a large amount of planning.
I got a lot of enjoyment messing about with toy compiler projects related to Brainfuck.
Why Brainfuck? It's a beautiful little toy language. Brainfuck has 8 instructions, each a single character, so parsing reduces to getting a char and switching on it. I guess it depends on what you want to explore - if you want to focus on writing recursive descent parsers, it's not the best choice!
One initial project could be to compile (transpile) from Brainfuck source to C source. You can do this as a source-to-source compiler without any internal representation by transforming each Brainfuck operation into a corresponding C statement. Brainfuck is specified in terms of a single fixed-length array of bytes and a pointer - an index into that array - that can be moved around, plus basic manipulations of the byte it is pointing at. So on the C side you need two variables: the array, and an index into it that serves as the pointer.
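A sketch of that first project (assuming the conventional 30,000-cell tape; bracket matching and cell wrap-around are left unchecked): each Brainfuck operation becomes one C statement, and '['/']' map directly onto a C while loop:

```cpp
#include <iostream>
#include <string>

void bf_to_c(const std::string& bf) {
    std::cout << "#include <stdio.h>\n"
                 "unsigned char tape[30000];\n"
                 "int main(void) {\n"
                 "    int p = 0;\n";                // the data pointer is an index
    for (char op : bf) {
        switch (op) {
            case '>': std::cout << "    p++;\n"; break;
            case '<': std::cout << "    p--;\n"; break;
            case '+': std::cout << "    tape[p]++;\n"; break;
            case '-': std::cout << "    tape[p]--;\n"; break;
            case '.': std::cout << "    putchar(tape[p]);\n"; break;
            case ',': std::cout << "    tape[p] = getchar();\n"; break;
            case '[': std::cout << "    while (tape[p]) {\n"; break;
            case ']': std::cout << "    }\n"; break;
            default: break;                         // everything else is a comment
        }
    }
    std::cout << "    return 0;\n}\n";
}

int main() {
    // prints 'A' (8 * 8 + 1 = 65) when the emitted C is compiled and run
    bf_to_c("++++++++[>++++++++<-]>+.");
}
```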
A second project could be compiling from Brainfuck to assembly language, skipping C. You'd need to read a few tutorials/reference docs about your chosen assembly language and learn how to run the assembler to compile tiny assembly programs into native executables. You could explore some examples of what output assembly programs you get when you compile small Brainfuck programs to C and then compile those C programs to assembly. You could write a direct source to source compiler without an internal representation, where each Brainfuck operation is directly mapped to a snippet of assembly instructions. Once you've got this working, you can compile a Brainfuck program into an assembly program, and then use the usual toolchain to assemble that into a native executable and run it.
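And a sketch of the second project, emitting x86-64 assembly for Linux directly (AT&T syntax for GNU as; the register choice, label scheme, and syscall-based I/O are my own assumptions): the only bookkeeping beyond the C version is a stack of labels to match each '[' with its ']':

```cpp
// Assemble and link the output with: as -o bf.o bf.s && ld -o bf bf.o
#include <iostream>
#include <string>
#include <vector>

void bf_to_asm(const std::string& bf) {
    std::cout << ".bss\ntape: .skip 30000\n"
                 ".text\n.globl _start\n_start:\n"
                 "    lea tape(%rip), %rbx\n";      // %rbx holds the data pointer
    std::vector<int> loops;                         // stack of open '[' labels
    int next_label = 0;
    for (char op : bf) {
        switch (op) {
            case '>': std::cout << "    inc %rbx\n"; break;
            case '<': std::cout << "    dec %rbx\n"; break;
            case '+': std::cout << "    incb (%rbx)\n"; break;
            case '-': std::cout << "    decb (%rbx)\n"; break;
            case '.':                               // write(1, rbx, 1)
                std::cout << "    mov $1, %rax\n    mov $1, %rdi\n"
                             "    mov %rbx, %rsi\n    mov $1, %rdx\n    syscall\n";
                break;
            case ',':                               // read(0, rbx, 1)
                std::cout << "    xor %rax, %rax\n    xor %rdi, %rdi\n"
                             "    mov %rbx, %rsi\n    mov $1, %rdx\n    syscall\n";
                break;
            case '[': {
                int l = next_label++;
                loops.push_back(l);
                std::cout << "L" << l << "_start:\n"
                          << "    cmpb $0, (%rbx)\n"
                          << "    je L" << l << "_end\n";
                break;
            }
            case ']': {                             // brackets assumed well matched
                int l = loops.back(); loops.pop_back();
                std::cout << "    jmp L" << l << "_start\n"
                          << "L" << l << "_end:\n";
                break;
            }
        }
    }
    std::cout << "    mov $60, %rax\n    xor %rdi, %rdi\n    syscall\n";  // exit(0)
}

int main() {
    bf_to_asm("++++++++[>++++++++<-]>+.");          // emitted program prints 'A'
}
```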
There's also lots of projects in another direction, treating Brainfuck as the target language. Imagine that your job is to write Brainfuck programs for a CPU that natively executes Brainfuck. Try writing a few tiny Brainfuck programs by hand and savour how trying to do almost anything involves solving horrible little puzzles. Maybe it'd be much easier to do your job if you, the Brainfuck programmer, didn't have to manually track which index of the array is used to store what. You could invent a higher level language supporting concepts like local variables, where you could add two local variables together and store the results in a third local variable! Maybe you could allow the programmer to define and call their own functions! Maybe you could support `if` blocks, comparisons! You could have a compiler that manages the book-keeping of memory allocation and mapping complex high level abstractions such as integer addition into native Brainfuck concepts of adding one to things and moving left or right. Projects in this direction let you explore more stuff about parsers (the input syntax for your higher level language is richer), internal representations, scopes and so on.
anthk|2 months ago
The T3X language is very Pascal-like and fun to use (and portable: DOS/Win/Unix/CP/M...).
Also, as an intro, with Zenlisp you can get a mini CS-101 a la SICP or CACS, but simpler and explained in a much easier way.
anthk|2 months ago
https://github.com/howerj/subleq/
As a goodie you can run Eforth on top, which almost writes itself. Compiler, interpreter, editor, IDE and a Sokoban, all in a simple VM.
Let's scale. Mu808/n808. Interpreters in C and AWK, a compiler in Python.
https://codeberg.org/luxferre/n808
You have the exact assembly algorithm in the page. What you see is what you get. Now, for real, I'd suggest getting the lvltl (VTL-02) interpreter written in C for a "bigger" language running not just under a VM, but on small machines and simulators such as the 6502-based KIM-1 and Apple 1. With that "not enough to be called BASIC" language, a Sokoban might be written with a bit of patience.
pjmlp|2 months ago
A single-pass compiler can still split the various phases, and only do the code generation in the last phase.
dang|2 months ago
Let's Build a Compiler (1988) - https://news.ycombinator.com/item?id=38773049 - Dec 2023 (15 comments)
Let's Build a Compiler (1988) - https://news.ycombinator.com/item?id=36054416 - May 2023 (19 comments)
Let’s Build a Compiler (1995) - https://news.ycombinator.com/item?id=22346532 - Feb 2020 (41 comments)
Let's Build a Compiler (1995) - https://news.ycombinator.com/item?id=19890918 - May 2019 (18 comments)
Let’s Build a Compiler (1995) - https://news.ycombinator.com/item?id=6641117 - Oct 2013 (56 comments)
Let’s Build a Compiler (1995) - https://news.ycombinator.com/item?id=1727004 - Sept 2010 (17 comments)
Let’s Build a Compiler (1995) - https://news.ycombinator.com/item?id=232024 - June 2008 (5 comments and already complaining about reposts)
Let's build a compiler (dated, but very good) - https://news.ycombinator.com/item?id=63004 - Oct 2007 (2 comments)
It seems there aren't any (interesting) others? I expected more.
But there is this bonus:
An Interview with Jack Crenshaw, Author of the “Let’s Build a Compiler” - https://news.ycombinator.com/item?id=9502977 - May 2015 (0 comments, but good article!)
pjmlp|2 months ago
Which, by the way, is still active: https://compilers.iecc.com/index.phtml
cxr|2 months ago
<https://web.archive.org/web/20210208162458/https://www.cs.co...>
Discussed several times on HN: <https://hn.algolia.com/?query=cs6120%20advanced%20compilers>
(And discussion about the blog post from last year about the IL used in the course: <https://news.ycombinator.com/item?id=41084318>.)