> Instead, we'll be single-pass: code generation happens during parsing
IIRC, C was specifically designed to allow single-pass compilation, right? I.e. in many languages you don't know what needs to be output without parsing the full AST, but in C, syntax directly implies semantics. I think I remember hearing this was because early computers couldn't necessarily fit the AST for an entire code file in memory at once
It explains the memory limits and what happened :)
> After the TMG version of B was working, Thompson rewrote B in itself (a bootstrapping step). During development, he continually struggled against memory limitations: each language addition inflated the compiler so it could barely fit, but each rewrite taking advantage of the feature reduced its size. For example, B introduced generalized assignment operators, using x=+y to add y to x. The notation came from Algol 68 [Wijngaarden 75] via McIlroy, who had incorporated it into his version of TMG. (In B and early C, the operator was spelled =+ instead of += ; this mistake, repaired in 1976, was induced by a seductively easy way of handling the first form in B's lexical analyzer.)
You're exactly right. This makes for a small, memory-efficient compiler. But this entails a lot of compromises that we're not willing to put up with anymore, because there's no longer a reason to.
I'm not sure, haven't looked at the codebases of old compilers in a long time. Definitely a lot of the language is pretty amenable to it, especially if you have unstructured jumps for e.g. the for advancement statement. I had a distinct feeling while writing the compiler every time I added a new feature that "wow, the semantics work exactly how I'd like them to for ease of implementation."
Compare that to, say, Rust, which would be pretty painful to single-pass compile with all the non-local behavior around traits.
In C, nothing you have not parsed yet (what lies to the right) is necessary for what you've already parsed (what lies to the left). (Necessary for type checking it or translating it.)
E.g. to call a function later in the file, you need a prior declaration. Or else, an implicit one (possibly wrong) will be assumed from the call itself.
This is not true in some C++ situations, like class declarations. In a class definition there can be functions with bodies. Those are inline functions. The functions can freely refer to each other in either direction. Type checking a class declaration therefore requires all of it to be parsed.
A one pass language is advantageous even if you're building a serious multi-pass compiler with optimization. This is because that exercise doesn't require an AST! Multi-pass doesn't mean AST.
Building an AST doesn't require just more memory, but more code and development work: more complexity in the code to build the abstraction and to traverse it. It's useful if you need to manipulate or analyze the program in ways that are closely related to the source language. If the language cannot be checked in one pass, you might need one; you wouldn't want to be doing the checking on an intermediate representation, where you've lost the relationship to the code. AST building can be reused for other purposes, like code formatting, refactoring, communicating with an IDE for code completion and whatnot.
If the only thing you're going to do with an AST is walk it up and down to do some checks, and then to generate code, and you do all that in an order that could have been done without the AST (like a bottom-up, left to right traversal), then it was kind of a waste to construct it; those checks and generation could have been done as the phrase structure rules were parsed.
I once read that this is why the MSVC compiler didn't support proper two-phase template name lookup until very recently: the original compiler implemented templates almost like a macro that re-emitted a stream of tokens with the template parameters replaced.
I can't say if that was a design goal, but it sure looks like it. That's also the way to avoid scaling compiler memory use to program size.
At first I thought that it wasn't possible for C. After thinking about it, though: as long as you disallow forward references and rely on a single source file as input, it's possible to compile a complete C program in one pass. Anything else requires a preprocessor (e.g. "#include") and/or linker (e.g. "extern" and prototypes) to solve. The implementation in the article dodges all of these and focuses on a very pure subset of C.
I made a similar project in TypeScript [1]: basically a multipass compiler that generates x86 assembly, compiles it to a binary, and runs it. The worst parts were the register allocator, designing the IR, and the assembler.

[1] https://github.com/Mati365/ts-c-compiler
Ooh, this is cool! Using WASM let me avoid writing a register allocator (though I probably would have just used the stack if I had targeted x86/ARM since I wasn't going for speed).
I am pretty certain the following is a valid "for"-loop translation:
    block
      ;; code for "i = 0"
      loop
        ;; code for "i < 5"
        i32.eqz
        br_if 1
        i32.const 1
        loop
          if
            ;; code for "i = i + 1"
            br 2
          else
          end
          ;; code for "j = j * 2 + 1"
          i32.const 0
        end
      end
    end
It doesn't require cloning the lexer so probably would still fit in 500 lines? But yeah, in normal assembly it's way easier, even in one-pass:
      ;; code for "i = 0"
    .loop_test:
      ;; code for "i < 5"
      jz .loop_end
      jmp .loop_body
    .loop_incr:
      ;; code for "i = i + 1"
      jmp .loop_test
    .loop_body:
      ;; code for "j = j * 2 + 1"
      jmp .loop_incr
    .loop_end:
Of course, normally you'd want to re-arrange things like so:
      ;; code for "i = 0"
      jmp .loop_test
    .loop_body:
      ;; code for "j = j * 2 + 1"
    .loop_incr:
      ;; code for "i = i + 1"
    .loop_test:
      ;; code for "i < 5"
      jnz .loop_body
    .loop_end:
I propose the better loop syntax for languages with one-pass implementations, then: "for (i = 0) { j = j * 2 + 1; } (i = i + 1; i < 5);" :)
Oh, interesting--I remember messing around with flags on the stack but was having issues with the WASM analyzer (it doesn't like possible inconsistencies with the number of parameters left on the stack between blocks). I think your solution might get around that, though!
Somewhat unrelated question, but I think one of the more difficult things about learning C for coders who are used to scripting languages is getting your head around how the various scalar data types like short, int, long, ... (and the unsigned versions of each) are represented, how they relate to each other, and how they relate to the platform.
I am wondering if this complexity exists due to historical reasons, in other words if you were to invent C today you would just define int as always being 32, long as 64 and provide much more sane and well-defined rules on how the various datatypes relate to each other, without losing anything of what makes C a popular low-level language?
> if you were to invent C today you would just define int as always being 32, long as 64 and provide much more sane and well-defined rules on how the various datatypes relate to each other, without losing anything of what makes C a popular low-level language?
You'd lose something because those decisions would be impractical for 8-bit and 16-bit targets (which still exist in the world of embedded programming).
The int was supposed to be the native word size: 16-bit on the 286 and earlier, 32-bit on the 386 and later, and 64-bit on x64. Except, of course, int has been 32 bits on x86 (the single most important ISA for just as long) for so long, and short has been 16 bits for even longer, that moving to a 64-bit int and a 32-bit short (which is what x64 is naturally suited for) was just impossible. So it didn't happen, and we're stuck with the LP64 (on Linux) and LLP64 (on Windows) data models.
The simple version is that there are two use cases - the world where you want the size of types to match the target (e.g. int) and the world where sizes are defined by the coder (uint32_t). You want to handle both of those.
That's a nice theory and is what we've got, but it falls down in a few places.
The first is that the "int" world has got a bit munged - some platforms make some slightly strange choices for long and short and so you can't always rely on it (although int is usually pretty sensible).
The other is that when doing unsigned maths, rollover is silent, so you generally need to know the exact size at coding time to ensure that rollover doesn't happen unnoticed.
Together, these mean that you're generally just better using uint32_t (etc.) all over the place and you get more predictable results.
I learnt C about a decade ago (after using scripting languages for 10 years prior) and just stuck with using the uint values, no second thoughts about how big a uint32_t is.
It is interesting to think that 500 lines of code is something one can write in one or two days. But writing a C compiler in 500 lines of comprehensible code (even in Python) is a challenge in itself that may take months after a few years of solid learning.
I wonder if this is a good path to becoming an extremely productive developer. If someone spends time developing projects like this, but for different areas... a kernel, a compressor, a renderer, a multimedia/network stack, AI/ML... will that turn a good dev into a 0.1 Bellard?
at the very least it'll remove a lot of 'magic' from programming. Today a lot of people seem to be not so fond of university education but I'm personally very glad it made me go through implementing a shell, a compiler, a little toy kernel and so on.
The feeling that you write code somewhere in the skies and have no idea how something works underneath has always really bugged me when I've used something.
It does remind me of a project [1] Andrej Karpathy did, writing a neural network and training code in ~600 lines (although networks have easier logic to code than a compiler).

[1] https://github.com/karpathy/nanoGPT
> But writing a C compiler in 500 lines of comprehensible code (even in Python) is a challenge in itself that may take months after a few years of solid learning.
The people behind this project avoided that caveat by simply not implementing C. Apparently they kept a bit of the syntax but then proceeded to cherry-pick features that suited them and not make an effort to even try to comply with any version of the standard.
As an experienced developer who did not do a compilers course at university I was able to write a SQL/JSONPath evaluator in TypeScript in a week or so. I don’t expect a minimal C compiler would be that much more complex.
Essentially all you need is a grammar, a parser library, and a couple of tree walkers over the AST: first to expand macros, then to convert to assembly.
A production compiler with all its optimisation steps is of course far more complex and more modern languages have many more features, but C is really pretty simple (the K&R book is concise and good!) as it was built to work on the computers of half a century ago.
- demystifies compilers, interpreters, linkers/loaders and related systems software, which you now understand. This understanding will no doubt one day help in your debugging efforts;
- elevates you to become a higher level developer: you are now a tool smith who can make their own language if needed (e.g. to create domain specific languages embedded in larger systems you architect).
So congratulations, on top of other forms of abstraction, you have mastered meta-linguistic abstraction (see the latter part of Structure and Interpretation of Computer Programs, preferably the 1st or 2nd ed.).
I dunno. I did a compiler writing course once, writing a compiler for a subset of Pascal in Ada, generating a kind of quasi assembly. It was a team project. I did most of the codegen and static optimisation.
It was super fun and interesting. But I wouldn't say it was a terribly useful exercise that has greatly enriched me as a programmer.
And somehow I have ended up with a very strong bias against DSLs.
> [Building parse trees] is really great, good engineering, best practices, recommended by experts, etc. But... it takes too much code, so we can't do it.
It takes too much code in Python. (Not a phrase one gets to say often, but it’s generally true for tree processing code.) In, say, SML this sort of thing is wonderfully concise.
Oh, C4 is neat—technically it has me beat since it also implements the VM to run the code—though their formatting definitely takes advantage of long lines :-)
These kinds of posts are one of the things that keeps me coming back to HN. Right when I start thinking I'm a professional badass for implementing several features with great well tested code in record time, I stumble along posts like this that set me in my place.
This is crazy cool! Esolangs have been a hobby of mine, (more just an interest lately, since I haven't built any in a while,) so this is like a fun code golf game for compilation. Nice work, and even better, nice explanation article!
I am really confused by what people call compilers nowadays. This is now a compiler that takes input text and generates output text, which then gets read by a compiler that takes input text and generates JIT code for execution.
This is more of a transpiler than an actual compiler.
Am I missing something?
To quote the great Bob Nystrom's Crafting Interpreters, "Compiling is an implementation technique that involves translating a source language to some other — usually lower-level — form. When you generate bytecode or machine code, you are compiling. When you transpile to another high-level language, you are compiling too."
Nowadays, people generally understand a compiler to be a program that reads, parses, and translates programs from one language to another. The fundamental structure of a machine code compiler and a WebAssembly compiler is virtually identical -- would this project somehow be more of a "real" compiler if instead of generating text it generated binary that encoded the exact same information? Would it become a "real" compiler if someone built a machine that runs on WebAssembly instead of running it virtually?
The popular opinion is that splitting hairs about this is useless, and the definition of a compiler has thus relaxed to include "transpilers" as well as machine code targeting compilers (at least in my dev circles).
> Notably, it doesn't support:
> structs :-( would be possible with more code, the fundamentals were there, I just couldn't squeeze it in
> enums / unions
> preprocessor directives (this would probably be 500 lines by itself...)
> floating point. would also be possible, the wasm_type stuff is in, again just couldn't squeeze it in
> 8 byte types (long/long long or double)
> some other small things like pre/post increments, in-place initialization, etc., which just didn't quite fit
> any sort of standard library or i/o that isn't returning an integer from main()
> casting expressions
Well, I set the 500 line budget up front, and that was really as much as I could fit with reasonable formatting. I'll be excited to see your 500 line C compiler supporting all those features once it's done ;-)
tptacek|2 years ago
https://www.blackhat.com/presentations/win-usa-04/bh-win-04-...
(minus directly emitting opcodes, and fitting into 500 lines, of course.)
vgel|2 years ago
IMO, being under X lines of code is part of the readability—10,000 lines of code is hard to approach no matter how readable it otherwise is.
muth02446|2 years ago
http://cwerg.org
WalterBright|2 years ago
http://www.trs-80.org/tiny-pascal/
I figured out the basics of how a compiler works by going through it line by line.
vgel|2 years ago
I didn't see a link to the source in the article, but this seems to be it: https://sourceforge.net/p/tiny-pascal/code/HEAD/tree/NorthSt...
pitherpather|2 years ago
Off topic, but a log scale might be useful: 0.1 Bellard --> -10 deciBellards. That allows for: 0.001 Bellard --> -30 deciBellards.
Problem: Programmers with negative productivity cannot be represented on the same log scale.
iudqnolq|2 years ago
https://www.amazon.com/500-Lines-Less-Amy-Brown/dp/132987127...
nn3|2 years ago
C4x86 | 0.6K (very close)
small C (x86) | 3.1K
Ritchie's earliest struct compiler | 2.3K
v7 Unix C compiler | 10.2K
chibicc | 8.4K
Biederman's romcc | 25.0K
cnity|2 years ago
https://gitlab.com/spritely/guile-hoot
pjmlp|2 years ago
RatC did not need 500 lines for its preprocessor support, by the way.
MaxBarraclough|2 years ago
https://root.cern.ch/root/html534/guides/users-guide/CINT.ht...