
Writing An Interpreter In Go (2016)

305 points | tambourine_man | 6 years ago | interpreterbook.com

60 comments

[+] stevekemp|6 years ago|reply
I had a lot of fun with this book, after belatedly buying it. I've read a couple of similar online-books in the past, but this was the first one I worked my way through completely.

As a result I later had a lot of fun writing a simple compiler for a reverse-polish calculator, to generate AMD64 assembly language https://github.com/skx/math-compiler/ , wrote a simple BASIC interpreter https://github.com/skx/gobasic , and put together a couple of DSL projects which followed a similar approach.
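(The core of a reverse-polish calculator is a single stack. As a rough illustration of the idea — my own sketch, nothing to do with the math-compiler code above, which emits AMD64 assembly rather than evaluating directly:)

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// evalRPN evaluates a reverse-polish expression such as "3 4 + 2 *".
// Operands are pushed onto a stack; each operator pops two operands
// and pushes the result back.
func evalRPN(input string) (float64, error) {
	var stack []float64
	for _, tok := range strings.Fields(input) {
		switch tok {
		case "+", "-", "*", "/":
			if len(stack) < 2 {
				return 0, fmt.Errorf("stack underflow at %q", tok)
			}
			b, a := stack[len(stack)-1], stack[len(stack)-2]
			stack = stack[:len(stack)-2]
			switch tok {
			case "+":
				stack = append(stack, a+b)
			case "-":
				stack = append(stack, a-b)
			case "*":
				stack = append(stack, a*b)
			case "/":
				stack = append(stack, a/b)
			}
		default:
			n, err := strconv.ParseFloat(tok, 64)
			if err != nil {
				return 0, err
			}
			stack = append(stack, n)
		}
	}
	if len(stack) != 1 {
		return 0, fmt.Errorf("leftover operands: %v", stack)
	}
	return stack[0], nil
}

func main() {
	result, _ := evalRPN("3 4 + 2 *")
	fmt.Println(result) // (3+4)*2 = 14
}
```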

All in all I'd recommend reading it, and definitely working through the code. Once you're done you can continue to experiment - adding support for regular expressions, a standard-library, etc, etc.

My version of the language is here and contains a fair number of improvements/additions. After covering the core in such detail it wasn't hard to extend further, even if that was primarily for fun, rather than for a real purpose:

https://github.com/skx/monkey/

[+] schnitzelstoat|6 years ago|reply
Can anyone recommend other books similar to this one?

Having done the Nand2Tetris course and started the Ray Tracer Challenge book I find I really like these books that guide you through a pretty complex project.

It helps you learn by doing while at the same time preventing you from falling into bad practices or getting overwhelmed.

[+] misternugget|6 years ago|reply
I haven't worked through it (yet!), but I've read parts of, and only heard good things about, Bob Nystrom's Crafting Interpreters [0]: http://craftinginterpreters.com/

If you like Scheme/Racket, I can also recommend Beautiful Racket [1]. That was quite a dose for my macro-loving brain.

Then I also recommend the "Let's Build A Compiler" series of blog posts [2] that roughly follows Abdulaziz Ghuloum's relatively famous (amongst fellow compiler fans) paper "An Incremental Approach to Compiler Construction" [3]. I've followed that series and paper for the past three months and built a Scheme to x86 compiler in Scheme. That was a lot of fun!

[0]: http://craftinginterpreters.com/ [1]: https://beautifulracket.com/ [2]: https://generalproblem.net/lets_build_a_compiler/01-starting... [3]: http://lambda-the-ultimate.org/node/1752

[+] cbzbc|6 years ago|reply
There's also https://en.wikibooks.org/wiki/Write_Yourself_a_Scheme_in_48_...

Learning Haskell via writing a Scheme interpreter.

As you say you've been through a couple of these books/guides before, I just wondered how you would characterise your learning experience, in terms of how much you think you've learned, whether you would have learned the same via other means and so on?

[+] azhenley|6 years ago|reply
Highly recommend this book. After reading it (and the compiler book by the same author), I started my own compiler for Knox, a Go-like language. It has been a fun project, but I made some bad decisions regarding the type checker and how I store the AST, so I'm currently procrastinating fixing those. You can check out the source here:

https://github.com/AZHenley/knox/

[+] voidhorse|6 years ago|reply
What timing. I just started implementing an interpreter in Go this past weekend. I'd been following Bob Nystrom's https://www.craftinginterpreters.com, translating the Java into Go as well as comparing my impl to the Go packages scanner, parser, etc. (https://golang.org/pkg/go/) It'll be very interesting to compare my own approaches with Thorsten Ball's to see if my use of idioms and intuitions about certain things were on the right track.
[+] lghh|6 years ago|reply
I'm about 1/4 of the way through it. I enjoy it so far. My only criticism is that I wish it had something in the way of exercises. It's a lot of copying (typing) code while the book explains what the code is doing and the concepts behind what you've written. That's a good approach, but I also enjoy thinking through things on my own a little and then being told how the author would have done it and why. Otherwise very good!
[+] tiuPapa|6 years ago|reply
Lol, ebooks should have regional pricing like games. Buying these books means I'd have to shell out a third of my total monthly income (I'm a student in a third-world country). Makes me wonder if there's a market for a paid service that makes copies of books available for a monthly rate (kind of like a paid ebook library with region locking and a special format that can only be read by the app with a constant internet connection - Netflix for books, if you will). But I guess it would be hard to do because it's just text, and if there were money in this, Amazon would have done it already.
[+] throwaway40324|6 years ago|reply
Pick up a free trial at safaribooksonline / oreilly. Perhaps someone, or your institution, would sponsor a paid 50 USD per month account.
[+] Denzel|6 years ago|reply
The author is very friendly and responsive. I'm sure he'll respond to your concerns if you reach out to him.
[+] ojosilva|6 years ago|reply
I find it interesting that the Monkey language, created solely for the examples implemented in this book, looks strikingly similar to a simplified EcmaScript (ES2015+ actually). Is ES/JS the best example of a parseable language? Why not just shoot for a (subset) ES interpreter instead of a made up language that does not resonate as much with the reader?
[+] skybrian|6 years ago|reply
Using a fantasy language lets you avoid anything that makes the implementation complicated or tedious.
[+] reubensutton|6 years ago|reply
I worked through this soon after it came out and I enjoyed it a great deal
[+] giancarlostoro|6 years ago|reply
Have you read other books with similar goals? How does this compare? I've been meaning to write an interpreter / compiler but haven't found the time / right resources. This one seems interesting because I've grown to love Go more lately, and how much you get out of the box. I might just try it, but I'm curious about the experience of someone who was in my shoes when they read it.
[+] mindv0rtex|6 years ago|reply
As someone who just recently started learning Rust, I'd love to see a book like this that uses it as the implementation language instead.
[+] TeeWEE|6 years ago|reply
I'm curious about this chapter:

"Why not a parser generator"

Seems like the way to go these days? Why write it yourself if you can generate it from EBNF notation?

[+] misternugget|6 years ago|reply
Author here. I'm going to be super shameless here and quote myself in that chapter:

> [...] we are here to learn, we want to understand how parsers work. And it’s my opinion that the best way to do that is by getting our hands dirty and writing a parser ourselves. Also, I think it’s immense fun.

In other words: use a parser generator when you need a working parser, quick. But if you want to learn how parsers work, what ASTs are, what "top-down parsing" means, etc. then I recommend you write your own. It's not that hard once the concept has "clicked" and, again, it's a ton of fun :)
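(To give a feel for what "writing a parser ourselves" means, here is a toy precedence-climbing sketch in Go, in the spirit of the book's Pratt parser. This is my own illustration, not the book's code: it tokenizes nothing and builds s-expression strings instead of real AST nodes, just to show the core loop.)

```go
package main

import "fmt"

// parser walks a pre-split token slice.
type parser struct {
	toks []string
	pos  int
}

func (p *parser) peek() string {
	if p.pos < len(p.toks) {
		return p.toks[p.pos]
	}
	return ""
}

func (p *parser) next() string {
	t := p.peek()
	p.pos++
	return t
}

var precedence = map[string]int{"+": 1, "*": 2}

// parseExpr parses an expression, consuming operators whose precedence
// is greater than minPrec -- the heart of Pratt/precedence-climbing parsing.
func (p *parser) parseExpr(minPrec int) string {
	left := p.next() // an operand token
	for {
		op := p.peek()
		prec, ok := precedence[op]
		if !ok || prec <= minPrec {
			return left
		}
		p.next()
		right := p.parseExpr(prec)
		// Build an s-expression string instead of a real AST, to keep it short.
		left = fmt.Sprintf("(%s %s %s)", op, left, right)
	}
}

func main() {
	p := &parser{toks: []string{"1", "+", "2", "*", "3"}}
	fmt.Println(p.parseExpr(0)) // (+ 1 (* 2 3)) -- "*" binds tighter than "+"
}
```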

[+] spinningslate|6 years ago|reply
He discussed that on the Corecursive podcast episode [0].

> Why not a parser generator

IIRC it was about "minimising magic" - showing what's actually involved, in native code. Bob Nystrom makes the same decision in "Crafting Interpreters" [1]. Quoting [2]:

"Many other language books and language implementations use tools like Lex and Yacc, “compiler-compilers” to automatically generate some of the source files for an implementation from some higher level description. There are pros and cons to tools like those, and strong opinions—some might say religious convictions—on both sides.

We will abstain from using them here. I want to ensure there are no dark corners where magic and confusion can hide, so we’ll write everything by hand. As you’ll see, it’s not as bad as it sounds and it means you really will understand each line of code and how both interpreters work."

[0] https://corecursive.com/037-thorsten-ball-compilers/

[1] https://craftinginterpreters.com/

[2] https://craftinginterpreters.com/introduction.html#the-code

[+] azhenley|6 years ago|reply
One of the C# developers commented [1] on HN that they use a hand-written parser for three reasons: incremental re-parsing, better error reporting, and resilient parsing (i.e., you still get a syntax tree even if something doesn't parse).

[1] https://news.ycombinator.com/item?id=13915150

[+] wyufro|6 years ago|reply
In my experience the hardest part of using parser generators is getting them to produce understandable error messages.

Also, it's questionable how much they add. Parsing text isn't all that difficult; the hard part is getting the tree structure of the AST right.

All in all, parsing is quite a small part of a compiler. I've converted from a parser generator to hand-written code in two different projects, and both actually ended up with fewer lines of code after the conversion, if only barely.

[+] chrisseaton|6 years ago|reply
> Seems like the way to go these days?

No I think people use parser generators less these days.

> Why write it yourself if you can generate it from EBNF notation?

Can you write EBNF for it? Can you generate a parser from the EBNF? EBNF and most parser generators are designed for context-free languages. Most languages are context-sensitive. You can see where things start to go wrong...

[+] MrBuddyCasino|6 years ago|reply
If you are super familiar with EBNF notation, it may be the fastest way. Otherwise I like parser combinators for their simplicity, you just need to understand the host language.
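(A parser combinator is just a function from input to a value, the remaining input, and a match flag; bigger parsers are built by composing smaller ones. A minimal sketch in Go, with made-up names, not any real combinator library:)

```go
package main

import (
	"fmt"
	"strings"
)

// Parser consumes a prefix of its input and returns the parsed value,
// the remaining input, and whether it matched.
type Parser func(input string) (value string, rest string, ok bool)

// Literal matches an exact string.
func Literal(s string) Parser {
	return func(input string) (string, string, bool) {
		if strings.HasPrefix(input, s) {
			return s, input[len(s):], true
		}
		return "", input, false
	}
}

// Seq runs parsers in order, concatenating their results; it fails
// (and consumes nothing) if any sub-parser fails.
func Seq(ps ...Parser) Parser {
	return func(input string) (string, string, bool) {
		var out strings.Builder
		rest := input
		for _, p := range ps {
			v, r, ok := p(rest)
			if !ok {
				return "", input, false
			}
			out.WriteString(v)
			rest = r
		}
		return out.String(), rest, true
	}
}

// Or tries alternatives in order until one matches.
func Or(ps ...Parser) Parser {
	return func(input string) (string, string, bool) {
		for _, p := range ps {
			if v, r, ok := p(input); ok {
				return v, r, true
			}
		}
		return "", input, false
	}
}

func main() {
	greeting := Seq(Or(Literal("hello"), Literal("hi")), Literal(" world"))
	v, _, ok := greeting("hi world!")
	fmt.Println(v, ok) // hi world true
}
```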
[+] azhenley|6 years ago|reply
The C# and Go parsers do not use a parser generator, iirc.
[+] towndrunk|6 years ago|reply
I really enjoyed this book. In fact, after reading through it I wrote some code to tokenize TypeScript with TypeScript. It was a fun little project even though I didn't really go all the way with it.
[+] workthrowaway|6 years ago|reply
funny, i was listening to the go time podcast during my commute this morning and they had the author on...

the author seems to suggest that one should start with an interpreter, then move to a compiler. (read the interpreter book, then read the compiler book...)

i think it really depends on what you are doing. in fact, python is a good example: it compiles to bytecode that is then interpreted, so it has both the compiling thing going on and the whole VM thing. i believe java is similar. what i am trying to say is that the line is very blurry nowadays.

[+] jonathanstrange|6 years ago|reply
I'm not an expert, just a hobby language designer, but it seems to me that the advantage of starting with an interpreter first is that this allows you to more easily design a language in which compile-time expressions have some reasonable expressivity. Generally, an interpreter allows the designer to experiment more with the semantics. Typical examples are evaluation of expressions for constant definitions at compile time or having built-in access and language support (e.g. syntactic sugar) to 3rd party libraries like arbitrary precision arithmetic.

The disadvantage is that it can be a lot of work. From my experience, writing a toy VM in Ada didn't seem much easier than writing a compiler in a toolkit like LLVM. (But I've never done the latter.)

[+] tom_mellior|6 years ago|reply
Compiling to a bytecode like Python or javac is very different from compiling to machine code like GCC or the JVM.

I think a progression like AST interpreter -> bytecode compiler and interpreter -> machine-code compiler is a reasonable way to go. Starting with AST -> machine code is possible of course (that's what university compiler courses often do), but if you are explicitly interested in both compilers and interpreters, doing interpreters first makes more sense to me. It's a steady progression from higher to lower level.
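(The first stage of that progression fits in a few lines of Go: a tree-walking interpreter where each node of the AST knows how to evaluate itself. Hypothetical node types, purely for illustration:)

```go
package main

import "fmt"

// Node is anything in the AST that can be evaluated directly,
// with no bytecode or machine code in between.
type Node interface{ Eval() int }

type Num struct{ Value int }

func (n Num) Eval() int { return n.Value }

type Add struct{ Left, Right Node }

func (a Add) Eval() int { return a.Left.Eval() + a.Right.Eval() }

type Mul struct{ Left, Right Node }

func (m Mul) Eval() int { return m.Left.Eval() * m.Right.Eval() }

func main() {
	// (2 + 3) * 4 as an AST, evaluated by walking the tree.
	ast := Mul{Add{Num{2}, Num{3}}, Num{4}}
	fmt.Println(ast.Eval()) // 20
}
```

The later stages replace Eval with a compile step: first emitting bytecode for a small VM, then emitting machine code.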

[+] vmchale|6 years ago|reply

[deleted]

[+] segmondy|6 years ago|reply
Most languages are. IMHO, the best language for writing a compiler or interpreter is Prolog.