
Understanding the Go Compiler: The Linker

171 points | valyala | 21 days ago | internals-for-interns.com

48 comments


jjcm|16 days ago

This is entirely tangential to the article, but I've been coding in golang for going on 5 years now.

For four of those years, I was a reluctant user. In the last year I’ve grown to love golang for backend web work.

I find it to be one of the most bulletproof languages for agentic coding. I have two main hypotheses as to why:

- very solid corpus of well-written code for training data. Compare this to vanilla JS or PHP - I find agents do a very poor job with both of these, due to what I suspect is poorly written code they've been trained on.

- extremely self-documenting, due to structs giving agents really solid context on the shape of the data

In any file an agent is making edits in, it has all the context it needs in the file, and it has training data that shows how to edit it with great best practices.

My main gripe with go used to be that it was overly verbose, but now I actually find that to be a benefit as it greatly helps agents. Would recommend trying it out for your next project if you haven’t given it a spin.

JetSetIlly|16 days ago

Interesting. I've only dipped my toe in the AI waters but my initial experience with a Go project wasn't good.

I tried out the latest Claude model last weekend. As a test I asked it to identify areas for performance improvement in one of my projects. One of the areas looked significant and truth be told, was an area I expected to see in the list.

I asked it to implement the fix. It was a dozen or so lines and I could see straightaway that it had introduced a race condition. I tested it and sure enough, there was a race condition.

I told it about the problem and it suggested a further fix that didn't solve the race condition at all. In fact, the second fix only tried to hide the problem.

I don't doubt you can use these tools well, but it's far too easy to use them poorly. There are no guard rails. I also believe that they are marketed without any care that they can be used poorly.

Whether Go is a better language for agentic programming or not, I don't know. But it may be to do with what the language is being used for. My example was a desktop GUI application and there'll be far fewer examples of those types of application written in Go.

reactordev|16 days ago

Go’s design philosophy actually aligns with AI’s current limitations very well.

AI has trouble with deep complexity; Go is simple by design, with usually only one or two correct ways to do things. Architecturally you can design your src however you like, but there's a pretty well established standard.

epolanski|15 days ago

I don't believe the "corpus" argument that much.

I have been extending the Elm language with Effect semantics (à la ZIO/Rio/Effect-ts) for a new language called Eelm (extended-Elm or effectful-elm). Both Haskell (the language the Elm compiler is written in) and Eelm (the target language, now with some new fancy capabilities) shouldn't have a particularly relevant corpus of code.

Yet, my experiments show that Opus 4.6 is terrific at understanding and authoring both Haskell and Eelm.

Why? I think it stems from the properties of these languages themselves: no mutability makes the code easier to reason about, fully statically typed, excellent compiler and diagnostics. On top of that, the syntax is rather small.

jespino|16 days ago

Two things make it work so well with agents: Go is a language focused on simplicity, and gofmt plus the Go coding style mean that almost all Go code looks familiar, because everyone writes code in a very consistent style. Those two things make the experience pleasant and the LLM's work easier.
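A quick illustration of the consistency point (file name made up): gofmt rewrites any source file into the one canonical style, so code from different authors, or different models, converges on the same shape.

```shell
# Write a deliberately mis-formatted Go file, then let gofmt normalize it.
cat > demo.go <<'EOF'
package main
import "fmt"
func main(){fmt.Println("hello")}
EOF

gofmt -w demo.go
cat demo.go      # now in canonical Go style
gofmt -l demo.go # prints nothing: the file already matches gofmt output
```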

hippo22|15 days ago

I have had good experience with Go, but I've also had good results with TypeScript. Compile-time checks are very important to getting good results. I don't think the simplicity of the language matters as much as the LLM being generally aware of the language via training data and being able to validate the output via compilation.

oncallthrow|16 days ago

Yeah in my experience Claude is significantly better at writing go than other languages I’ve tried (Python, typescript)

tejinderss|16 days ago

I wonder what the experience is like writing Rust or Zig with LLMs. I suspect Zig might not have enough training data, and Rust might struggle with compile times and the extra context required by the borrow checker.

dizhn|16 days ago

I'm having similarly good results with go and agents. Another good language for it is flutter/dart in my experience.

KingOfCoders|16 days ago

Perfectly happy with Go, my "Go should do X" / "Go should have Y" days are over.

But if I could have a little wish, "cargo check" would be it.

12345hn6789|15 days ago

Enums is mine.

Going on year 4 at $DAY_JOB, and just last week we had a case where enums and also union types would have made things simpler.

Surac|16 days ago

I can see no difference from an ordinary linker. Anyone care to explain it to me?

jespino|16 days ago

Yes, it is not especially different from other linkers. It has some extra tasks when building the final binary, including emitting special sections, and it is more aware of the specifics of the Go language. But there is nothing extremely different from other linkers. The whole point of the series is to explain a real compiler, and in general most parts of the Go compiler are widely used in other languages: SSA, ASTs, escape analysis, inlining...

gregwebs|16 days ago

The difference is that Go has its own linker rather than using a system linker. Another article could explain the benefits of tighter integration and the drawbacks of this approach. Having its own toolchain is, I assume, part of what enables Go's easy cross compilation.

jenoer|16 days ago

What is there to explain? The author did not claim there is a difference in the article.

pjmlp|16 days ago

Why should it be one?

cloudhead|16 days ago

The title is misleading

hbogert|16 days ago

I always have the unfounded feeling that the Go compiler/linker does not remove dead code. Go binaries have a large minimum size. TinyGo, in contrast, can make awesomely small binaries.

clktmr|16 days ago

It's pretty good at dead code elimination. The size of Go binaries is in large part due to the runtime implementation. Remove a bunch of the runtime's features (profiling, stack traces, sysmon, optimizations that avoid allocations, maybe even multithreading...) and you'll end up with much smaller binaries. I would love it if there were a build tag like "runtime_tiny" that provided such an implementation.

jrockway|15 days ago

I think it depends on the codebase. There are some reflection calls you can make that cause dead code elimination to fail, though I believe it's less easy to run into than it was a few years ago. One common dependency, at least in my line of work, is the Kubernetes API, and it manages both to be gigantic and to trigger this edge case (last I looked), so yeah, the binaries end up pretty big.

Another thing people run into is that big binaries mean slow container startup times. That time is mostly spent in gzip: if you use Zstandard layers instead of gzip layers, startup time improves. gzip decompression is actually very slow, and the OCI spec no longer mandates it.

gethly|16 days ago

Go has a runtime. That alone is over a megabyte. TinyGo, on the other hand, has a very limited (smaller) runtime. In other words, you don't know what you're talking about.

vlinx|16 days ago

It's always fascinating to dive into the internals of the Go linker. One aspect I've found particularly clever is how it handles static linking by default, bundling everything into a single binary.

MisterTea|15 days ago

The Go tooling is heavily based on the Inferno toolchain, which was based on the highly portable Plan 9 toolchain. Plan 9 is statically linked by default: dynamic libraries are supported but not implemented anywhere. The idea was that libraries should instead be implemented as a service that runs locally or on a remote machine.

piinbinary|15 days ago

I'm impressed with how approachable the explanation is!

yuritomanek|15 days ago

I don't use Go as much as I probably should but when I do I thoroughly enjoy it.

high_na_euv|15 days ago

Why not skip the linker entirely and generate a single optimized exe file?

jespino|15 days ago

In fact it does generate a single optimized exe file, but it does so in multiple steps, for multiple reasons. One is separation of concerns, but one of the main reasons is speed: the linker links (normally statically) already-built, cached libraries, including the runtime. Without the linking step, you would need to compile everything every time. Beyond that, the linker has other responsibilities, like building some metadata that goes into the binary, for example the dynamic dispatch table.