This is entirely tangential to the article, but I’ve been coding in golang for going on five years now.
For four of those years, I was a reluctant user. In the last year I’ve grown to love golang for backend web work.
I find it to be one of the most bulletproof languages for agentic coding. I have two main hypotheses as to why:
- a very solid corpus of well-written code as training data. Compare this to vanilla JS or PHP - I find agents do a very poor job with both of these, due to what I suspect is the poorly written code they’ve been trained on.
- extremely self documenting, due to structs giving agents really solid context on what the shape of the data is
In any file an agent is making edits in, it has all the context it needs in the file, and it has training data that shows how to edit it with great best practices.
My main gripe with go used to be that it was overly verbose, but now I actually find that to be a benefit as it greatly helps agents. Would recommend trying it out for your next project if you haven’t given it a spin.
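To illustrate the self-documenting-structs point, here is a minimal sketch (the types and fields below are hypothetical, not from any real project): the struct definitions and the function signature alone tell an agent the exact shape of the data, without context from other files.

```go
package main

import "fmt"

// The struct definitions spell out the shape of the data in full,
// right in the file where it is edited.
type User struct {
	ID    int64
	Name  string
	Email string
}

type UpdateEmailRequest struct {
	UserID   int64
	NewEmail string
}

// The signature is self-describing: a User and a request go in,
// an updated User comes out.
func applyEmailUpdate(u User, req UpdateEmailRequest) User {
	u.Email = req.NewEmail
	return u
}

func main() {
	u := User{ID: 1, Name: "Ada", Email: "old@example.com"}
	u = applyEmailUpdate(u, UpdateEmailRequest{UserID: 1, NewEmail: "new@example.com"})
	fmt.Println(u.Email)
}
```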
Interesting. I've only dipped my toe in the AI waters but my initial experience with a Go project wasn't good.
I tried out the latest Claude model last weekend. As a test I asked it to identify areas for performance improvement in one of my projects. One of the areas looked significant and truth be told, was an area I expected to see in the list.
I asked it to implement the fix. It was a dozen or so lines and I could see straightaway that it had introduced a race condition. I tested it and sure enough, there was a race condition.
I told it about the problem and it suggested a further fix that didn't solve the race condition at all. In fact, the second fix only tried to hide the problem.
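For illustration only (this is not the commenter's code), the classic shape of such a race in Go is an unsynchronized shared counter; `go run -race` flags the first version, and a mutex is the straightforward fix rather than anything that merely hides the symptom:

```go
package main

import (
	"fmt"
	"sync"
)

// racyCount increments a shared counter from many goroutines with no
// synchronization. Running it under the race detector reports a data race.
func racyCount(workers, iters int) int {
	n := 0
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < iters; i++ {
				n++ // unsynchronized write: data race
			}
		}()
	}
	wg.Wait()
	return n
}

// safeCount is the actual fix: a mutex serializes access to n.
func safeCount(workers, iters int) int {
	n := 0
	var mu sync.Mutex
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < iters; i++ {
				mu.Lock()
				n++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	return n
}

func main() {
	fmt.Println(safeCount(8, 1000)) // deterministically 8000
}
```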
I don't doubt you can use these tools well, but it's far too easy to use them poorly. There are no guard rails. I also believe that they are marketed without any care that they can be used poorly.
Whether Go is a better language for agentic programming or not, I don't know. But it may be to do with what the language is being used for. My example was a desktop GUI application and there'll be far fewer examples of those types of application written in Go.
Go’s design philosophy actually aligns with AI’s current limitations very well.
AI has trouble with deep complexity; Go is simple by design, with usually only one or two correct ways to write a given instruction. Architecturally you can design your source however you like, but there’s a pretty well-established standard.
I have been extending the Elm language with effect semantics (à la ZIO/Rio/Effect-ts) for a new language called Eelm (extended-Elm or effectful-Elm), and both Haskell (the language the Elm compiler is written in) and Eelm (the target language, now with some fancy new capabilities) shouldn't have a particularly relevant corpus of code.
Yet, my experiments show that Opus 4.6 is terrific at understanding and authoring both Haskell and Eelm.
Why? I think it stems from the properties of these languages themselves: immutability makes them easier to reason about, they're fully statically typed, and they have excellent compilers and diagnostics. On top of that, the syntax is rather small.
Two things make Go work so well with agents: the language is focused on simplicity, and gofmt plus the Go coding style mean almost all Go code looks familiar, because everyone writes code in a very consistent style. Those two things make the experience pleasant and the work easier for the LLM.
I have had good experience with Go, but I've also had good results with TypeScript. Compile-time checks are very important to getting good results. I don't think the simplicity of the language matters as much as the LLM being generally aware of the language via training data and being able to validate the output via compilation.
I wonder what the experience is like writing Rust or Zig with LLMs. I suspect Zig might not have enough training data, and Rust might struggle with compile times and the extra context required by the borrow checker.
Yes, it is not especially different from other linkers. It has some extra tasks when building the final binary, including emitting special sections, and it is more aware of the specifics of the Go language. But there is nothing extremely different from other linkers. The whole point of the series is to explain a real compiler, and in general most parts of the Go compiler are very widely used in other languages: SSA, AST, escape analysis, inlining...
The difference is that Go has its own linker rather than using a system linker. Another article could explain the benefits of tighter integration and the drawbacks of this approach. Having its own toolchain I assume is part of what enables the easy cross compilation of Go.
I always have the unfounded feeling that the Go compiler/linker does not remove dead code. Go binaries have a large minimum size. TinyGo, in contrast, can produce awesomely small binaries.
It's pretty good at dead code elimination. The size of Go binaries is in large part due to the runtime implementation. Remove a bunch of the runtime's features (profiling, stack traces, sysmon, optimizations that avoid allocations, maybe even multithreading...) and you'd end up with much smaller binaries. I would love it if there were a build tag like "runtime_tiny" that provided such an implementation.
I think it depends on the codebase. There are some reflection calls you can make that cause dead code elimination to fail, though I believe it's less easy to run into than it was a few years ago. One common dependency, at least in my line of work, is the Kubernetes API, and it manages both to be gigantic and to trigger this edge case (last I looked), so yeah, the binaries end up pretty big.
Another thing that people run into is big binaries = slow container startup times. This time is mostly spent in gzip. If you use Zstandard layers instead of gzip layers, startup time is improved. gzip decompression is actually very slow, and the OCI spec no longer mandates it.
Go has a runtime. That alone is over a megabyte. TinyGo, on the other hand, has a very limited (smaller) runtime. In other words, you don't know what you're talking about.
It's always fascinating to dive into the internals of the Go linker. One aspect I've found particularly clever is how it handles static linking by default, bundling everything into a single binary.
The Go tooling is heavily based on the Inferno toolchain, which was based on the highly portable Plan 9 toolchain. Plan 9 is statically linked by default; dynamic libraries are supported but not implemented anywhere. The idea was that libraries should instead be implemented as a service running locally or on a remote machine.
In fact it generates a single optimized executable, but it does so in multiple steps, for multiple reasons. One is separation of concerns, but one of the main reasons is speed. The linker links (normally statically) different already-built, cached libraries, including the runtime. Without the linking step, you would need to compile everything every time. Beyond that, the linker has other responsibilities, like building some of the metadata that goes into the binary, for example the dynamic dispatch tables.
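The dynamic dispatch metadata mentioned above is what backs ordinary interface calls: at a call site like the one below, the concrete method is found through an interface table laid out by the toolchain rather than resolved statically. A minimal illustration (the types are made up for the example):

```go
package main

import "fmt"

type Shape interface {
	Area() float64
}

type Square struct{ Side float64 }

func (s Square) Area() float64 { return s.Side * s.Side }

// describe only sees the interface; the call to Area goes through
// the dispatch table the toolchain builds for (Square, Shape).
func describe(s Shape) float64 {
	return s.Area()
}

func main() {
	fmt.Println(describe(Square{Side: 3}))
}
```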
KingOfCoders|16 days ago
But if I could have a little wish, "cargo check" would be it.
12345hn6789|15 days ago
Going on year 4 working at $DAY_JOB and just last week we had a case where enums and also union types would have made things simpler.