
The simplicity of single-file Golang deployments

189 points | KingOfCoders | 3 years ago | amazingcto.com

225 comments

[+] drewg123|3 years ago|reply
The fact that it produces a single static binary is one of the nicest things about golang.

This used to be easy with C (on BSD & Linux) a long time ago, but then everything started to depend on various shared libs, which then depend on other libs, then things started to even dlopen libs behind your back so they didn't even show up in ldd, etc. Sigh.

[+] quaintdev|3 years ago|reply
> The fact that it produces a single static binary is one of the nicest things about golang.

Not only that, it can also cross-compile for different architectures and operating systems.
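For readers who haven't tried it, here is a sketch of what that looks like in practice (the /tmp paths, module name, and output names are invented, and it assumes the Go toolchain is on PATH):

```shell
# Skip the demo entirely if no Go toolchain is available.
command -v go >/dev/null 2>&1 || exit 0

# A throwaway module with a trivial main package.
mkdir -p /tmp/xcompile-demo && cd /tmp/xcompile-demo
cat > main.go <<'EOF'
package main

import "fmt"

func main() { fmt.Println("hello from a cross-compiled binary") }
EOF
go mod init example.com/xcompile-demo >/dev/null 2>&1 || true

# Same source, four targets: GOOS/GOARCH select the platform, and for
# pure-Go code no extra cross toolchains are needed.
for target in linux/amd64 linux/arm64 darwin/arm64 windows/amd64; do
  GOOS=${target%/*} GOARCH=${target#*/} go build -o "app-${target%/*}-${target#*/}" .
done
ls app-*
```

Each output is a complete binary for its platform; the build host never needs to match the target.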

[+] candiddevmike|3 years ago|reply
From what I have seen in major OSS projects like systemd and PostgreSQL, nothing seems to support static linking, to the point where some contributors get annoyed when you ask for it.

Seems like the C/C++ ecosystem will stay dynamically linked, even with a lot of the industry shifting towards statically linked, fat binaries as disk space is pretty cheap.

I wonder how much simpler Linux packaging would be if everything was statically linked...

[+] 1vuio0pswjnm7|3 years ago|reply
IME, using musl, compiling static binaries written in C is as easy as it was before the glibc changes and as it always has been on NetBSD. I compile static binaries written in C every day on Linux. I never encountered any problems compiling static binaries on BSD.
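A minimal illustration of the musl route (a sketch; assumes musl-gcc is installed, e.g. from a musl-tools package, and the file paths are invented):

```shell
# Skip if musl-gcc isn't installed.
command -v musl-gcc >/dev/null 2>&1 || exit 0

cat > /tmp/hello-static.c <<'EOF'
#include <stdio.h>

int main(void) {
    puts("hello, static world");
    return 0;
}
EOF

# -static links musl's libc into the binary itself; afterwards
# `ldd /tmp/hello-static` reports "not a dynamic executable".
musl-gcc -static -o /tmp/hello-static /tmp/hello-static.c
/tmp/hello-static
```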
[+] synergy20|3 years ago|reply
Very true. For Go though, if you need cgo it's hard to make a single executable; otherwise it is great.

I run a few Go apps, all are single executables, upgrading to new releases has never been easier.

If you have a network-oriented application, nothing beats Go as far as release and maintenance are concerned.

[+] caeril|3 years ago|reply
This is generally only true with CGO_ENABLED=0.

I've found many times that a Go binary, even one with no cgo code or cgo dependencies, will randomly require glibc on the target system to execute if you don't explicitly disable cgo in your build.
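A sketch of the fix (assumes a Linux host with the Go toolchain; the module name and paths are invented, and os/user here stands in for any stdlib package with a cgo-backed code path):

```shell
command -v go >/dev/null 2>&1 || exit 0

mkdir -p /tmp/cgo-demo && cd /tmp/cgo-demo
cat > main.go <<'EOF'
package main

import (
	"fmt"
	"os/user" // one of the packages that can pull in libc via cgo
)

func main() {
	u, err := user.Current()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(u.Username)
}
EOF
go mod init example.com/cgo-demo >/dev/null 2>&1 || true

# With cgo disabled, os/user and net fall back to pure-Go code paths,
# and the resulting binary has no libc dependency.
CGO_ENABLED=0 go build -o app-static .
```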

[+] shp0ngle|3 years ago|reply
You can still do that with musl, can't you? You can make a static C binary with that.
[+] masto|3 years ago|reply
When I was stuck doing a web application in Java 15 years ago, I hated everything about it except for the deployment story, which boiled down to a single .war file being pushed to the server.

When we upgraded to Perl, I liked that system so we designed deployment around "PAR" files in a similar way, bundling all of the dependencies together with the application in the CI build process, and I wrote a tiny bit of infrastructure that essentially moved a symlink to make the new version live.
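The symlink move is worth spelling out, since it's what makes the cutover nearly atomic (a sketch with invented paths):

```shell
# Each build is unpacked into its own directory; "current" is what the
# web server or service manager actually points at.
mkdir -p /tmp/deploy-demo/releases/v1 /tmp/deploy-demo/releases/v2
cd /tmp/deploy-demo
ln -sfn releases/v1 current

# Deploying v2 is a single symlink replacement; -n keeps ln from
# descending into the old target directory.
ln -sfn releases/v2 current
readlink current
```

Rolling back is the same operation pointed at the previous release directory.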

Google uses MPM and hermetic packaging of configuration with binaries: https://sre.google/sre-book/release-engineering/#packaging-8...

The way I see it, Docker is basically this same thing, generalized to be independent of the language/application/platform. As a practical matter, it still fundamentally has the "one file" nature.

I don't see what's special or better about compiling everything into a single binary, apart from fetishizing the executable format. In any system at scale, you still have to solve for the more important problems of managing the infrastructure. "I can deploy by scping the file from my workstation to the server" is kind of a late 90s throwback, but golang is a 70s throwback, so I guess it fits?

[+] marcosdumay|3 years ago|reply
> I don't see what's special or better about compiling everything into a single binary, apart from fetishizing the executable format.

When you distribute your software to other people, it cuts the step of installing the correct interpreter... at the cost of requiring the correct computer architecture.

It is very likely a gain.

[+] pjc50|3 years ago|reply
> I don't see what's special or better about compiling everything into a single binary, apart from fetishizing the executable format.

Indeed. If you think of the docker image itself as an executable format like PE or ELF, this becomes clearer. Rather than targeting the OS API, which has completely the wrong set of security abstractions because it's built around "users", it defines a new API layer.

> "I can deploy by scping the file from my workstation to the server"

I kind of miss cgi-bin. If we're ever to get back to a place where random "power users" can knock up a quick server to meet some computing need they have, easy deployment has to be a big part of that. Can we make it as easy to deploy as to post on Instagram?

[+] adql|3 years ago|reply
Well, system-wise a Go app is just a binary that only needs network access; it can be run directly from systemd with all permissions set there.

Docker is a bunch of file mounts and an app running in separate namespaces, so an extra daemon and extra layers of complexity. Of course, if you're already deploying other Docker apps it doesn't really matter, as you'd want that one binary in a Docker container anyway just to manage everything from the same place.
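A unit file for that kind of setup might look something like this (a sketch; the binary path, user name, and hardening choices are illustrative, not from the comment):

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=Example single-binary Go service
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
User=myapp
Restart=on-failure
# Lock the process down: it only needs network access.
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true

[Install]
WantedBy=multi-user.target
```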

[+] zer00eyz|3 years ago|reply
From the article:

"Standing here it looks like Docker was invented to manage dependencies for Python, Javascript and Java. It looks strange from a platform that deploys as one single binary."

Let me say the quiet part out loud: Docker is covering up the fact that we don't write deployable software any more.

Go isn't perfect either. The author isn't dealing with assets (images anyone?).

I think there is plenty of room for innovation here, and we're overdue for some change.

[+] twic|3 years ago|reply
I feel like I'm taking crazy pills (at a low dose) when I read this stuff.

I deploy Java applications. In a runnable condition, they aren't a single file, but they aren't many - maybe a dozen jars plus some scripts. Our build process puts all that in a tarball. Deployment comprises copying the tarball to the server, then unpacking it [1].

That is one step more than deploying a single binary, but it's a trivial step, and both steps are done by a release script, so there is a single user-visible step.

The additional pain associated with deploying a tarball rather than a single binary is negligible. It simply is not worth worrying about [2].

But Go enjoyers make such a big deal of this single binary! What am I missing?

Now, this post does talk about Docker. If you use Docker to deploy, then yes, that is more of a headache. But Docker is not the only alternative to a single binary! You can just deploy a tarball!

[1] We do deploy the JDK separately. We have a script which takes a local path to a JDK tarball and a hostname, and installs the JDK in the right place on the target machine. This is a bit caveman, and it might be better to use something like Ansible, or make custom OS packages for specific JDKs, or even use something like asdf. But we don't need to deploy JDKs very often, so the script works for us.

[2] Although if you insist, it's pretty easy to make a self-expanding-and-running zip, so you could have a single file if you really want: https://github.com/vmware-archive/executable-dist-plugin

[+] avg_dev|3 years ago|reply
I feel like I agree with the general ethos of the project. And I am also a fan of pragmatism; I think Fred Brooks referred to our trade as “toolsmiths” and I feel it is an apt word. Our work product exists solely to fill a need or to enable things that were not previously possible. I feel like I work hard not to be an idealist or to view well-written code as an end in itself.

But I must confess,

> Systemd also restarts the app daily to make sure it works properly long term

leaves me with a viscerally negative feeling. I feel like daemons should be able to run for years unless there is some kind of leak. Maybe I am wrong.

[+] speed_spread|3 years ago|reply
You can see the daily restart as a restartability test. Programs expecting to run forever may develop "bad habits".
[+] layer8|3 years ago|reply
They should, but you don’t know if they will, and, when they suddenly crash after two years, whether they will be able to restart. Restarting daily ensures that any problems will be caught early, and that the last known-good configuration is only a day ago and not two years ago.
[+] ed25519FUUU|3 years ago|reply
Static binaries and automatic code formatting (no debates on code format whatsoever) are two incredible qualities of Go that should be copied to every new language but for whatever reason are mostly left out.

And I’m not talking about “making a static binary in $LANG is easy, just follow these 7 steps…”; trust me, it’s nothing like Go.

[+] danwee|3 years ago|reply
How does one handle zero-downtime deployments with single-file Golang binaries? I tried this setup some time ago and couldn't cleanly achieve zero downtime when deploying a new version of my service. The reason was mainly port reuse: I couldn't have the old and the new version of my service running on the same port, so I started to hack something together and it got dirty pretty quickly. I'm talking about deploying a new version of the service on the same machine/server where the old version was running.
[+] klodolph|3 years ago|reply
Some of this is solved by using e.g. systemd, depending on your needs.

> I couldn't have the old and the new version of my service running on the same port...

You can, actually! You just can’t open the port twice by default. So one or both of the processes needs to inherit the port from a parent process, get passed the port over a socket (Unix sockets can transmit file descriptors), or use SO_REUSEPORT.

There are some libraries that abstract this, and some of this is provided by tools like systemd.

Some of this is probably going to have to be done in your application—like, once your new version starts, the old version should stop accepting new connections and finish the requests it has already started.

[+] adql|3 years ago|reply
Same way you do with any other app not specifically designed for it: you start two copies and put a load balancer in front. I did that via some systemd voodoo.

But TECHNICALLY, to do it in one process without an external proxy, you'd need to figure out how to set SO_REUSEPORT on the server's listening socket, then start the second instance before stopping the first.

Haven't actually tried it but someone apparently did: https://iximiuz.com/en/posts/go-net-http-setsockopt-example/

You'd still have any ongoing connections cut unless you unbind the socket and then finish any existing connections, which would be pretty hard with the default http server.

I just put an HAProxy instance on my VPS that does all of that, including only allowing traffic once the app says "yes, I am ok" in its healthcheck. Then the app can have a "shutting down" phase where it reports "I am down" on the healthcheck but still finishes any active connections to the client.

[+] hnarn|3 years ago|reply
This doesn’t sound Go-specific, if you use something like haproxy targeting multiple nodes you can take them down one by one to perform a rolling upgrade.
[+] bojanz|3 years ago|reply
Socket activation via systemd[0] is an option, assuming you are fine with certain requests taking a longer time to complete (if they arrive while the service is being restarted). Otherwise using a proxy in front of your app is your best bet (which has other benefits too, as you can offload TLS and request logging/instrumentation).

[0]: https://github.com/bojanz/httpx#systemd-setup
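On the application side, picking up a systemd-provided socket takes only a few lines of stdlib, following the sd_listen_fds convention (inherited fds start at fd 3, announced via the LISTEN_PID/LISTEN_FDS environment variables). A sketch that falls back to a normal listen when started by hand (the helper name is invented):

```go
package main

import (
	"fmt"
	"net"
	"os"
	"strconv"
)

// systemdListener returns the socket passed in by systemd socket
// activation if there is one, otherwise it opens addr itself.
func systemdListener(addr string) (net.Listener, error) {
	// LISTEN_PID names the process the fds were meant for; inherited
	// fds start at 3 per the sd_listen_fds convention.
	if pid, _ := strconv.Atoi(os.Getenv("LISTEN_PID")); pid == os.Getpid() {
		if n, _ := strconv.Atoi(os.Getenv("LISTEN_FDS")); n >= 1 {
			f := os.NewFile(3, "systemd-socket")
			return net.FileListener(f)
		}
	}
	return net.Listen("tcp", addr)
}

func main() {
	ln, err := systemdListener("127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	fmt.Println("listening on", ln.Addr())
}
```

Because systemd holds the socket, connections arriving during a restart queue in the kernel backlog instead of being refused.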

[+] iamjackg|3 years ago|reply
If you really don't want to use different ports you can handle it with Docker. Since each container has its own IP, they can all expose the same port. Otherwise, for non-containerized deployments you'll have to resort to two different ports.

In either case, you will need a reverse proxy like Traefik/Nginx in front to smartly "balance" incoming requests to the two instances of the service.

[+] Edd314159|3 years ago|reply
I guess this is a problem inherent not just in a single-file go app, but in any deployment where the whole stack is contained within a single process.

The post says the process starts up quick enough that the process being temporarily unavailable isn't noticeable - but what if the process _doesn't come back_? It's also impossible to do blue/green deployments this way.

It's clearly not a solution suitable to large-scale deployments. The simplicity has its trade-offs.

[+] dividuum|3 years ago|reply
Can't help with how to implement this, but just to be sure: You should be able to use the same port in multiple instances if you bind those with SO_REUSEPORT. A quick search points to https://github.com/libp2p/go-reuseport for an implementation. Now you just need a mechanism to drain the old process.
[+] tinglymintyfrsh|3 years ago|reply
You don't. You can sort of emulate it with services that are stateless, load-balanced, and L7-proxied.

If you want stateful zero-downtime deployments, use Elixir or Erlang, which have the ability to live-migrate data from one version of the code to the next.

[+] marcosdumay|3 years ago|reply
You can always share ports.

But the one way to do zero-downtime deployments is to have more than one server.

[+] SkyPuncher|3 years ago|reply
I've always run services behind a proxy. Spin up a new server with the code (works for any type of deployment). Validate it's up. Switch the proxy from the old to new server.
[+] RcouF1uZ4gsC|3 years ago|reply
I feel docker in many cases is a hack for languages and runtimes that don’t support single file static linked binaries.

Often a single binary is a simpler and better option instead of a docker container.

[+] ye-olde-sysrq|3 years ago|reply
I more view it as us recognizing that there's more to "a system" than a binary. Kubernetes is this concept taken to its conclusion (since it defines everything in code, literally everything). But docker is often a super convenient middle ground where it's not nearly as stupidly verbose to just get a simple thing running, but still checks a lot of the boxes.

I used to feel similarly with Java. "Why," I asked, "would you need this docker thing? Just build the shaded JAR and off you go."

And to be sure, there are some systems - especially the kind people seem to build in go (network-only APIs that never touch the fs and use few libraries) - that do not need much more than their binary to work. But what of systems that call other CLI utilities? What of systems that create data locally that you'd like to scoot around or back up?

Eventually nearly every system grows at least a few weird little things you need to do to set it up and make it comfy. Docker accommodates that.

I do think there's a big kernel of truth to your sentiment though - I loved rails as a framework but hated, just hated deploying it, especially if you wanted 2 sites to share a linux box. Maybe I was just bad at it but it was really easy to break BOTH sites. Docker has totally solved this problem. Same for python stuff.

I do think docker is also useful as a way to make deploying ~anything all look exactly the same. "Pull image, run container with these args". I actually think this is what I like the most about it - I wrote my own thing with the python docker SDK, basically a shitty puppet/ansible, except it's shitty in the exact way I want it to be. And this has been the best side effect - I pay very little in resource overhead and suddenly now all my software uses the exact same deployment system.

[+] mariusmg|3 years ago|reply
Often?

It's 100% scenario-dependent. A complex app (like Gitea, for example) delivered as a single binary is basically the pinnacle of deployment.

[+] mastax|3 years ago|reply
> Systemd holding connections and restarting the new binary.

How does this work?

Or does it just mean it stops new connections while it's restarting?

[+] christophilus|3 years ago|reply
Systemd effectively acts as a proxy. I don't know that it's actually a proxy, but it keeps accepting connections from what I've seen. I use it for zero-downtime single-binary deploys, and it's great.
[+] ethicalsmacker|3 years ago|reply
It's not always a static binary: certain stdlib calls (os/user, or net's DNS lookups) can pull in libc. In that case you need to specify CGO_ENABLED=0 to force a static build.

I have been doing single-binary full-website deploys for ~16 months in production. That includes all HTML, CSS, and JS embedded. It has been wonderful.

[+] adql|3 years ago|reply
And with a little bit of code you can switch between "use embedded files" and "use local files" at app start, and get the convenience of not having to re-compile the app to change some static files.
[+] lenkite|3 years ago|reply
Is this article from 2016? You can do all of this with Java nowadays. I have observed a lot of folks on HN, whose last knowledge of Java is from a decade-plus ago, pontificating about Java deficiencies that no longer exist today.

Use the GraalVM native build tools https://graalvm.github.io/native-build-tools/latest/index.ht....

"Use Maven to Build a Native Executable from a Java Application"

https://www.graalvm.org/22.2/reference-manual/native-image/g...

[+] anderspitman|3 years ago|reply
If you're looking for a similar deployment experience but can't use Golang, we've been using Apptainer (previously Singularity) for a couple of years at work. It's really nice to get the benefits of containers while retaining the simplicity of copying and running a single file. The only dependency is installing Apptainer, which is easy as well.

[0]: https://apptainer.org/

[+] avgcorrection|3 years ago|reply
> Fast-forward and we were automatically deploying Scala applications from CI bundled in Docker in the startup of my wife.

> Last forward and I have deployed a Golang application to a cloud server.

Editor hello?

[+] marcrosoft|3 years ago|reply
How is this news? Welcome to 10 years ago.
[+] nathants|3 years ago|reply
(swirls fancy wine)

pairs well with single-file frontends.

[+] dicroce|3 years ago|reply
A single binary is nice...

Perhaps second place is using the $ORIGIN rpath linker option to create a relocatable application.

[+] mirekrusin|3 years ago|reply
We're bundling back-end services from a TypeScript monorepo in production as single bundle files; it works very well. The main reason was simply enforcing the lockfile from the monorepo.