
Using Makefile(s) for Go

137 points | prakashdanish | 6 years ago | danishpraka.sh

101 comments

[+] IHLayman|6 years ago|reply
I use Makefiles for Go projects all the time, but not in the way the article describes. First off, in a pre `go mod` world, if you had dependencies to check before running the build, then a Makefile was the easiest way to manage that. But even in a post `go mod` world, there are good reasons to use one that the article totally overlooks:

* Makefiles introduce a topological sort to build steps. This is the reason you use it instead of build shell scripts: it allows build steps to run in parallel, it guarantees order by dependency which is the best way to read build steps, and it makes file freshness an easy element to check for a build step, which is still needed for Go projects with multiple subpackages.

* Go projects usually require more than .go files to produce an executable. If you run a web server and you are bundling static pages into your executable, Makefiles are the best way to handle that.

* If you are building for multiple architectures, or want to encode the git tag/branch into the executable, it is better to have that Makefile bake in the necessary options on the build step and keep it uniform across the build.

* If you write a Go file and bake that into a Docker image, I find it best to drop the image and container hashes into files so that I can get to them easily for docker exec/attach/rm/rmi commands.

But there is one bigger reason Makefiles work for our entire team. We standardize on using Makefiles as the entrypoint for our builds. We have a polyglot environment at work so sometimes it gets confusing to figure out how to build a project. By standardizing on running 'make' we are all on the same page. Have a Javascript project to webpack? Run make and have make call yarn. Have a python wheel to construct? Run make and have make call python setup.py. You have a Java project that requires a sequence of maven commands to build? Run make and have the makefile call maven.

Is that inefficient? You bet it is. Does it make it easier to sort out what to do to build a project for the first time? Yes it does. Does it make it 100% easier for our CI/CD framework to work with multiple languages and scan for the necessary compilers and dependencies? Heck yeah.

[edited for lousy formatting]
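
To make the first bullet concrete, here is a toy sketch of the topological-sort and freshness behaviour. The file names are invented and `cp` stands in for real build steps; `.RECIPEPREFIX` (a GNU make >= 3.82 feature) replaces the traditional tab indentation so the snippet survives copy/paste:

```shell
set -eu
dir=$(mktemp -d); cd "$dir"
echo hello > src.txt
cat > Makefile <<'EOF'
.RECIPEPREFIX := >
app.txt: gen.txt        # make orders these steps topologically:
> cp gen.txt app.txt    # gen.txt is built before app.txt, even though
gen.txt: src.txt        # the rules are written in the opposite order
> cp src.txt gen.txt
EOF
make app.txt   # runs both steps, in dependency order
make app.txt   # second run: "up to date", nothing is rebuilt
```

The second `make` does nothing because `app.txt` is newer than everything it depends on; that freshness check is exactly what a plain shell script won't give you for free.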

[+] panpanna|6 years ago|reply
That was an excellent comment!

I use make for almost all my projects (regardless of language) and I have a system where "make init" sets up the environment (install packages, set up containers, and so on), "make run" runs it, and "make test" tests it.

Now I can come back to projects from 5-10 years ago and get them running with minimal effort, since all the magic is in the makefile and not in my forgetful brain.
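
A minimal sketch of that convention (target names as above, recipes reduced to echos; `.RECIPEPREFIX` stands in for the usual tabs):

```shell
set -eu
dir=$(mktemp -d); cd "$dir"
cat > Makefile <<'EOF'
.RECIPEPREFIX := >
.PHONY: init run test
init:                   # e.g. install packages, set up containers
> echo "setting up environment"
run: init               # run depends on the environment being ready
> echo "running the app"
test: init
> echo "running tests"
EOF
make run   # runs init first, then run
```

`make run` executes `init` before `run` because `run` declares it as a dependency, so a fresh checkout and a five-year-old project get the same entrypoints.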

[+] papito|6 years ago|reply
Also - Make will exist pretty much in any Unix-based environment. Any alternatives will require prerequisite installation of things.

That said, I work in a team where a lot of devs are on Windows, and they complain about it.

[+] lloeki|6 years ago|reply
> This is the reason you use it instead of build shell scripts: it allows build steps to run in parallel, it guarantees order by dependency which is the best way to read build steps, and it makes file freshness an easy element to check for a build step

This is indeed true well beyond Go. Alternatives/replacements to make (rake, scons, bespoke shell scripts, whatever) make this anywhere from painfully non-obvious to downright impossible.

For all their limitations and reputation for complexity, Makefiles can achieve a form of simplicity that makes all of this trivial, outrageously self-documenting, and language-independent.

[+] echlebek|6 years ago|reply
I've seen a lot of developers, especially developers with C backgrounds, reach for Makefiles when approaching Go development, and I think it's a harmful practice. So, I'd like to offer a respectful but firm rebuttal to this article. :)

I dislike using make(1) with Go for two reasons.

The first is that make was developed for building C projects, and therefore is oriented around that task. Building C projects is a lot different than building Go projects, and it involves stitching together a lot of pieces, with plenty of intermediate results.

make(1) has first class support for intermediate results, which are expressed as targets.

If you look at the article, the author has to use a workaround just to avoid this core feature of make(1).

The second reason I dislike using make(1) for Go projects is that it harms portability.

A Go project should only require the Go compiler to build successfully. Go projects that need make(1) to build will not work out of the box for Windows users, even though Go is fully supported on Windows. For me, this puts Makefiles into the "nonstarter" category, even though I do all of my own development work on Linux. There is just no reason to complicate things for people who don't have make(1) installed.

For code generation and other ancillary tasks, Go includes the 'go generate' facility. This feature was created specifically to free developers from depending on external build tools. (https://blog.golang.org/generate)

For producing several binaries for one project, use several different main packages in directories that are named what you want your binary to be.

Edit: corrected some terminology.

[+] akerl_|6 years ago|reply
I think there’s a distinction to be drawn here between a couple use cases for Makefiles (specifically for building software):

* Makefiles can act as shortcuts for common existing functionality of the build toolchain

* Makefiles can add new functionality that is not part of the build toolchain

* Makefiles can add new functionality that replicates existing functionality in the build toolchain

An example of the first case is one of the first examples in the article: using `make build` to run `go build`. The second includes things like the later example for `make docker-push`. The third includes things like makefiles that generate intermediate files or other things that `go generate` could do.

Only the third can really meaningfully harm portability, but in my experience it's the least common usage of `make`. A Makefile that wraps `go generate && go build` into `make build` seems fully outside the scope of the portability concern, since a user without Make could just run the same commands themselves. Likewise, a Makefile that adds `make release` which uploads the build artifact to GitHub Releases or similar isn't replacing something the go toolchain could do, so it's also not affecting portability. The user without Make couldn't have used `make docker-push` anyway, since the go compiler doesn't support pushing release assets.
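
For the first two cases, the Makefile really is just a transcript of what you'd type by hand, something like this sketch (the `release` recipe is a placeholder, and `make -n` prints the commands without running them, so the go toolchain isn't needed for the demo):

```shell
set -eu
dir=$(mktemp -d); cd "$dir"
cat > Makefile <<'EOF'
.RECIPEPREFIX := >
.PHONY: build release
build:                  # shortcut for existing toolchain functionality
> go generate ./...
> go build ./...
release: build          # new functionality the toolchain doesn't have
> echo "upload build artifact to GitHub Releases here"
EOF
make -n build   # dry run: prints the two go commands it would execute
```

A user without Make loses nothing for `build` (they type the same two commands), and for `release` there was never a go-only equivalent to lose.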

[+] majewsky|6 years ago|reply
You make it sound like `go install` and `go test` are the only things you're ever going to run in a Go repository. This is blatantly untrue. For example, these are the invocations for the test suite for one of my Go programs:

https://github.com/sapcc/limes/blob/364317fa9a25065bcf9384c8...

Why should I have to enter all of this manually every single time?

(And before you argue that gofmt, golint and go vet run in the editor if you've set it up properly: That's true, and that's how I have my editor set up. That part of the test is to catch the external contributors that don't.)

> The second reason I dislike using make(1) for Go projects is that it harms portability. A Go project should only require the Go compiler to build successfully.

For many of my projects, a Makefile is the main reason why repos work with `go get` at all. I use `make` which prepares all the generated files and non-Go artifacts (typically bundled into `bindata.go`), so that I can commit these in the repo. Then when a user comes along, they can `go get` the application because all the bespoke compilation steps have already been done by me via my Makefile. An example of this: https://github.com/majewsky/alltag/blob/df161b55fa4c7eba0abe...

[+] Foxboron|6 years ago|reply
>A Go project should only require the Go compiler to build successfully.

But they don't. The go compiler doesn't support yarn, npm, protobuf, open-api generators, doc generators like md2man, go-bindata-assetfs, gox and everything you need to complete the code generation done in modern Go applications.

So how do you orchestrate this? People use Makefiles, bash scripts, go scripts, and everything in between, often combined. It gives you a plethora of bewildering and confusing build options which can't be solved with `go build`, nor with `make` in a straightforward fashion. Add `go get -u ... && go mod vendor` with some `npm install` in the Makefile, along with some overriding and/or ignoring of `$GOFLAGS`, `$LDFLAGS` and `$CGO_LDFLAGS`, and you've got yourself an ecosystem hostile to packaging and compilation.

A go project can't use `go build` by itself - but it can't really use the Makefile either as people overengineer the process.

But let me stress this. Always use plain Makefiles over any other methods. It's there and has been used for decades for a reason.

[+] IHLayman|6 years ago|reply
"For code generation and other ancillary tasks, Go includes the 'go generate' facility. This feature was created specifically to free developers from depending on external build tools. (https://blog.golang.org/generate)"

Please please please don't use go generate! While I respect your position, go generate is the worst and I hope they eventually deprecate it in future go versions. We tried go generate in some of our code and it went very badly:

* go generate is placed in a comment line. Comments should never be executable, they should be used for explanation. If I am trying to trace execution, I shouldn't be forced to scan through comments looking for side effects.

* from the go generate man page: "Within a package, generate processes the source files in a package in file name order, one at a time." You can't order your generate commands in a way you want, you have to order them using file order, or keep all of your go generate lines in the same file, which defeats the purpose of go generate.

* go generate will run every time, regardless of freshness of file. So if you need to run protoc or some other protocol buffer compiler, you have to regenerate it every single time regardless of whether or not it is needed which makes the build run way slower.

* What are the dependencies of this project? If I use go generate, I have to run some clumsy grep command to (hopefully) find all of the go generate comments in the package.

Sorry, go generate is to golang what COMEFROM is to INTERCAL. Please avoid if you can help it. If that means a shell script or heaven forbid a makefile, so be it.
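
For contrast, the freshness check that go generate lacks falls out of an ordinary make rule. In this sketch `cp` stands in for protoc so the example runs anywhere; the real recipe would be something like `protoc --go_out=. api.proto`:

```shell
set -eu
dir=$(mktemp -d); cd "$dir"
echo 'message Foo {}' > api.proto
cat > Makefile <<'EOF'
.RECIPEPREFIX := >
api.pb.go: api.proto        # regenerate only when the .proto changed
> cp api.proto api.pb.go    # stand-in for: protoc --go_out=. api.proto
EOF
make api.pb.go   # generates once
make api.pb.go   # second run: "up to date", generator not re-run
```

The second `make` is a no-op until `api.proto` changes; go generate would have re-run the generator both times, and you can read the dependency straight off the rule instead of grepping comments.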

[+] aequitas|6 years ago|reply
I agree if you end up using Make like its used in C projects: compiling intermediate objects, linking them, etc. In that case the Go compiler should suffice.

I almost exclusively use Make for projects in any language nowadays as workflow automation, this includes Go, Python, Terraform, Docker builds.

In this way Make is an indispensable tool for me: it's portable where it matters (macOS, Linux, WSL), it's ubiquitous, it has a stable API, and its behaviour is well known to me. Sometimes I have to work around some shortcomings of Make, e.g. for things that don't produce a file as a result, where you `touch` a fake artifact file. But this is a minor annoyance compared to what Make brings in terms of how simply and declaratively I can automate my workflows.
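
The `touch` workaround mentioned above looks like this in practice (a sketch; the "deploy" is just an echo, and the file names are invented):

```shell
set -eu
dir=$(mktemp -d); cd "$dir"
echo v1 > config.txt
cat > Makefile <<'EOF'
.RECIPEPREFIX := >
.deploy.stamp: config.txt      # a task with no natural output file
> echo "deploying config"      # the real side effect would go here
> touch .deploy.stamp          # record that it ran
EOF
make .deploy.stamp   # "deploys"
make .deploy.stamp   # skipped: config.txt hasn't changed
```

The stamp file gives make something to compare mtimes against, so the task is skipped until `config.txt` actually changes.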

[+] downerending|6 years ago|reply
IMHE, a language that fights against 'make' is generally poorly designed in that regard. Typically its functionality gets replaced/reinvented by a bespoke and buggy behemoth, which becomes yet one more thing to learn.
[+] robbyt|6 years ago|reply
Make is available everywhere that matters, and is a simple declarative way to encompass build actions. What are the alternatives?

Bash? Not declarative, and requires lots more code.

Some go rewrite of Make? Not universal, possibly not maintained in the future.

Rake? Ugh, Ruby.

I strongly believe that make is the least worst way to build go projects, but please change my mind by suggesting some alternatives, not by complaining about the shortcomings of make.

[+] ajross|6 years ago|reply
> The first is that make was developed for building C projects

This is sort of a misconception. The C compiler was developed for building C projects. Make exists because those projects had to build other stuff and needed a way to stitch the files together. Make's only built-in support for "C" amounts to some default rules for building .o files out of .c files.

If all you want from your build system is to compile a big unified blob of source in a single language into some kind of output file (like the examples you cite) you don't need make, just use whatever it is that your local language provides.

When you have requirements that go beyond that, where you have programs (often themselves built locally, and often in variant languages or runtimes) generating custom intermediates and need to track that madness, that's when you need a more complicated build manager than your compiler provides.

And that's when you start to understand why, despite four decades now of attempts to replace it, some of us still reach for make.

[+] rad_gruchalski|6 years ago|reply
And when you distribute the project, how do you "document" all the possible build / packaging / release / test options? A shell script? A readme?

I look at things like Jaeger and all I see is a Makefile with all possible operations for that project neatly placed in a single portable, actionable format. If I have no make, sure, I’ll copy paste the command and run manually. But why would I?

edit: spelling

[+] mikojan|6 years ago|reply
Why do people on this website use "make(1)" instead of "make" in writing? And I know it's in the man pages but what is this number even for?
[+] kerng|6 years ago|reply
As soon as you have a semi-complex project, makefiles or some other custom scripts are required. Go tools alone won't do. There is this attitude that sees Go as the center of the universe; it is not. When people say idiomatic, it makes me laugh.
[+] GhettoMaestro|6 years ago|reply
Who cares, really? I use Makefiles for everything from eliminating 8284738 random bash scripts to orchestrating global infrastructure deployments with Terraform in a docker container.

None of my above fits your “correct” view of make. But it works fine and has for years.

[+] _ph_|6 years ago|reply
I avoid using makefiles, unless I need them :). Yes, plain Go projects which are supposed to produce an executable probably won't need a makefile and I haven't used any for those. But if your project should produce a shared library, it is nice to wrap the build command in a makefile, as it is easier to type "make".

Also, when integrating into larger projects or when other tasks in addition to building the Go project are required, unifying them with makefiles can be helpful.

[+] jrockway|6 years ago|reply
If you download some random tar file with go code in it, I agree that you should expect to be able to build it with only go installed. "go get" depends on this, and largely works well!

But the day to day act of developing a system that uses go involves more than just building a go binary. Your go program might depend on things like generated protocol buffers. You need some way to regenerate those when you edit the definitions. That then involves having the right version of protoc installed and also having the right version of protoc-gen-go. The go compiler can't help you there. go generate suffers from the same problem; it's not automatic, so you can pass it the wrong dependencies (generation tool flags, version of the generator, etc.).

People are using makefiles as a convenient place to write all these extra instructions. "What flags do I pass to protoc?" "What flags do I pass to docker build?" Why document it when you can "make protos" or "make container"?

Unfortunately, make isn't actually good at this. It doesn't version the generation tools (or itself), so you will end up with vastly different results on different machines. The result is things like a 300 line diff to a generated protobuffer because the second engineer to work on that file happened to have protoc 0.7.7 instead of protoc 0.6.42, or they installed protoc-gen-go@master instead of [email protected]. Make doesn't care. It exited with exit status 0, so it must have worked.

What started as a nice way to write down some instructions for hacking the code has now become a giant mess. Reasonable makefiles can only ever work for one person on one computer at one point in time. At that point, they might as well be a README.md. At least the README can mention the version numbers of the dependencies, and, most importantly, can wish the reader luck.

There are two long-term solutions. One is to only use go. Write a program that reads the protos at runtime. Write a program that runs your Typescript through a hand-written compiler whose source code lives in your project at runtime. Now you only need "go build". This is... impractical, though. It's a nice ideal, but you'll never get anything done in the real world.

So what you really need is a real build system that captures every dependency, knows about high level tasks ("make foo.proto available to a go program"), and knows every dependency between files like the go compiler does. With such a tool, you can get a working build on every computer with no instructions or manual setup. And since it is carefully written to understand what it's doing, you can get reliable incremental builds. (The full build and your incremental build should have the exact same md5sum of the resulting binary.)

Such a tool does not exist. bazel is close. If you have to build more than just go files, you probably want to look into it. It's crazy. It's a lot of work. Don't do it if you're the only developer on the project. But if you want 10 random people to be able to build a project that's written in more than one language, you have to invest in some sort of tooling. Make is good for a one person team. Make can be scaled to do crazy things poorly (hi, Buildroot!). But it's probably not what you want to be using. If a README isn't good enough, you need a real build system.

[+] Sean1708|6 years ago|reply
It's not just Go, loads of projects use Makefiles as a collection of Bash scripts. I've never really been certain what it actually buys you...
[+] ascotan|6 years ago|reply
If you're using windows for development you're doing it wrong.
[+] IshKebab|6 years ago|reply
I agree. They claim `make` is simple, but it really isn't. PHONY targets are one example.

Unfortunately I've looked for an alternative and didn't really find anything very good. I eventually settled on a Python 3 script. Python 3 is reasonably nice to use with type annotations. It doesn't require compilation and its speed is fine if you're using it to drive other build systems, rather than as a build system itself. Way more people understand Python than Make, and it is a full programming language so you don't get stuck when you want to do something complicated.

It doesn't have a built-in DAG task system, but I'm sure there are a million libraries for that. I haven't had need of one yet, but a quick search turned up https://pydoit.org/ which looks ok.

[+] rraval|6 years ago|reply
This... isn't even using the `make` part of Makefiles at all.

If you look at the final example, every [1] rule is marked as `.PHONY`. `make` bundles 2 capabilities: a dependency graph and an out-of-date check to rebuild files. This demonstration uses neither.

The author would be better served with a shell script and a `case` block. The advantages:

- Functions! The `check-environment` rule is really a function call in disguise.

- No 2 phase execution. The author talks about using variables like `APP`, but those are make variables with very different semantics than shell variables (which are also available inside the recipes).

[1] Yes, there's a `check-environment` "rule" that isn't marked, but it likely should be since it isn't building a file target named `check-environment`.
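
A sketch of the shell-script-and-`case` version being proposed, with `check-environment` turned into a plain function (names mirror the article's; the `build` body is a placeholder):

```shell
set -eu
# functions replace the pseudo-rules; no 2-phase variable expansion
check_environment() {
  : "${APP:?APP must be set}"      # fail fast if APP is unset
}
build() { check_environment; echo "building $APP"; }
clean() { check_environment; echo "removing $APP"; }

APP=myapp
case "${1:-build}" in              # default task: build
  build) build ;;
  clean) clean ;;
  *) echo "usage: $0 {build|clean}" >&2; exit 1 ;;
esac
```

Everything here is one language with one set of semantics: `$APP` is an ordinary shell variable both in the "rules" and in the "recipes".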

[+] panpanna|6 years ago|reply
I disagree. Make is more than a build system, it's also an automation tool. It gives you a fairly flexible format for managing different tasks with shared variables and autocompletion and more.

You can do it with a bunch of shell scripts too but I prefer having everything in a single file.

[+] fragmede|6 years ago|reply
I'm more confused as to why .PHONY is used in so many places. Go builds from .go files whose modification times change when they're written, same as .c and .cpp files, so make is able to know whether or not the go compiler needs to be called.
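
For example, a non-.PHONY rule keyed on the sources (a sketch: `touch` stands in for `go build` so it runs without a Go toolchain, and the `sleep` is only there to defeat mtime granularity in the demo):

```shell
set -eu
dir=$(mktemp -d); cd "$dir"
echo 'package main' > main.go
cat > Makefile <<'EOF'
.RECIPEPREFIX := >
GOFILES := $(wildcard *.go)
app: $(GOFILES)        # the binary is a real file target
> touch app            # stand-in for: go build -o app .
EOF
make app      # "compiles"
make app      # up to date: the compiler isn't invoked at all
sleep 1
touch main.go # a source changed...
make app      # ...so make rebuilds
```

No .PHONY needed: make compares the binary's mtime against every .go file and only reruns the recipe when something is actually newer.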
[+] mikegirouard|6 years ago|reply
It's really frustrating seeing so many Makefiles that don't _make_ anything.

Make syntax is really odd. I see so many folks go out of their way to deal w/quirks of make when they really just need a shell script. You can see this anti-pattern very quickly when you see `.PHONY` targets for everything.

I think make is useful for some aspects of go. GOPATH is becoming less relevant now, but still helpful when you want to have build-time dependencies in $PATH

    $(GOPATH)/bin/some-dependency:
        go get -u ...
I still use make when building artifacts, especially in CI. But as a default, I almost always try to talk folks out of using make for this sort of stuff.
[+] GordonS|6 years ago|reply
Is there a way to easily have something like make targets in a shell script, without a ton of boilerplate?

> Make syntax is really odd

I don't find it particularly strange, except my biggest peeve - the insistence on tabs!
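
One low-boilerplate answer is to make every "target" a shell function and dispatch on the first argument (a sketch; the task names are invented, and an unknown name simply fails with command-not-found):

```shell
set -eu
build() { echo "building"; }
clean() { echo "cleaning"; }
tasks() { echo "tasks: build clean"; }   # default: list the tasks

"${1:-tasks}"   # run the function named by the first argument
```

`./run.sh build` calls `build`, `./run.sh` lists the tasks, and adding a target is just adding a function; you don't get dependency tracking, but you also don't get tabs.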

[+] boomlinde|6 years ago|reply
This Makefile could as well have been a shell script. It doesn't track changes to dependencies even when it's obvious how to do so. For example, the build rule has an obvious dependency (main.go) and an obvious target ($(APP)). Instead of tracking these, which IMO is the primary advantage of using Make, it deliberately destroys the existing build. docker-build always necessarily rebuilds the binary as well.

Presumably, Go has some kind of build cache making such dependency tracking relatively useless anyway, maybe Docker has too, but if you aren't tracking dependencies and rebuilding only when necessary why use Make instead of a big switch in a shell script?

Personally I'd only use Make for Go if I introduce some task that takes significant time and isn't already handled by the go toolchain.

Another couple of notes: there are two docker-push rules. The first seems like it was meant to be docker-build. The other is that the docker build rule will tag the build with the HEAD hash, regardless of whether it's building from a clean checkout or a dirty repo.
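
The dependency-tracked version I have in mind might look like this sketch (`touch` and `echo` stand in for `go build` and `docker build`, and `APP` is illustrative):

```shell
set -eu
dir=$(mktemp -d); cd "$dir"
echo 'package main' > main.go
cat > Makefile <<'EOF'
.RECIPEPREFIX := >
APP := myapp
$(APP): main.go            # explicit target and dependency, not .PHONY
> touch $(APP)             # stand-in for: go build -o $(APP) .
docker-build: $(APP)       # the image step reuses the fresh binary
> echo "docker build -t $(APP) ."
EOF
make docker-build   # builds myapp, then the image
make docker-build   # myapp untouched; only the image step reruns
```

On the second run the binary is left alone because `main.go` hasn't changed; only the (fileless) `docker-build` step reruns, instead of everything being destroyed and rebuilt.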

[+] peterwwillis|6 years ago|reply
rm -rf ${APP} is a code smell. If ${APP} is not a directory, -r should not be in this command. At best it is confusing; at worst, if ${APP} somehow becomes a directory, rm -rf will silently remove it and you will have no idea that it was a directory, whereas plain rm -f ${APP} will fail because it can't unlink a directory. Build success is an important signal in a CI/CD pipeline, so builds should fail immediately on unexpected behavior.

Also, on .PHONY on a single line:

  But for Makefiles which grow really big, this is not suggested as it
  could lead to ambiguity and unreadability, hence the preferred way is
  to explicitly set phony target right before the rule definition.
If your Makefile grows really big, it's going to become a nightmare to maintain. Either split up your codebase + builds into sub-directories, or figure out some other way to structure your builds so that it's not super complicated to reason about or maintain them.
[+] boomlinde|6 years ago|reply
I find that complexity of a Makefile isn't necessarily a function of its size. Ideally, one should be able to reason about each target individually, specifying its dependencies without consideration for how they are generated, whether they already exist etc. In such an ideal situation, it doesn't matter how large the Makefile is. Maintenance problems IMO happen when you can't trust tasks to fully specify their dependencies or that task commands only generate the target output.

Over all I agree with your argument, though.

[+] finaliteration|6 years ago|reply
I’ve been using Makefiles for Go development basically since I started with the language. It’s really effective for me and makes compilation and, in my case, deployment to AWS Lambdas via CloudFormation commands (also invoked by Make) really simple. It’s also easy to bring someone up to speed with how building and deploying works.
[+] rob74|6 years ago|reply
It might make it easier to bring someone up to speed for your project, but they won't learn a damn thing about building other Go projects - I guess that's one of the arguments that the opponents of make, er, make...
[+] apeace|6 years ago|reply
Great article, but I'm not sure it's a good idea to segment your Docker images by environment. Part of Docker's appeal is that you can be sure your staging & production containers are bit-for-bit the same. I use a workflow like this:

* For all commits on all branches, run tests. If tests don't pass, don't push containers to registry.

* For all commits on all branches, build and push a container `{branchname}-{commitsha}` (assuming tests pass).

* Code review, etc.

* Merge pull request to `master` branch (tests will run, and only push a container if they pass).

* Deploy `master-{commitsha}` to staging.

* Do your final testing on staging.

* Deploy the same `master-{commitsha}` to production.

Now you're deploying to production from the master branch, which passed tests, and the container is the same one as you tested on staging.

Plus, you can always deploy your non-master `{branchname}-{commitsha}` images to a separate environment, or to staging, if you need to do a bit of experimenting.

[+] cesarb|6 years ago|reply
I noticed you didn't mention ".DELETE_ON_ERROR". AFAIK, it's recommended to always use it (according to the GNU make manual: "[...] 'make' will do this if '.DELETE_ON_ERROR' appears as a target. This is almost always what you want 'make' to do, but it is not historical practice; so for compatibility, you must explicitly request it.")
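
A sketch of what `.DELETE_ON_ERROR` protects against, with `false` simulating a generator that dies after partially writing its output:

```shell
set -eu
dir=$(mktemp -d); cd "$dir"
echo data > in.txt
cat > Makefile <<'EOF'
.RECIPEPREFIX := >
.DELETE_ON_ERROR:
out.txt: in.txt
> cp in.txt out.txt    # the target is written...
> false                # ...then the recipe fails partway through
EOF
make out.txt || true   # the recipe fails...
test ! -e out.txt      # ...and make deleted the partial out.txt
```

Without the `.DELETE_ON_ERROR` line, the half-written `out.txt` would survive and the next `make` would wrongly consider it up to date.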
[+] hedora|6 years ago|reply
I suggest reading “recursive make considered harmful”. It is a wonderful introduction to make, and explains how to avoid a few pitfalls most make users (including this article) run into.

In particular, the targets in the subdirectory makefiles can and should be auto-generated using make itself. There’s no need for the makefiles in the subdirectories (there is also no need to use an external tool to generate them, which is the other mistake people often make).

[+] narven|6 years ago|reply
Nice article. I use makefiles a lot, for nearly all my projects, both frontend and backend, mainly to have the same commands independently of the framework/platform I'm using. For me it's helpful to just run `make` both to build and run a Go project and a React project.

Another thing you can add is:

    .DEFAULT_GOAL := start

    start: fmt swag vet build run

This defines your default goal, so you just need to run `make` and it will run everything inside of `start`.

Since most of us use `.env` files for environment variables, you can use something like:

    # this imports all of the .env file's variables into the Makefile
    include .env
    export

And it will inject all of the .env file into the environment of the processes make runs.

Also I have some other shortcuts (variables):

    GOCMD=go
    GOBUILD=$(GOCMD) build
    GOCLEAN=$(GOCMD) clean
    GOTEST=$(GOCMD) test
    GOFMT=gofmt -w
    GOGET=$(GOCMD) mod download
    GOVET=$(GOCMD) vet
    GOFILES=$(shell find . -name "*.go" -type f)
    BINARY_NAME=my-cool-project
    BINARY_UNIX=$(BINARY_NAME)_prod

[+] ascotan|6 years ago|reply
Going to throw out my 2 cents here:

1. I don't like multiple makefiles. Icky with lots of duplication and high maintenance. Bad article.

2. When possible I use target expansion to generate targets in the main makefile:

    APPS := app1 app2 app3

    $(APPS:%=build.%)

3. I prefer to use makefile functions rather than reaching for "bash" where possible: https://www.gnu.org/software/make/manual/html_node/Functions...

4. If something is really complicated - extract to bash

Make has been rewritten in 10K languages and the original is still the best
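
Filled in, the target-expansion idea in point 2 might look like this sketch (app names invented, recipe reduced to an echo; a real rule would build each app):

```shell
set -eu
dir=$(mktemp -d); cd "$dir"
cat > Makefile <<'EOF'
.RECIPEPREFIX := >
APPS := app1 app2 app3
.PHONY: build
build: $(APPS:%=build.%)   # expands to build.app1 build.app2 build.app3
build.%:                   # one pattern rule serves every app
> echo "building $*"       # $* is the part matched by %
EOF
make build
```

Adding an app is one edit to `APPS`; make expands the per-app targets for you, and `make build.app2` still works for building a single app.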

[+] 3fe9a03ccd14ca5|6 years ago|reply
I don’t mind Makefiles, and usually use them for basic command configuration.

However, when the logic gets even a little complicated, I almost always reach for bash. Everybody knows a little bash, and it's available on almost all systems.

[+] jjuel|6 years ago|reply
Not related to the contents of the article, but I love the font on that site. I also love the simplicity of the site as well. Nothing takes away from the ability to read. It is so clear and concise.
[+] knowsuchagency|6 years ago|reply
There really is no perfect build tool, but in my experience, nothing touches invoke http://www.pyinvoke.org for building and automation.

Any project will eventually have build and deployment scripts with non-trivial amounts of logic in them.

The question then becomes whether you want all that complex logic in shell scripts, makefiles, or Python.

For me, it's a no-brainer. I'll take the latter every time.

[+] cwojno|6 years ago|reply
You don't use makefiles in go! You just take your code, copy go.mod and go.sum into a Docker image, then RUN mod download, then re-copy the rest of the code and run bui...

Shit... Docker is a makefile...

[+] e2le|6 years ago|reply
For projects that use tags to turn on/off compilation options, Makefiles and shell scripts make sense.