Make is great, and I wish more people would use it in place of whatever monstrosity is en vogue this week.
However, there is one thing which Make absolutely cannot handle, and that is file names with spaces. If you have any risk of encountering these without any possibility of renaming them, you’ll sadly have to give up on using Make; it just won’t work.
Spaces in filenames break most of Make's built-in functions such as $(sort), and break the $?, $^ and $+ automatic variables. But they're OK in target names as long as you escape the spaces with backslashes. In some cases you can also use them in source file names -- you have to hard-code the names in the build rules, since $^ won't work (but for targets built from a single source file, $< still works).
This applies to GNU make; not sure about AT&T make.
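The escaping behavior described above can be sketched in a few lines of shell. This is a throwaway demo, assuming GNU make 3.82+ (it uses .RECIPEPREFIX so the recipe survives copy-paste without a literal tab); the file names are invented:

```shell
# Throwaway demo: backslash-escaped spaces in GNU make target and
# prerequisite names. Assumes GNU make >= 3.82 (.RECIPEPREFIX).
demo=$(mktemp -d)
cd "$demo"
printf 'hello\n' > 'source file.txt'
cat > Makefile <<'EOF'
.RECIPEPREFIX = >
# $< expands to the single prerequisite, space included, so quote it.
out\ file.txt: source\ file.txt
> cp "$<" "$@"
EOF
make
cat 'out file.txt'
```

Automatic variables like $^ would split on that space, which is why the hard-coding caveat above kicks in as soon as there is more than one source file.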
Make is my default. However, as the project grows, I find myself wanting to organize modules by directory and I find it cleaner to switch to CMake which then generates the Makefiles for me.
However, the implementation is dated. Just off the top of my head: it could be object-oriented (rules could be subclassed), the language could have more sophisticated statements, it could have debugging support, etc.
I agree, and the solution to this problem is to forbid filenames with spaces. The convenience of make and similar tools is much more important than spaces in filenames. Filenames with spaces should not be allowed in modern filesystems. When the user types a filename with spaces, the GUI should encode the space as a non-breaking space character, which does not cause havoc in scripts.
A way around that is to convert any spaces in a filename to non-breaking spaces (if you can). That will not only fix problems in Make, but also ease use on the command line.
Detox (http://detox.sourceforge.net/) has solved endless amounts of hassle...
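A hedged sketch of that conversion, assuming GNU sed (for its \xHH escapes); the file names here are invented:

```shell
# Rename files in the current directory, swapping ASCII spaces for
# U+00A0 non-breaking spaces. GNU sed assumed for \xc2\xa0.
demo=$(mktemp -d)
cd "$demo"
touch 'my file.txt' 'another doc.pdf'
for f in *' '*; do
  mv "$f" "$(printf '%s' "$f" | sed 's/ /\xc2\xa0/g')"
done
ls   # no ASCII spaces remain in the listing
```

Whether this is a fix or a new problem is debatable, since the renamed files look identical to spaced ones in most UIs.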
Recently I joined an environment that uses Makefiles as the facade in front of pretty much everything, from git submodule update shortcuts to building code and running local development servers.
Surprising myself, I’ve quickly grown to appreciate working Makefiles. That said, since the syntax somewhat encourages terseness, when I need to fix a non-trivial target it tends to look like black magic—nothing reading a few man pages can’t fix, but it takes extra time.
It’s not my first choice overall; I prefer to leave out the extra layer and document direct command-line calls in a README. If a commonly used tool changes its invocation in a new version, with a README it’s a documentation issue, but with a Makefile it’s broken software.
What I love about Makefiles is that they just use the CLI tools. Full build tools like Gradle or Bazel require installing specific plugins and learning a new, inferior syntax, making them a nightmare to use when you need a feature of the underlying tool that isn't implemented. The biggest pain point is that they don't even bother to print the actual command being executed!
I recently used make in a side project[1] to implement a "full" continuous delivery pipeline and it really was refreshing, despite the syntactic quirks.
[1]: https://github.com/Thiht/smocker/blob/master/Makefile
make works well when you are targeting a single platform with a decent shell and the project is not too complex (e.g., no auto-generated source code that requires its own automatic dependency tracking). Once that no longer holds, make becomes a real liability.
Note that the author of that paper (http://www.conifersystems.com/whitepapers/gnu-make/), a friend of mine, wrote another build system, the dead-simple-but-awesome make.py. I have a mirror/fork of it[0], since it's been unmaintained for a while (but it mostly doesn't need any maintenance).
The entire build system is a single Python script that's less than 500 lines of code. Rather than trying to fit complicated rules into Make's arcane syntax, rules are specified with a Python script, a rules.py file (see [1]). But the script should be thought of more as a declarative specification: the rules.py file is executed once at startup to create the dependency graph of build outputs, and the commands to build them.
Yet, despite the small size, it's generally easier to specify the right dependencies, do code generation steps, and get full CPU utilization across many cores.
At some point I'd like to write more about make.py and try to get it used a bit more by the public...
[0] https://github.com/zwegner/make.py [1] https://github.com/zwegner/make.py/blob/master/example/rules...
What I typically do is use make only for what it is good at: as a dependency resolution back-end. For everything else, it is absolutely horrible.
All the build logic for my projects is written in Python, in an executable file stored in the project root directory and called "make" (I have "." in my PATH).
The Python script, when it runs, generates on the fly a clean, lean, readable, unrolled Makefile and feeds it directly to /usr/bin/make via a pipe.
Works like a charm:
Python (a sane and expressive programming language) to express the high-level logic needed to build the project.
Make as a solid back-end to solve the "what needs to be rebuilt" problem (especially the parallel version with -jXX).
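A minimal sketch of that flow, with plain shell standing in for the poster's Python generator (GNU make assumed, since it can read a makefile from stdin via -f -):

```shell
# Emit an unrolled Makefile on stdout and feed it straight to make.
demo=$(mktemp -d)
cd "$demo"
printf 'data\n' > a.src
{
  printf 'a.out: a.src\n'
  printf '\tcp a.src a.out\n'   # printf emits the required recipe tab
} | make -f - -j2
cat a.out
```

The generator can be arbitrarily smart; make only ever sees the flat, unrolled rules.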
There's at least one more use case IMO: defining common development lifecycle steps in a shared Makefile across services. At my current workplace, instead of having a bunch of bash scripts in every service, I just give every service repo a Makefile that is usually a one-liner including common.mk. This just wraps docker-compose and gives us commands like make, make run, make stop, make lint, make test, make help, etc.
This way we can e.g. have repos using completely different technology stacks but the interface to them is the same - whether it's our database, a node.js webservice, a python data analytics tool etc. And the definitions of these lifecycle commands in common.mk are totally trivial; they're just .PHONY one-liner rules.
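A minimal sketch of that pattern, with invented service and target names (the docker-compose recipes are shown but only the echo-based target is exercised; .RECIPEPREFIX, GNU make 3.82+, just keeps the example copy-paste safe without literal tabs):

```shell
# Shared common.mk included by each service's one-line Makefile.
demo=$(mktemp -d)
cat > "$demo/common.mk" <<'EOF'
.RECIPEPREFIX = >
.PHONY: run stop lint help
run:
> docker-compose up -d
stop:
> docker-compose down
lint:
> @echo "linting $(SERVICE)"
help:
> @echo "targets: run stop lint help"
EOF
cat > "$demo/Makefile" <<'EOF'
SERVICE := billing-api
include common.mk
EOF
make -C "$demo" --no-print-directory lint
```

Each service's Makefile only sets its own variables; the shared lifecycle lives in one place.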
I like make primarily as an 'entry point'. There are better tools for dependency management and building, usually language-specific, but as the OP notes, that also requires remembering each tool's invocation details, and documenting them in the README for anyone else wanting to build your project. It's easier to capture the tool invocation in a Makefile and let that serve as your primary interface.
I consider it a poor back-end for parallel execution, as it doesn't serialize the outputs. I personally like the ninja build tool as a very "low level" parallel "making" engine: https://ninja-build.org/
"Command output is always buffered. This means commands running in parallel don’t interleave their output, and when a command fails we can print its failure output next to the full command line that produced the failure."
This kind of project always makes me sad: you've got to read and debug through an impenetrable wall of custom Python code to understand why the build fails.
Makefiles are great entry points for CI/CD pipelines. It's easy to pass arbitrary environment variables at runtime, pick targets to build, define basic dependencies, and have clear steps to execute that can include some minimal inline shell. And since it's pretty dependency-less, I can run the same make commands locally to test the pipeline as I'd use in a remote CI system.
I often use them as a wrapper for Terraform weirdness, where you may want to call an ADFS-enabled AWS login tool or not, depending on if `aws sts get-caller-identity` returns. Or assume a role before running all targets. Or extract values from a terraform.tfvars.json, to pass to the above two steps. Or bootstrap a remote backend if it doesn't exist. Or remove stale module symlinks. Or properly run init, get, and validate before running a plan or apply. Or document weird -target usage. The end result of just running make prep and make apply with no further knowledge required is exactly the experience I wanted out of Terraform initially.
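Illustrative only (the target names are invented, and make -n is used so terraform need not be installed): chaining init and validate ahead of plan looks roughly like this:

```shell
# prep runs before plan because it's listed as a prerequisite.
demo=$(mktemp -d)
cat > "$demo/Makefile" <<'EOF'
.RECIPEPREFIX = >
.PHONY: prep plan
prep:
> terraform init -input=false
> terraform validate
plan: prep
> terraform plan -out=tf.plan
EOF
make -n -C "$demo" plan   # -n prints the commands without running them
```

The role-assumption and backend-bootstrap steps described above would slot in as further prerequisites of prep.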
I sometimes use GNU Make to fire off custom code generators, before the files are handed off to other parts of the toolchain which can have their own complicated dependency management. This works quite well. The one annoying problem that I often encounter is that Make does not handle multiple targets produced by a single recipe (i.e. the code generator generates multiple files, e.g. 'file1.h', 'file1.cpp', 'file2.h', 'file2.cpp', 'test.cpp'). I usually end up inserting a bunch of .PHONY targets, which causes unnecessary evaluation of the dependency graph, but at least it works, instead of breaking in seemingly random ways.
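One common workaround, sketched with invented file names: a *pattern* rule with several targets runs its recipe once per stem and is credited with producing all of them (GNU make 4.3+ alternatively offers grouped explicit targets via "&:"):

```shell
# Pattern rule with two targets: the recipe runs once for stem "file1".
demo=$(mktemp -d)
cd "$demo"
cat > Makefile <<'EOF'
.RECIPEPREFIX = >
all: file1.h file1.cpp
%.h %.cpp: %.def
> touch $*.h $*.cpp
EOF
touch file1.def
make
ls file1.h file1.cpp
```

An explicit (non-pattern) rule with two targets would instead run the recipe once per target, which is exactly the breakage described above.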
My other use of Makefiles is to capture small (< ~5 line) bash, Python, or similar scripts for doing certain things within a directory. I find that to be more efficient than documenting that sort of info in a README.md file.
Not sure what you mean by "Make does not handle multiple targets".
You can definitely do `make foo bar` and it will run the recipes for both foo and bar. You can also write a recipe with multiple prerequisites (which could be the result of a variable expansion).
Curious about the limitation; could be there's a way around it or that I never ran into it.
Make is very programmable; here's something from our code base:
    # Yes I am aware that this looks like TECO and Prolog had a baby.
    $(foreach prog,${3P-nonboost-packages},$(patsubst %,3P-build-%/${prog},${MAKE_CONFIGURATIONS})): 3P-src/$${@F} $(patsubst %,toolchain-%/_env,${CONFIGURATIONS})
            @echo Building ${@F} for $(subst 3P-build-,,${@D})
            @mkdir -p $@
            @if [ -f "$</CMakeLists.txt" ] ; then \
                ${env-$(subst 3P-build-,,${@D})} cd $@ ; \
                cmake -DCMAKE_MODULE_PATH='../../3P-$(subst 3P-build-,,${@D})/lib;../../toolchain-$(subst 3P-build-,,${@D})/lib' ${${@F}-cmake} -DCMAKE_INSTALL_PREFIX=../../3P-$(subst 3P-build-,,${@D}) ../../$< && cmake --build . ; fi
Using a plethora of disconnected, non-build targets in a Makefile to provide a "make <command>" language sometimes seems like such an anti-pattern. Those commands just want to be simple scripts, right?
Why does that pattern persist? I believe it is for these psycho-technical reasons.
1. The current directory "." is usually not in PATH for security reasons. But make ignores that; it reads a Makefile from the current directory.
The psychological hypothesis here is that people somehow like typing
make bundle
make yarn
make db-reset
compared to the no-Makefile alternative scripts:
./bundle
./yarn
./db-reset
Something always feels off about running a program as ./name.
2. If there are any shared make variables between the non-build utility steps like "make bundle" and actual build steps, then it's easier for those utility steps to be in the Makefile so they can interpolate the make variables. The scripted alternative would be to have shell variables in some "vars.sh" file that is sourced by all the commands. But then somehow the Makefile would have to pick those up also in some clean way, probably requiring a ./make wrapper:
    #!/bin/sh
    . ./vars.sh
    # propagate the needed subset of vars to make
    make FOO="$FOO" BAR="$BAR" "$@"
So I think these are some of the main sources of the "pressure" for various project-related automated tasks to go into the Makefile.
Another source of the pressure is that the "<command> <subcommand>" pattern is present elsewhere, like in version control tools "quilt push", "git blame", ...
It has the technical advantage of namespacing. If you have a make target called "ls", then "make ls" doesn't clash in any way with /bin/ls.
This uses ImageMagick commands to massage the various image files into the desired form without me having to manually invoke the commands image by image. Admittedly, on looking at it, I don't think I got a great deal of dependency-tracking mileage out of make in this case, because the source images weren't actually changing—only the build process was changing, and make doesn't track that (although redo, for example, does.) But in cases where you're dynamically adding new input files, make is super helpful for generating thumbnails or whatever from them. As long as the filenames don't have spaces.
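A sketch of that dynamic-input setup (file names invented; convert is ImageMagick, but make -n only previews the commands, so nothing is actually converted here):

```shell
# $(wildcard) picks up whatever .jpg files exist at make time;
# a pattern rule maps each one to a thumbnail target.
demo=$(mktemp -d)
cd "$demo"
touch photo1.jpg photo2.jpg
cat > Makefile <<'EOF'
.RECIPEPREFIX = >
SRC    := $(wildcard *.jpg)
THUMBS := $(SRC:%.jpg=thumb/%.jpg)
all: $(THUMBS)
thumb/%.jpg: %.jpg
> @mkdir -p thumb
> convert $< -resize 128x128 $@
EOF
make -n   # prints one convert command per source image
```

Drop a new image in the directory and the next make picks it up; only missing or stale thumbnails get rebuilt.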
My most immediate work task for the morning is helping a colleague figure out why SCons is failing to build the JNI binding for our project, although the old makefile builds it fine. Sigh.
Make has very serious problems with its design, in my opinion. Its builds are not hermetic. There's no way to distribute/include another person's Makefile. The language it uses is extremely complicated and focuses on being compact instead of easy to understand.
I wish we all dropped Makefiles and decided on a single build system in the Bazel lineage to lean on. The world would be a better place if everything came with BUILD files.
GNU make build systems can be horrible to debug when they get complicated. CMake and some other build systems can generate Makefiles, in addition to running checks before the build that are useful for finding all dependencies. I find it easier to work with CMake than with pure Makefiles.
I use Makefiles when I'm learning the ropes of a new system build tool. E.g. "I want to do <foo>, so I run `make <foo>`", and the make target named <foo> has all the commands to build what I want. I did this when I was learning how Docker worked. I put the incantation to build a new image into a Makefile, as well as how to run the container and exec into it. Not the best system, but works for me as a kind of living notebook.
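A sketch of that living-notebook style (image name and flags invented; make -n previews the incantations without needing docker installed):

```shell
# Each docker incantation becomes a phony target you can re-run later.
demo=$(mktemp -d)
cat > "$demo/Makefile" <<'EOF'
.RECIPEPREFIX = >
IMAGE := myapp:dev
.PHONY: build run shell
build:
> docker build -t $(IMAGE) .
run: build
> docker run --rm -p 8080:8080 $(IMAGE)
shell:
> docker run --rm -it $(IMAGE) /bin/sh
EOF
make -n -C "$demo" run
```

Because run depends on build, the notebook also records the order the incantations go in.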
I run Makefiles in other places too. <3 Couldn't live without it.
I found this video helpful in learning how to (ab?)use Makefiles: https://www.youtube.com/watch?v=fkEz_oVh0B4
Make is like the Lisp of the build world. It's powerful and you can build anything with it, but it won't be compatible with anyone else's stuff the way it would be in a more opinionated system, so you can't leverage other peoples' work much.
I used make for decades, then switched to CMake, got burned too many times, and now I've moved on to Meson. There really isn't a good build system for C/C++, which is a shame :/
You can express so much in Make, and so quickly; but the expression is horrible and basically confusing.
A small change, but being able to just do $foo instead of $(foo) is so nice.