My most horrible abuse of `make` was to write a batch job runner.
Most of the targets in the Makefile had a command to kick off the job and wait for it to finish (this was accomplished with a Python script, since kicking off a job involved telling another application to run it), followed by a `touch $@` so that make would know which jobs had run successfully. If a job had dependencies, these were declared as you'd expect.
The other targets in the Makefile lashed those together into groups of processes, all the way up to individual days and times. So "monday-9pm" might run "daily-batch", "daily-batch" would have "daily-batch-part-1" (etc), and each "daily-batch-part-..." would list individual jobs.
It was awful. It still is awful because it works so well that there's been no need to replace it. I keep having dreams of replacing it, but like they say there's nothing more permanent than a temporary solution.
All of this was inspired by someone who replaced the rc scripts in their init system with a Makefile in order to allow processes to start in parallel while keeping the dependencies in the right order.
> All of this was inspired by someone who replaced the rc scripts in their init system with a Makefile in order to allow processes to start in parallel while keeping the dependencies in the right order.
Sometimes the most interesting thing is not the story itself, but the story behind the story.
This has my interest piqued. Is there anywhere else I can read about this?
> All of this was inspired by someone who replaced the rc scripts in their init system with a Makefile in order to allow processes to start in parallel while keeping the dependencies in the right order.
Any sufficiently complicated init system contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of systemd.
I did the same. Parallelization comes free with GNU make's `-j`, and recoverability too (only rerun the failed steps, not everything from scratch). If you use the remake fork of GNU make, you also get debugging and profiling for free.
My most horrible abuse of make was a distributed CI system: I put a wrapper in the MAKE env var so that recursive make invocations would call my wrapper, which would enqueue jobs for remote workers to pick up.
Slightly tangential but I've worked for several companies now that use `make` as a simple command runner, and I have to say it's been a boon.
Being able to drop into any repo at work and expect that `make init`, `make test` and `make start` will by convention always work no matter what the underlying language or technology is, has saved me a lot of time.
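As a sketch, such a Makefile is often nothing more than phony targets delegating to whatever the repo actually uses (the script paths here are invented placeholders):

```make
# Hypothetical Makefile following the init/test/start convention.
# .PHONY marks these as commands rather than files to build.
.PHONY: init test start

init:        # install dependencies
	./scripts/install-deps.sh

test: init   # run the test suite
	./scripts/run-tests.sh

start: init  # start the local dev environment
	./scripts/start-dev.sh
```

The underlying tooling can change per repo; the target names stay the same, which is the whole point of the convention.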
I've worked on a few projects that apply this pattern of using a Makefile to define and run imperative commands. A few people develop the pattern independently, then it gets proliferated through the company as part of the boilerplate into new repositories. It's not a terrible pattern, it's just a bit strange.
For many junior colleagues, this pattern is the first time they've ever encountered make -- hijacked as some kind of imperative command runner.
It's quite rare to run into someone who is aware that make can be used to define rules for producing files from other files.
I find it all a bit odd. Of course, no-one is born knowing about middle-aged build tools.
This was the nicest thing about blaze at google. I'm a big believer that having a single standard tool for things is a huge value add, regardless of what the tool is. I didn't really like blaze particularly, and I don't really like make particularly, but it's amazing to just have a single standard that everybody uses, no matter what it is.
Conventions are great, but that doesn't look like anything specific to make, a shell wrapper could do that:
#!/bin/sh
case "$1" in
  init)
    # ... do whatever for each project init
    ;;
  start)
    # ... do whatever for each project start
    ;;
  test)
    # ... do whatever for each project tests
    ;;
  *)
    echo "Usage: $0 init|start|test" >&2
    exit 1
    ;;
esac
In my home/personal projects I use a similar convention (clean, deploy, update, start, stop, test...), I call those little sh scripts in the root of the repo "runme".
The advantages, maybe: no need to install make if it's not present, and no need to learn make syntax if you don't know it.
Sometimes they don't match the usual words (deploy, start, stop, etc) but then I know that if I don't remember them, I just type ./runme and get the help.
For my scenario, it's perfect because of its simplicity.
This is standard in the Node.js ecosystem and I love it. Each package has scripts in package.json that you can run with `npm run [name]`, and some of these, like start, test or build (and more), are standardized. It's really great DX.
I did this for a while but make isn't well suited for this use case. What I ended up doing is have a shell script with a bunch of functions in it. Functions can automatically become a callable command (with a way to make private functions if you want) with pretty much no boilerplate code or arg parsing. You can even auto-generate a help menu using compgen.
The benefit of this is it's just shell scripting so you can use shell features like $@ to pass args to another command and everything else the shell has to offer.
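A minimal sketch of that pattern, with made-up task names and messages (the dispatch trick is `"${@:-help}"`; functions prefixed with `_` are treated as private by convention):

```shell
#!/usr/bin/env bash
# Hypothetical "run" script: every public shell function is a callable task.
set -euo pipefail

test() {              # ./run test [args...] -- extra args pass straight through
  echo "running tests with args: $*"
}

start() {             # ./run start
  echo "starting dev server"
}

_private_helper() {   # leading underscore: hidden from the help listing
  echo "internal"
}

help() {              # auto-generated task menu via compgen
  echo "Available tasks:"
  compgen -A function | grep -v '^_'
}

# Dispatch: the first CLI argument names the task; default to help.
"${@:-help}"
```

Invoked as `./run test -v`, the remaining arguments land in the function's `$@`, which is the main advantage over make here.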
For a task runner I really like just and its Justfile format: https://github.com/casey/just It is heavily inspired by make but doesn't focus on the DAG stuff (but does support tasks and dependencies). Crucially it has a much better user experience for listing and documenting tasks--just comment your tasks and it will build a nice list of them in the CLI. It also supports passing CLI parameters to task invocations so you can build simple CLI tools with it too (no need to clutter your repo with little one-off CLI tools written in a myriad of different languages).
If most of your make usage is a bunch of .PHONY nonsense and tricks to make it so developers can run a simple command to get going, check out just. You will find it's not difficult to immediately switch over to its task format.
The advice on output sentinel files for rules creating multiple files helps keep rebuilding of dependencies reliable. Avoiding most of the cryptic make variables also helps Makefiles remain easily understandable when you're not frequently working on them. And using .ONESHELL to allow multi-line statements (e.g. loops, conditionals, etc.) is great. No need to contort things into one line or escape line breaks.
Seems like you could even use a more serious programming language instead of sh/bash by setting SHELL to Python or similar. That may be a road to madness though...
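A sketch combining both tips, assuming GNU make (3.82+ for `.ONESHELL`) and a made-up `generate-pages` command that writes many files at once:

```make
.ONESHELL:

# Sentinel pattern: the recipe produces many files under out/, so
# dependents track a single stamp file instead of every output.
out/.stamp: $(wildcard templates/*.tmpl)
	mkdir -p out
	generate-pages templates/ out/
	touch $@

# With .ONESHELL the whole recipe runs in one shell invocation, so
# multi-line loops need no backslash continuations or semicolons.
check: out/.stamp
	for f in out/*
	do
	  echo "generated: $$f"
	done
```

Touching the stamp last means an interrupted generation run leaves the rule out of date, so the next make rebuilds it.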
Never loved make. First used it in the early nineties and found the syntax obscure and error messages cryptic.
My response to this article would be, if make is so great why did they have to invent 'configure' and 'xmkmf'? And why do people continue to create new build tools every couple of years?
Yeah, I mean I guess it worked, but unreasonably effective? Hardly.
> … why do people continue to create new build tools every couple of years?
Seems like a rite[0] of passage to some degree. Perhaps similar to people taking a stab at The Next Actually Correct CMS, and The Next Object System That Doesn’t Suck, or The Next Linux Distro For Smart People.
I've turned to CMake to do some really weird dependency management for various script calling. It's much more scriptable/friendly than make in its modern form, but obviously no Python :)
I once used GNU make to manage a large data pipeline on an 18-person project and it worked well.
We developed a lot of Python scripts. To manage them I created some helper tools to integrate them via make. People are welcome to reuse it; I got it released as open source software. I named it make-booster:
https://github.com/david-a-wheeler/make-booster
From its readme:
"This project (contained in this directory and below) provides utility routines intended to greatly simplify data processing (particularly a data pipeline) using GNU make. It includes some mechanisms specifically to help Python, as well as general-purpose mechanisms that can be useful in any system. In particular, it helps reliably reproduce results, and it automatically determines what needs to run and runs only that (producing a significant speedup in most cases)."
"For example, imagine that Python file BBB.py says include CC, and file CC.py reads from file F.txt (and CC.py declares its INPUTS= as described below). Now if you modify file F.txt or CC.py, any rule that runs BBB.py will automatically be re-run in the correct order when you use make, even if you didn't directly edit BBB.py."
This is NOT functionality directly provided by Python, and the overhead with >1000 files was 0.07 seconds, which we could live with :-).
Make provides a way to handle dependencies as DAGs. Using it at scale requires that you call or write mechanisms to provide that DAG dependency info, but any such tool needs that info. Some compilers already come with generators for make, and in those cases it's especially convenient to use make.
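For the compiler-generated dependency info mentioned above, GCC and Clang can emit a `.d` makefile fragment per object with `-MMD -MP`, which make then includes; a minimal sketch:

```make
# Sketch: automatic header dependencies via compiler-emitted .d files.
SRCS := main.c util.c
OBJS := $(SRCS:.c=.o)

prog: $(OBJS)
	$(CC) -o $@ $(OBJS)

# -MMD writes foo.d next to foo.o; -MP adds phony targets for headers
# so a deleted header doesn't break the build.
%.o: %.c
	$(CC) -MMD -MP -c $< -o $@

# The .d files don't exist on the first build; '-include' ignores that.
-include $(OBJS:.o=.d)
```

After the first build, editing any header re-runs exactly the compiles that depend on it.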
I really like using make for data pipelines as you suggest, and thanks for pointing out your package.
In this pipeline use case, you have base data, and a series of transformations that massage it into usable results. You are always revising the pipeline, usually at the output end (but not always) so you want to skip as many preprocessing steps as possible. Make automates all that.
This works great for image processing pipelines, science data pipelines, and physical simulators for a few examples.
There have been a few blog posts and ensuing HN discussions about this use pattern for make. The discussion generally gets mixed up between make’s use as a build system for code, alas.
I'm trying to understand why so many people seem to hate make.
I hate build systems that don't use a Makefile, or that use one but don't respect the variable conventions. It makes it really quite annoying to do things like change the allocation library, add compiler flags, etc.
Yeah. As far as I can tell, most people complaining about Make, and building its replacements haven't figured out how to use Make, or bothered to read the manual. It's really not that complicated...
As a build system make wasn't really designed to handle stuff like partial rebuilds, caching, or distributed building. Modern build systems like bazel are just orders and orders of magnitude faster and better for complex projects.
I'd encourage anyone thinking of using make to look at alternatives. Make is great, but it quickly becomes a ball of duct tape. Make works very well when you spend the time to express your dependency tree, but realistically that never happens and people tend to pile hacks upon hacks in their Makefiles. Not only that, but they don't scale well as your project adds more components, such as integration testing, documentation, etc.
I found Earthly[0] to be a great replacement. Everything runs in Docker. Your builds are reproducible, cache-able, and parallelizable by default. I've heard Dagger[1] is another good tool in the space.
It's unfortunate that other build systems haven't taken over. Make is terrible for incremental builds, and its reliance on binaries often means issues getting it to run and being very platform dependent. It is better than using a bat or shell file for the same purpose, but it's a long way behind many of the other language-specific tools. I am surprised something better hasn't become popular; Make is the CVS of build tools.
These days, I would absolutely not use Make to compile code written in C, except for the smallest personal projects. It is just too fussy to construct the Makefile correctly, in a way that you can get correct incremental builds. Nearly any other build system is better at building projects written in C, in the sense that it is easy & straightforward to get your build system to do correct incremental builds.
Honestly, I only find Makefiles useful when I have a tiny C/C++ project and need stuff just to compile quickly and easily without the overhead of a real build system.
For literally everything else, I found myself using it more as a task runner - and Make doesn't do a great job at it. You end up mixing Bash and Make variables, string interpolation, and it becomes really messy, really fast. Not to mention the footguns associated with Make.
I found bake (https://github.com/hyperupcall/bake) to suit my needs (disclaimer: I wrote it). It's literally just a Bash script with all the boilerplate taken care of for you -- what a task runner is meant to be, imo.
Make solves many complicated tasks: keeping track of when to re-run a target (there is cooperation with the compilers, which emit .d files so make knows which source files influence a binary), and running a jobserver to manage parallelism that also supports nested sub-makes. But it also has many ugly warts. It's hard to design something that solves those tasks as well as make does while remaining as general as make is. Replacements often solve a subset -- whatever currently itches the main developer. Tools that are both as general and as comprehensive as make are rare. Ninja, for example, checks many of the boxes, but lacks make jobserver support.
> File-watcher or live-reloading. You can create a build/deploy loop fairly easily
When I worked with latex more, I kept a ~/.Makefile-latex and a shell function that would pretty much just do
inotifywait -e modify -e move --recursive --monitor . | while read; do make --makefile ~/.Makefile-latex; done
and I kept emacs and xpdf in side-by-side windows. Whenever I'd save a file in emacs (or xfig or whatever), a second later xpdf would refresh the pdf (and stay on the same page). It took away some of the pain of working with latex.
edit: I used this complicated setup instead of LyX or whatever other "(la)tex IDE" because I had ancillary files like xfig diagrams that would get compiled to .eps files and gnuplot scripts that would render data into graphs, and the makefile knew how to generate everything.
This is funny, because I just wrote a (fish) shell script that does this as well because all of the tex IDEs are so painful. Mostly because the entire efficiency of latex is that you're editing text and can do it in a text editor like emacs and move things around very quickly. I don't want a new interface!
But. I'm kind of proud. My shell script monitors the tex files for character changes and then, once a (configurable) threshold of changes is met, it kicks off the compilation process. But the real game changer is that every time it compiles, if the compile is successful it commits the recent edits to a git branch. Then, if I want, I can go through the git branch and see the entire history of edits for only versions of the document that compiled. It's a game changer in a big way. When I finish a section, I squash the minor edits into one commit, give it a good message, and then commit the full thing to the main branch. That is where I make sure my manuscripts look the way they should and do major revisions or collaborative edits.
The icing on the cake is that the fish script monitors for a special keypress and will take actions. So I hit "c" to compile the tex, "b" to rerun the massive bibliography compilations, "s" to squash the commits into the main branch, and "l" to display log errors. It's a dream! Now I don't think about compilation at all, or when I should commit something minor to git (and fiddle with the commands). I just type away and watch/request the pdf refresh when I need it... and _actually_ get work done. My god. So happy.
I like make. But these days to me the best part about it is that it’s a common entry point. Most popular languages come with their own make-esque tools that provide the same experience to developers and systems.
For tying together multiple projects, source from different locations, etc., I'd probably use make or a script.
The short article conflates popularity with quality. Windows 3.11 became the most sold os in history despite being utter trash. Make is popular because it was the first build system, not because it is not utter trash.
I love Make. It's a terrible tool for quite a few things, but it's awesome at the thing I use it most for - abstracting away complex series of shell commands behind one or two words. It's like shell aliases that can follow a repo anywhere.
make test
make format
make clean
make docker-stack
Fantastically useful stuff, even if all it's doing is calling language specific build systems in the background.
I wanted to learn makefiles to generate a static website.
Quickly ran into some massive limitations - one of which is that it completely broke apart when filenames had spaces in them.
"But why would you do that you're doing it wrong" - don't care wanted spaces in filenames.
Ended up switching to Rake (a make-like DSL written in Ruby) and never looked back. Not only can you do all the make declarative stuff, but you get the full power of Ruby to mess around with strings.
Anyone who finds make unreasonably effective must be working with GNU Make.
If I had to use some barely POSIX-conforming thing from BSD or wherever, I'd instead write a top-to-bottom linear shell script full of conditionals.
Heh. The text that follows this sentence is likely the most beautiful and elegant use of a Makefile ever.
I love the humble bragging of this site.
I've written about this process at https://nickjanetakis.com/blog/replacing-make-with-a-shell-s... and an example file is here https://github.com/nickjj/docker-flask-example/blob/main/run.
That command (to run an Angular/Node.js dev instance) has staved off carpal tunnel syndrome for me for maybe another five years.
> … if make is so great why did they have to invent 'configure' and 'xmkmf'?

Cross-architecture and Linux distro compatibility, mostly.
[0] edit: corrected “right V. rite” per https://news.ycombinator.com/item?id=32442473
[0]: https://earthly.dev/
[1]: https://dagger.io/
On automatic dependency generation: https://scottmcpeak.com/autodepend/autodepend.html
And this, on avoiding recursive make: https://accu.org/journals/overload/14/71/miller_2004/
I usually add something like this command as the `auto` target in my latex Makefiles, which works pretty nicely.
I just finished this today.
Seems like the rule that every other version of Windows is trouble still stands.
And the feature bloat slippery slope is sliding towards Bazel!
https://github.com/TekWizely/run
It feels like a makefile but is optimized for managing and invoking small tasks and wrappers, and it auto-generates help text from comments.