Funny how build tools keep being reinvented again and again, with all the same culprits. I've been a programmer since the mid-90s; there are countless tools for the job, and they only seem to get more cumbersome and complicated as they evolve. I'm personally sticking to the good old configure/make for my own C projects, but I understand that's a matter of taste and habit. I've written some CMakeLists.txt from time to time, and anything beyond the basics requires digging through Stack Overflow answers (thanks to those who answer) for those "not so common, but I still need them" features.
I'm not criticizing anyone or any tool, just noting that we can't seem to find a good way to handle compiling both trivial and very complex projects without losing an hour or two to it.
I also understand why: I wouldn't want to work on such an endeavor myself, that seems boring AF, so kudos to the people who like it. There aren't enough of you, it seems :)
Plan 9’s mk isn’t really a radical departure from Make, it is more of an adaptation of it to the (decidedly non-POSIX) Plan 9 shell, rc, with modest extensions. One design mistake in make that mk fixes is variables in recipes: those are now passed as environment variables, with no prior substitution, so no more writing $$$$ to get the current PID in a recipe (and it’s written $pid in rc anyway).
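For illustration, a minimal mkfile sketch (file names and commands hypothetical; `$target`, `$prereq`, and rc's `$pid` are passed to the recipe as plain environment variables, per the behavior described above):

```mk
# Recipes are rc scripts; mk exports $target and $prereq into the
# environment instead of macro-substituting them into the recipe text.
hello: hello.c
	cc -o $target $prereq
	# rc's own $pid -- no $$$$ escaping dance as in make
	echo compiled by process $pid
```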
Unfortunately, mk inherits from rc the principle of having the list of strings as the fundamental datatype (also used by Jam). That works, but it’s noticeably more limiting than Tcl’s route of having everything be strings but with robust quoting and unquoting procedures for putting lists inside them—at which point Tcl starts to look like a Lisp-2 with a slight propensity for stringiness and a mildly unusual syntax.
It seems to me that all build systems are just DAGs with syntactic sugar and functions. You could do away with build systems entirely if there were some Unix commands that manage an arbitrary DAG, where the state is kept out of band (in a file, in a database, etc) so you can operate on the DAG from any process. That way the shell (or any program, really) becomes your build system and you can compose any kind of logic that requires walking or manipulating a tree of dependencies. This could apply to anything where you need to execute arbitrary jobs with a DAG, not just builds.
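As a rough sketch of that idea (hypothetical, not an existing tool): keep the graph out of band in a JSON file, so any process — a shell script, a cron job — can add edges and walk dependencies in order.

```python
import json, os

STATE = "dag.json"  # out-of-band state; could be a database instead

def add_edge(path, target, dep):
    """Record in the state file that `target` depends on `dep`."""
    graph = json.load(open(path)) if os.path.exists(path) else {}
    graph.setdefault(target, [])
    if dep not in graph[target]:
        graph[target].append(dep)
    graph.setdefault(dep, [])
    with open(path, "w") as f:
        json.dump(graph, f)

def walk(path, target, visit):
    """Call `visit` on target and everything it needs, dependencies first."""
    graph = json.load(open(path))
    seen = set()
    def rec(node):
        if node in seen:
            return
        seen.add(node)
        for dep in graph.get(node, []):
            rec(dep)
        visit(node)
    rec(target)

# Any program could do the same by touching the state file.
add_edge(STATE, "app", "main.o")
add_edge(STATE, "main.o", "main.c")
order = []
walk(STATE, "app", order.append)
print(order)  # ['main.c', 'main.o', 'app']
```

A real version would also track node freshness (mtimes, hashes) in the same state file, but the point is that the walking logic lives outside any one build tool.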
> It seems to me that all build systems are just DAGs with syntactic sugar and functions.
True in the broadest sense, but there are choices to be made regarding the possibility of discovering what the graph is or has become on the fly, and regarding the propagation directions. See “Build systems à la carte”[1,2] for a systematic exploration.
(See also a neighbouring comment[3] re how the discourse structure[4] of the build script might be important in a way orthogonal to these execution-engine issues. The boundary between the build system and the build tool proper can be drawn in very different places here.)
Nix represents its builds in a similar way: each "derivation" is a text file like /nix/store/XXXXXXX-foo.drv, where XXXXXXX is a hash of that file's content. This way each file can reference any others by their path, and there can never be cycles (unless we brute-forced SHA256 to find a pair of files which contain each other's hashes!). This use of hashing requires the files to be immutable, but that's good for caching/validation/reuse/etc. anyway.
Note that Nix doesn't use a shell to execute things, it uses raw `exec` calls (each .drv file specifies the absolute path to an executable, a list of argument strings, and a set of environment variable strings). Though in practice, most .drv files specify a bash executable ;)
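To make the no-cycles argument concrete, here is a toy content-addressed store in Python (illustrative only; real .drv files use Nix's ATerm format and full SHA-256 store paths):

```python
import hashlib, os

STORE = "store"
os.makedirs(STORE, exist_ok=True)

def add(content: str) -> str:
    """Write content under a name derived from its own hash."""
    name = hashlib.sha256(content.encode()).hexdigest()[:12]
    path = os.path.join(STORE, name + ".drv")
    with open(path, "w") as f:
        f.write(content)
    return path

# The dependency must exist before anything can name it...
dep = add("builder for libc")
# ...and this file embeds that exact hash-based path, which changes its
# own hash. Referencing it back from `dep` would require knowing this
# file's hash before writing `dep` -- i.e. brute-forcing SHA-256.
drv = add(f"builder for foo, depends on {dep}")
print(dep != drv and dep in open(drv).read())  # True
```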
> It seems to me that all build systems are just DAGs
Yeah, well, it's a little bit more than that.
Two things come to mind that don't neatly fit the DAG mental framework:
- dynamically generated dependencies (e.g. when you compile a C++ file only to discover that it #includes something and therefore depends on that thing, so the DAG has to be updated on the fly). Writing these by hand is horribly tedious, bordering on impossible (#includes that #include other #includes, ad infinitum)
- reproducible builds, where a build system is capable of rebuilding a binary from scratch down to having not a single different bit in the final output assuming the leaves of the DAG haven't changed. A desirable feature that is darn near impossible to do unless you pair the DAG with something else.
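The first point can be sketched in a few lines — this is roughly what `gcc -MM` or a build tool's header scanner does, growing the dependency set as it goes (toy code; it only handles `#include "..."` and ignores search paths):

```python
import os, re, tempfile

INCLUDE = re.compile(r'^\s*#\s*include\s+"([^"]+)"', re.M)

def discover(path, deps=None):
    """Scan a C file for quoted #includes, recursing into each header."""
    deps = set() if deps is None else deps
    for name in INCLUDE.findall(open(path).read()):
        child = os.path.join(os.path.dirname(path), name)
        if child not in deps and os.path.exists(child):
            deps.add(child)
            discover(child, deps)  # headers include other headers
    return deps

# Tiny demo: a.c includes a.h, which includes b.h.
d = tempfile.mkdtemp()
open(os.path.join(d, "b.h"), "w").write("int g(void);\n")
open(os.path.join(d, "a.h"), "w").write('#include "b.h"\n')
open(os.path.join(d, "a.c"), "w").write('#include "a.h"\nint main(){}\n')
found = sorted(os.path.basename(p) for p in discover(os.path.join(d, "a.c")))
print(found)  # ['a.h', 'b.h']
```

The DAG framing still holds, but the graph is only fully known after (partially) running the build — which is exactly what static Makefile-style descriptions struggle with.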
Just isn't really a replacement for make outside of the simple command/task-runner use case. Make is a lot more powerful and has things like filesystem-driven dependency handling.
I've been using 'just' for running a set of commands that I can't be bothered to remember.
It makes it easier to come back to a project after a few weeks, because you don't have to remember the N commands you were using to iterate/test, you only have to remember the 'just' invocation.
For some reason, it feels like a more natural fit for this than 'make'
Was going to mention this. I've recently started converting my Makefiles to justfiles, and it's just nicer. Even being able to inline scripts to clean up "loose" files is a big win.
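For example, a small hypothetical justfile (recipe names and commands made up; the shebang form is a just feature that makes inline scripts straightforward):

```just
# `just test` instead of remembering the exact invocation
test:
    pytest -x tests/

# A shebang recipe runs as one inline script, handy for cleanup
clean:
    #!/usr/bin/env bash
    rm -f ./*.tmp ./*.log
    echo cleaned
```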
I own a couple books on Make and it's a great tool, but I learned how to use Mk by just reading the manpage. It's a huge improvement and simplification at the same time.
To add to the pile of task runners here, I made this one: https://code.ofvlad.xyz/v/lightning-runner . I like it, and currently use it for all my personal projects. Mainly posting here for some feedback!
Research Unix wasn't easily available; AFAIK Bell Labs was trying to sell it as a product, so it lost out to freer alternatives. It was only open-sourced years after being abandoned.
AceJohnny2 | 2 years ago:
For an excellent synthesis of what Makes a build system, I can't recommend the 2018 paper "Build Systems A La Carte" enough:
https://www.microsoft.com/en-us/research/uploads/prod/2018/0...
By Andrey Mokhov, Neil Mitchell (now at Meta, working on the Buck2 build system), and Simon Peyton Jones (one of the designers of Haskell).
mananaysiempre | 2 years ago:
[1] https://dx.doi.org/10.1145/3236774
[2] https://youtu.be/BQVT6wiwCxM
[3] https://news.ycombinator.com/item?id=36749885
[4] https://brenocon.com/blog/2009/09/dont-mawk-awk-the-fastest-...
wahern | 2 years ago:
There is: tsort. It's a POSIX utility, even, not just a GNU or BSD utility.
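Concretely, each input line to tsort is a "predecessor successor" pair, so the dependency state can live in a plain file that any process appends to (file names here are made up):

```shell
# Out-of-band dependency state: one ordered pair per line.
cat > deps.txt <<'EOF'
main.c main.o
util.c util.o
main.o app
util.o app
EOF

# tsort prints the nodes in a valid build order (sources before
# objects, objects before the final binary).
tsort deps.txt
```

Different implementations may pick different (equally valid) orders among independent nodes, but the single sink `app` always comes last.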
ripe | 2 years ago:
I used this tutorial:
https://nullprogram.com/blog/2017/08/20/
raddan | 2 years ago:
https://www.usenix.org/conference/atc22/presentation/curtsin...
(full disclosure: I am one of the authors)
TheLocehiliosan | 2 years ago:
https://github.com/casey/just
vmsp | 2 years ago:
https://9fans.github.io/plan9port/
jacobvosmaer | 2 years ago:
https://github.com/9fans/plan9port/blob/cc4571fec67407652b03...
With sub-second build times for individual targets, this causes mk to needlessly recompile files, because the target may end up with the same mtime as its prerequisites.
whiteinge | 2 years ago:
There's a solid, stand-alone implementation of mk in golang. No plan9 environment needed.
https://github.com/henesy/mk
adamgordonbell | 2 years ago:
https://github.com/adamgordonbell/job-runner/blob/main/tests...