God I hate make. I hate make so very, very much. Compiling code isn't that hard. I swear to the programming lords on high it isn't.
Here's a wonderfully useful open source project - Google gperftools [1]. It includes TCMalloc amongst other things. The Makefile.in is 6,390 lines long. Configure is 20,767 lines long. Libtool is 10,247 lines long. That's fucking insane.
Compiling open source is such a pain in the ass. Particularly if you're trying to use it in a cross-platform Windows + OS X + Linux environment. One of my favorite things about iOS middleware is that one of their #1 goals is to make it as easy as possible to integrate into your project. Usually it's as simple as adding a lib, including a header, and calling one single line of code to initialize their system (in the case of crash report generation and gathering).
I work on a professional project. We have our own content and code build pipeline. It supports many platforms. I don't want anyone's god damn make bullshit. I want to drop some source code and library dependencies into my source tree that already exists and click my build button that already exists.
</rant>
[1] https://code.google.com/p/gperftools/
What you are describing is autoconf, not make. Make by itself is actually a very handy tool for performing tasks that have a dependency graph.
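A minimal sketch of that dependency-graph core, with invented file names (recipe lines are indented with a literal tab); make reruns a recipe only when a prerequisite is newer than its target:

```makefile
# Build report.pdf from report.md; the recipe runs only when the
# source is newer than the output. pandoc is just a stand-in command.
report.pdf: report.md
	pandoc -o report.pdf report.md

# A phony target names a task rather than a file.
.PHONY: all
all: report.pdf
```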
Autoconf... Well, I can't disagree. It's a hack built on top of a hack and should probably be rethought. Once autoconf is done generating Makefiles, make itself is generally trouble-free.
http://freecode.com/articles/stop-the-autoconf-insanity-why-...
That's a GNU autotools setup, not a pure Makefile setup. A simple Makefile setup is much, much easier. A current C project I'm working on has around 150 lines worth of Makefile stuff, and that includes compiling source, running flex over lexer files, an option for creating a .tar.gz of the current source, and an option to run tests on the compiled object files. It's also dead simple to maintain (the only time it requires a direct update is when adding test cases, and that's because you have to specify which source .c files to test).
Now to be fair, I am sacrificing some options here. The biggest is that autotools runs tests on the installation environment and tests the available functions and standards compliance, which in theory allows compiling the source on any system that has autotools support, which is why it's so huge. You can't do that with standard Makefiles. I just stick close to the standard and avoid any non-standard extensions I don't need.
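For comparison, here's a hedged sketch of what such a small plain-Makefile C setup might look like. All names (prog, src/, run-tests.sh) are invented, and GNU make is assumed:

```makefile
CFLAGS := -std=c99 -Wall
SRCS   := $(filter-out src/lexer.c,$(wildcard src/*.c))
OBJS   := $(SRCS:.c=.o)

prog: $(OBJS) src/lexer.o
	$(CC) -o $@ $^

# Run flex over the lexer description to get C source.
src/lexer.c: src/lexer.l
	flex -o $@ $<

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

# Tarball of the current source.
dist:
	tar czf prog.tar.gz src/ Makefile

# Tests are the one part that needs manual updates.
check: prog
	./run-tests.sh

.PHONY: dist check
```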
Autotools is crap. Configure on its own isn't godawful -- it scans for a LOT of Unix variants, and does sniff for features reasonably well. On my more cynical days, I'd say it does a good job of making things portable from unix to unix. Everything else about autotools is unmitigated crap, though.
I've written my own generic make library. The library itself is 95 lines of script, and handles all the dependency sniffing, library and binary building, and so on that I need in a reusable way.[1]
The makefiles themselves just list the sources and object files. They're typically only a few lines, like so[2]:
BIN = mybinary
OBJ = foo.o bar.o baz.o
include ../mk/c.mk
That's it.
[1] http://git.eigenstate.org/ori/mc.git/tree/mk/c.mk
[2] http://git.eigenstate.org/ori/mc.git/tree/6/Makefile
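The shared c.mk isn't reproduced here, but a generic include like that might plausibly look something like this sketch (GNU make assumed; this is not ori_b's actual file):

```makefile
CFLAGS += -Wall -O2
DEPS   := $(OBJ:.o=.d)

$(BIN): $(OBJ)
	$(CC) -o $@ $(OBJ) $(LDFLAGS)

# Dependency sniffing: the compiler emits a .d makefile per object.
%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c -o $@ $<

-include $(DEPS)

clean:
	rm -f $(BIN) $(OBJ) $(DEPS)

.PHONY: clean
```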
I've come to think that build systems are a very personal utility. Everyone has their favourite. Mine's fabricate.py, for example. Over time I've built up a library of script snippets and shortcuts and so on which I'm familiar with, comfortable with, and exactly fulfil all my use cases. It's all very clever. :)
Yet when I download some random project's source code, I groan at any sophistry in the build process at all. I'm not interested in your build system - I'm interested in the application itself. Maybe I want to try and fix a bug, or have a half-baked idea for a new feature. I don't need dependency checking, incremental rebuilding, parallel building, and all that stuff you get from a fully operational build system at this point. I only need to build the project - once - as I decide whether to stick around. Sure, if I start working on it for serious, rebuilding over and over - then I'll bother to learn the native build system, and read any complicated scripts. Build systems are an optimization for active developers. They're a utility that is supposed to save time.
Of course, you're never going to get everyone in the world to agree on the same build system. We all have different desires and needs for what machines it should run on, how automated, how much general system administration it should wrap up, how abstractly the build should be described, etc. It's a bit like one's dot files or choice of text editor - my ideal build is tailored just for me but I wouldn't expect it to satisfy anyone else.
So now I wish that everyone who distributes software as source code would do this: include a shell script that builds the project. Just the list of commands, in the order that they are executed, that carries out a full build on the author's system. That's what it comes down to, in the end, isn't it? Your fancy build system should be able to log this out automatically. (Of course then you still include all the fancy build stuff as well, for those interested.)
Of course it's extremely unlikely that your shell script will work on my system without modification. There's probably machine-specific pathnames in there for a start. We might not even use the same shell! It's basically pseudocode. But if I'm faced with a straight list of imperative shell commands that doesn't work, and a program of some sort with its own idiosyncratic syntax and logic and a hundred-page manual and the requirement for me to install something - which also doesn't "just work" - well, as long as you know how to call your compilers and linkers and so on - which you should - the former is going to be easier to tweak into submission, to get that first successful build. After all, if I need much more than that I'll probably just recreate the build in my favourite system anyway.
Count me in as a Makefile hater. I'm even using it to manage a portable /home directory and it's just making me hate it even more. Why did all the alternatives have to fail or be worse than Make?
With the theme of ageism and NIH being brought up a lot lately, I'm glad to see that the neck-bearded unix philosophy (tm) of composition and single purpose are winning over these complicated object oriented frameworks.
Exactly my thoughts too, after reading the ageism article. An "old guy" (like me) would not even think twice and would just put "make" to great use, as he always has, with great success and without the time-consuming need to learn a possibly "unpolished" tool that does not have all the bells and whistles. This is very similar to the NoSQL trap, and countless other technology short-cuts you have to run into before fully understanding why they might be a trap for your org. And it's the same reason why we old guys are sometimes disliked: for being critical of all these shiny, new "reinventing the wheel" tools. A tool that is not a decade old in many cases simply isn't battle-proof. The problem with us "old ones" maybe starts if we make this a rule rather than a principle of caution, though...
I'm not sure I would go as far as calling Grunt a complicated OO framework, but the first thing that crossed my mind when looking at Grunt was 'why not use make?' It may not be perfect, but at least you get incremental builds.
The problem with make is not that it's bad, it's that it's only really good at doing two things:
1) mapping a source pattern to an output pattern
2) managing dependencies between rules
To be fair it's good at those, and often the sorts of things you can do with a rule are quite complex (being basically shell scripts).
However, the problem is that 1) it's an obscure DSL and 2) that it is really rubbish at doing more complicated things.
For example, grunt-contrib-clean lets you delete any files that match a pattern while leaving alone files that match a different pattern. Grunt also has a built-in templating language that can be used to expand configuration files from a submodule into local build scripts without copying the entire gruntfile. grunt-open launches a browser to a dev URL, cross-platform. The list goes on and on and on.
Make is terribly terribly bad at complex tasks like this, that's the problem.
You can write a custom shell script / ruby script / python script for these tasks, but why would you? Someone else already has. Don't repeat all the things every time with your own code.
If all you need to do is map .c to .o, or .scss to .css and .coffee to .js, use make, totally. It's good at that. Otherwise, stay away.
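Those mappings are exactly one pattern rule each; a sketch with illustrative tool names and paths:

```makefile
css/%.css: scss/%.scss
	sass $< $@

js/%.js: coffee/%.coffee
	coffee -c -o js $<
```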
There are other benefits to make. First, you'll need to ask users to install grunt, whereas make is standard and is just there (except for Windows, probably). Second, make provides a language that is optimised to express the information it needs, whereas grunt uses a JSON file. It gave me headaches when I needed to edit grunt config at my last job, and I've never touched grunt since. Third, all the tasks you've specified can be done with standard UNIX utilities, whereas with grunt you depend on third-party libraries. Also, I do not think that any of those tasks are complex.
I'd rather rewrite a 4-LOC shell script in my new project than depend on a build tool that itself runs on a non-standard, infant runtime and needs third-party libraries just to delete files.
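For instance, the grunt-contrib-clean use case mentioned above (delete files matching one pattern, spare another) is a one-liner with standard find(1); the directory and patterns below are made up:

```shell
# Set up some throwaway generated files.
mkdir -p build
touch build/app.min.js build/app.min.css build/keep.min.js build/notes.txt

# Delete anything matching *.min.* except files named keep.*
find build -type f -name '*.min.*' ! -name 'keep.*' -delete

ls build    # keep.min.js and notes.txt survive
```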
I'll offer my own input here: goaljobs[1]. (By the way, I don't recommend people use goaljobs unless you're prepared for a lot of assembling and are interested in understanding what it does -- it's not easy to use at all.)
You can break complex tasks like "has my software been delivered through the app store?" down into goals that have to be fulfilled by carrying out (recursively) many layers of rules, like "did it pass human evaluation?". It's a generalization of make / build systems.
[1] http://people.redhat.com/~rjones/goaljobs/
I'm glad to see the resurgence in make's popularity among front-end developers. It really is a great tool for not just building apps, but generating files that depend on other files in a declarative way. I manage my website with just make, pandoc, and rsync; and I manage my various ssh configurations with make and m4 (ssh config doesn't have an include directive!). A while ago I wrote an article to help out some front-end dev friends get acquainted with make, maybe someone else will find it useful: http://justinpoliey.com/articles/make-for-front-end-developm...
It's also very handy for anything you write in LaTeX: add figures etc. as dependencies, make targets that run R or gnuplot to produce the figures in the right format, etc.
I've (ab)used apenwarr's redo[1] for a couple of big data processing projects, with mixed results.
One was a news recommendation engine. We pulled down and parsed RSS feeds, crawled every new link they referred to, crawled thumbnails for each page, identified and scraped out textual content from pages, ran search indexing on the content, ran NLP analysis, added them to a document corpus, ran classifiers and statistical models, etc.
Every step of the way took some input files and produced an output file. We used programs written in many different languages -- whatever was best for the job.
So a build system was the obvious way to structure all of this, and we needed a build system we could push pretty hard. Our first version used make and quickly ran into some limitations (essentially, we needed more control over the dependency graph than was possible with the static, declarative approach) so we turned to redo, which lets you write build scripts in the language of your choice.
One thing we needed almost immediately was more powerful pattern matching rules than make's % expansion. No problem: invent a pattern syntax and a special mode where every .do script simply announces what patterns it can handle. Collect patterns, iteratively match against what's actually in the filesystem, and then you've got the list of target files you can build. (This already differs from make, which wants you to either specify the targets explicitly up front as "goals," or enumerate their dependencies via a $(shell ...) expansion and then string transform them into a list of targets which are ALSO matched by some pattern rule somewhere...okay you get it, it's make, it's really disgusting.)
Another thing we needed was to say, here's a list of target files that appear in the dependency graph, give me them in topologically sorted order. This allowed us to "compact" datasets as they became fragmented, without disturbing things downstream from them in the dependency graph. Again, this was not difficult with redo once we had some basic infrastructure.
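Topological order, at least, doesn't need anything exotic: the standard tsort(1) utility takes "prerequisite target" pairs and prints the nodes in dependency order. The file names below are invented stand-ins for pipeline stages:

```shell
# Each input line says "left must be built before right".
tsort <<'EOF'
feeds.xml pages.txt
pages.txt corpus.db
corpus.db model.bin
EOF
# For this linear chain the only valid order is
# feeds.xml, pages.txt, corpus.db, model.bin.
```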
Now, was all of this maintainable, or was it just kind of insane? I think in the end it ended up somewhat insane, and most importantly, it was an unfamiliar kind of insane. The insanity that you encounter in traditional Makefiles is at least well understood. And treatable.
With redo, you can do almost anything with your build. You can sail the seven seas of your dependency graph. It's awesome. It's also terrifying, because there is very little to guide you, and you may very well be in uncharted waters. But give it a shot anyway. YMMV.
[1] https://github.com/apenwarr/redo
Inspired somewhat by redo, I wrote "credo" (https://github.com/catenate/credo) as a set of small command-line build tools, so build description files are in the shell language rather than a standalone DSL.
(I don't like how makefiles have so many features that reimplement what you can do in the shell. I also don't care for big languages with build-tool DSLs--though you could say credo is a build-tool DSL for the shell, like git is a version-control DSL for the shell. With only language directives, no constructs.)
I wrote it in the Inferno shell to take advantage of some nice OS features and its cleaner shell language. One of these days I should port it to bash, so other people might use it.
I couldn't agree more here. After reading the source code of the Twitter CSS Bootstrap makefile a few years ago I got inspired and wrote my own version of what they had... I was blown away by the gains it gave me. Of note, I bound make to cmd+b in Sublime Text, and I just compile when I see fit.
I've now tweaked my makefiles so they are almost unrecognisable from the Twitter ones: they can run PHP and JS unit tests, fire up PhantomJS to test individual modules, and release minified for production, or unminified for debugging purposes. I can't stress how useful it is being able to just add a line and get such powerful support.
I haven't had time to add git hooks yet, but that's the next stage. I plan to set the hooks to run tests and clamp down on poor-quality code (I work with interns quite often... sometimes I cry for just average-quality code coming in).
For a story of the productivity gains I've had: I moved the whole company CSS into a CSS preprocessor and cleaned up all the existing structures to fit the makefile release procedure. It paid off a thousandfold when a rebranding came along: I had everything done in two days. I was blown away by that alone; I've been involved in so many rebrands over the years that go on for months... not hours.
Do it. If you're a bit unsure where to start, check the Twitter Bootstrap build file[0] and muck around with what they have done.
[0] https://github.com/twbs/bootstrap/commit/0d33455ef486d0cf06c...
It seems most of the twitter-style web developer community who happily replaced `make` with `rake` when developing for Rails simply followed that with `grunt` when they moved to JavaScript.
While I'm a big fan of the Unix Philosophy, I've found gulp to be incredibly satisfying as a build tool. I cd into my project directory and run gulp, then all my SCSS/CS gets compiled, watched, and re-compiled on change, with LiveReload pushing style changes straight to the browser; and whenever I make a change to my server (Node.js) code, it gets compiled and run against my tests. Then when I'm ready to push to production, I run gulp build and then push it up :) Oh, and my whole gulpfile is <100 lines, a fair amount of which is whitespace/stylistic. It's incredibly readable and flexible: composing complex supertasks is easy.
I don't have anything against Make, I've never used it; judging by the code samples in the article, however, and contrasting that with my gulpfile (which, it's worth noting, didn't require me to learn a new language/DSL, just a dead-simple API), I feel much more empowered by gulp than make.
Somewhat relevant: I've also found gulp much easier to use and maintain than Grunt.
I like Gulp a lot, but I'd hardly say it's perfect. It's weird; it doesn't really make sense that everything is a stream. For example, why is watching something a stream? With that being said, it's miles above Grunt.
The thing I like about Gulp is that all it needs is NPM to install all its stuff. With make you have to worry about different system libraries, etc. Also, with gulp I can leverage other node-based libraries, so for one project, I can get frontend guys set up with a dev server which automatically proxies API requests to a backend server and also watches and livereloads files (as they like), with three damn commands: npm install && bower install && gulp dev. Bam. Working dev server running on port 8000 that does all they need it to.
For simple builds with dependencies, I find makefiles hard to beat. The only thing that has me pulling out my hair every time is the whole tabs vs. spaces thing, especially when on a machine that I haven't pulled in my .vimrc yet.
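If you're on GNU make 3.82 or newer, the tab requirement is actually optional: the .RECIPEPREFIX special variable swaps in a different recipe prefix character, sidestepping the tabs-vs-spaces trap entirely:

```makefile
.RECIPEPREFIX := >
hello:
> @echo "no tabs were harmed"
```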
Count me in as a fan of makefiles too. Haven't done much with them for frontend stuff but will have to give it a shot. My biggest problem with make is that, kind of like javascript, it's been around for so long it's really hard to find good information on it. At least the core functionality really hasn't changed all that much so even very old information is still quite relevant.
I use make together with a markdown compiler, and the m4 preprocessor, to keep devdocs up to date, in one huge document, where everything is included, and the various sections as stand-alone docs. The markdown version of the section files is almost uncluttered from m4 and html. I link from any external doc to any other via links in the toc in the main doc index.html, to keep everything as simple as possible. It's sweet.
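A sketch of the kind of docs pipeline described above; the file names and macro file are invented, and "markdown" stands in for whatever markdown compiler is in use:

```makefile
SECTIONS := $(wildcard sections/*.md.m4)
HTML     := $(SECTIONS:.md.m4=.html)

# m4 expands the include macros, the markdown compiler renders HTML.
%.md: %.md.m4 macros.m4
	m4 macros.m4 $< > $@

%.html: %.md
	markdown $< > $@

index.html: all.md.m4 macros.m4
	m4 macros.m4 $< | markdown > $@

all: index.html $(HTML)
.PHONY: all
```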
I love make too, but still settled for Gulp (and previously Grunt). Writing a Makefile just isn't for the faint of heart and can be very frustrating for front-end folks.
It's a trade-off between simplicity and convenience.
I think a lot of FE devs aren't fully aware of npm scripts https://www.npmjs.org/doc/misc/npm-scripts.html - it's a handy way of calling commands just like you would in make. Quite a lot of the tasks that people use Grunt/Gulp for could be executed with it.
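For those unfamiliar: npm scripts are just named shell commands in package.json, run with npm run <name>. A sketch with illustrative tool names:

```json
{
  "scripts": {
    "build": "sass src/style.scss dist/style.css",
    "watch": "sass --watch src/style.scss dist/style.css",
    "test": "mocha test/"
  }
}
```

npm run build, npm test, and so on then work for anyone who has run npm install, with no task runner in sight.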
I've been using make for front-end builds for a while.
Two tools I've been finding indispensable are jq and mustache templates. I made a PHP-based mustache/CLI implementation here: https://gist.github.com/Breton/7556390
This enables you to put a lot of your configuration in a JSON file, such as actual lists of files (which make is actually terrible at), have a decent way of getting JSON data out into make processes, and build files templated out of that JSON configuration, such as SSI files that set whether to load built JS or separate script tags (also generated out of the same JSON config).
So you get the declarative stuff in the declarative JSON format, and the stuff that make is good at (incremental builds) stays in make.
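The jq half of that split might look like this sketch, with an invented config.json holding the file list:

```makefile
# Pull the file list out of the JSON config at make time.
JS_FILES := $(shell jq -r '.scripts[]' config.json)

build/app.js: $(JS_FILES)
	cat $^ > $@
```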
OK, I get the simplicity of using make, but, yikes, creating your own "little programs" that parse AND EDIT (?!?!) code is a simple solution? IMO the author glossed over that part very smoothly without even bringing up the potential pitfalls (bugs in author's "tiny little programs", invalid html causing the parser to barf, etc etc etc). Really I think you'd have to be pretty stubborn to not see the value of Grunt when you decide that you need to implement an HTML parser as a substitute for if-statements. Lol geeze.
I last used it 7 years ago. It was nice, but extremely slow, and required more tweaking than it first seemed to. I don't know if either thing has improved in the last 7 years.
I've started using Make to build and run docker containers in the dev environment. It's not the perfect tool, nor are my skills with Make that great, but it sure gets the job done and lets docker commands be shared between devs. For example, this Makefile (still under development) for PostgreSQL: https://github.com/GlobAllomeTree/docker-postgresql/blob/mas...
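A sketch of that pattern, make as a shared wrapper around docker commands; the image and container names are invented, not taken from the linked repo:

```makefile
IMAGE := myorg/postgres-dev

build:
	docker build -t $(IMAGE) .

run: build
	docker run -d --name pg-dev -p 5432:5432 $(IMAGE)

logs:
	docker logs -f pg-dev

clean:
	docker rm -f pg-dev

.PHONY: build run logs clean
```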
You can't write a blog article about make while comparing it to a bunch of Node.js build tools and not mention the word Windows once. Make is good for you because you're not an open-source project built on a cross-platform development platform. All of your employees use Macs and your servers are Linux. For OS Node projects make is simply out of the question because it is janky on Windows and every makefile ever written is full of bashisms.
Certainly you can. In fact, the OP did exactly that and his article is at the top of HN.
My hope is that we can just carry on with our dev work and Windows will slowly fade away as a platform for developing anything but SharePoint intranets...
[Note: I don't have a neckbeard, but I do run Debian testing.]
You will save so much time developing node if you install Ubuntu with virtual box and vagrant. All are free, and then you can really participate in the os world without having to hack everything to work on windows.