I like to set up my node apps such that no matter what the build setup is, no matter what the dependencies are, you can execute `npm run dev` to get started.
Inside package.json, this will look something like:
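The snippet from the original comment didn't survive here, but a minimal sketch of the scripts entry being described might look like this (the `make dev` target is an assumption; swap in grunt or gulp or whatever your build uses):

```json
{
  "scripts": {
    "dev": "npm install && make dev"
  }
}
```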
The cool part about putting this in package.json is that it will install first so you get all the dependencies and devDependencies, and then npm sets up the PATH so that you can use all the binaries from all the dependencies you have installed. So you could replace `make` with `grunt` or `gulp` or whatever. And the `npm install` step only makes external requests if you're missing dependencies; when you already have everything you need, it quickly exits.
Then getting new developers up and running is the same single step, even if the build setup changes. In addition, if you pull new code and the dependencies change, you don't have to remember to run `npm install`, because you're running it every time the server starts.
On a project I'm working on, I recently converted a ~260-line Rakefile into ~150 lines of gulp ... and our build is now almost instantaneous (it took a few seconds with Ruby, mainly because we had to shell out to things like the less compiler). Oh, and we were able to write our gulpfile.js in just a few hours after hearing about gulp for the first time.
I never went the Grunt route because I just can't get used to programming with massive, multi-tiered object literals. If you've ever seen a Gruntfile, you know what I mean. It's all configuration, very little real code.
With gulp, code trumps configuration. And the result is incredibly concise and fast. I would definitely encourage anyone who has been looking for a decent build system for JavaScript to give gulp some serious consideration. We were pleasantly surprised with the community support as well. There is already a wide array of community plugins for gulp that support many common use cases.
And again I'll remind people what happened in Java land:
ANT -> Ivy/Maven -> Gradle
I'll state it clearly: Grunt. is. Ant. It's a mess to follow a build script and a ritual to make it work in a real-life project.
I expected a tool like gulp to come sweeping in and it did, I'm extremely happy about that and started migrating away from the horrible tooling that is Grunt.
I never understood how people can tolerate Grunt. Long live gulp and common sense!
I agree with you that "Grunt is ant". But when I look at that code example it seems like "Gulp is ant", too. It's just less verbose. I think that Lineman is the Maven equivalent in the JS world.
The one thing that keeps me from using browserify or webpack or the likes, is dev time debugging. It's not very useful to find that line 13729 of script.js threw an exception.
Source maps may help, but many browsers don't support them, and I want to be able to debug everywhere. Plus, when the browserified js actually came from Coffeescript or TypeScript or the likes, I already have source maps in place. Can browserify source map my source maps?
Is there a solution to this? How do browserify fans do this?
I guess what I'd like is for browserify to have a mode that removes the require()s from my .js files and generates a bunch of script tags in the right order.
Well, source maps are usually the solution for me. And yes, browserify supports source-mapping all the way back to coffeescript files using browserify-middleware. In the off case that I need to debug something in a browser that doesn't support source maps, you can turn minifying off and usually it is pretty easy to recognise the code you're inspecting, even if it comes from coffeescript. I've never had a case where I've been completely out of luck (usually this case only happens in IE).
Unable to find the link/name at the moment, but there is a method (it isn't vanilla source maps) for this which basically wraps the modules in self-executing functions, so they show up in Firefox and Chrome as distinct files. It was related to source maps, but not quite the same thing.
However, I have found Chrome and Firefox's support to be buggy.
I've been using make myself. The main barrier I keep running into is that it's actually quite challenging to:
1. Use make/bash to work with things like JSON, mustache, images, markdown, less, sass, uglifyjs, etc. etc..
2. Do so in a way that is portable to even other unixish machines.
3. Why doesn't make provide an easy way to input a BIG LIST of files into a command? The choices I'm aware of are to put them all on one line, work out some wildcard (which doesn't work on arbitrary lists of files you need in a particular order), or use backslash-escaped line endings. Yuck!
nodejs isn't available in the Debian stable package repo. The available mustache command-line tools are pathetically bad at this task (I had to write my own). I can make it work beautifully on my machine, but as soon as it hits my co-workers' machines, the build breaks because they haven't installed pandoc, or ripmime, or whatever other utility I had to use to get things done.
So, I don't know, maybe I'm doing things wrong. But I haven't got this to work particularly well yet.
Why even use Make? I just use shell scripts for automation.
The only case where you really need the incremental behavior of Make is C/C++ builds (and arguably it's increasingly inappropriate for this domain as well). For all other kinds of automation I just use shell scripts, since Make is mostly a horribly reinvented shell script dialect.
Well, to be fair, make and browserify do completely different jobs. You can't substitute browserify for make, nor vice versa.
But to be honest, if you see learning new things as wasting your attention on new fads, I don't think this stuff is for you. I really like trying new things that people have made and seeing what they can do. If that feels like a chore/hardship to you, you absolutely should just keep using make.
This is a tangential question, but how do front-end people feel about the constant change in the field?
I worked in the front-end and followed the trends for years, and I have found the changes difficult to follow. In 1997 the rage was VB, and lots of cottage companies set up shop advertising custom ActiveX widgets; on the web one had to learn ColdFusion and HTML/CSS. In the early 2000s VB6 was retired in favor of .NET, and a painful migration/learning curve followed. Meanwhile, PHP was gaining traction, so as a front-end person one also had to start learning the LAMP stack in addition to ASP, plus CSS hacks to get different browsers to render mocks.

Then around 2005-ish, AJAX/Web 2.0 started gaining traction and one suddenly had to learn the burgeoning frameworks of the time: jQuery/MooTools/Prototype/Dojo/YUI/Sencha (at the time, no one knew which framework was going to win; I spent a lot of time on Dojo before moving to jQuery, which gained the most traction). At the same time, web sockets still weren't secure enough, so there was also a lot of demand for Flex/Flash/Silverlight. Then around 2008-2009, when HTML5 started becoming more popular, Flex/Silverlight became obsolete; JS mobile frameworks such as PhoneGap and jQuery Mobile grew in favor, but in 2010-2011 they fell out of favor due to "responsive design" frameworks such as Bootstrap. Not to mention native mobile stacks such as iOS and Android.

In addition, around the same time, next-gen JS MVC frameworks built on top of jQuery popped up, such as Backbone.js, AngularJS and Ember.js, and it's not certain who is going to win out this time in the year of 2014. On top of those, there are now JS AMD loaders (Require.js) and build/integration tools such as Grunt that one needs to set up for a project, which it seems may also be falling out of favor. Finally, new video/sound/web-socket standards revolving around HTML5 are demanding new learning bandwidth.
I'm frankly overwhelmed by learning and being exposed to new technologies. The physical draining feeling of learning new keywords to fulfill the same urges is as if I have watched 15 years of porn, following from the grainy days of Jenna Jameson on VHS to the heady days of Internet dial-up gonzo porn of the early 2000's that really explored anal (Gauge, Taylor Rain) to the streaming flash videos of Web 2.0 (Sasha Grey) to the now completely splintered and social-mediafied porno-world with all the mind-numbing categories under the sun (reality, high-art, webcam etc). I'm simply drained and spent.
There certainly have been changes in the back-end field, from Java applets to Spring and Struts to now Scala and Clojure on the JVM, or transitioning the scripting language from Perl to Python, and the adoption of Boost in C++. But I didn't have to re-learn old concepts, and the changes were incremental instead of revolutionary; the whole shift toward functional languages is not new, as you learned Haskell/Lisp in undergrad anyway. Whereas what I had learned as a 9-year-old on Turbo C doing DOS programming would still apply today, what I learned then for VB4 and HTML/FrontPage is now completely useless.
I'm scared for my brain as I get older, as I may not have the time or the energy to devote every year to relearning all of this new tech. I'm wondering, for people above the age of 30: how do you deal with it?
I agree, I can't keep up. I just finished learning backbone.js and now I've found out on HN that it's old news, and I should use ember.js; cross that, it has opinions, I should use Meteor; no, AngularJS; no, Tower.js (on node.js). And for HTML templates I need handlebars, no, mustache, wait, doT.js is better. Hang on, why do I need an HTML parser inside the browser? Isn't that what the browser is for? So no HTML templates? OK, DOM snippets, fine. Web Components, you say? W3C are in the game too? You mean write REGULAR JavaScript like the Google guys? Yuck. Oh, I should just write it with CoffeeScript and it will look OK; not Coffee? Coco? LiveScript? Dart? GWT? OK, let me just go back to Ruby on Rails; oh, it doesn't scale? Grails? Groovy? Roo? Too "Springy"? OK, what about node.js? Doesn't scale either?? But I can write client-side, server-side and mongodb-side code in the same language? (But does it have to be JavaScript?) OK, what about PHP? You say it's not really thread safe? They lie?? OK, let me go back to server coding. It's still Java, right? No? Lisp? Oh, it's called Clojure? Well, it has a Bridge / protocol buffers / thrift implementation, so we can be language agnostic and support our Haskell developers. Or just go with Scala/Lift/Play, it's the BEST framework (Foursquare use it, so it has to be good). Of course we won't do SOAP and will use only JSON RESTful services, cause SOAP is only for banks and Walmart, and god forbid we use a SQL database, it will never scale.

I've had it, I'm going to outsource this project... they will probably use a wordpress template and copy-paste jQuery to get me the same exact result, without the headache and at half, no, quarter the price.
> I'm frankly overwhelmed by learning and being exposed to new technologies. The physical draining feeling of learning new keywords to fulfill the same urges is as if I have watched 15 years of porn, following from the grainy days of Jenna Jameson on VHS to the heady days of Internet dial-up gonzo porn of the early 2000's that really explored anal (Gauge, Taylor Rain) to the streaming flash videos of Web 2.0 (Sasha Grey) to the now completely splintered and social-mediafied porno-world with all the mind-numbing categories under the sun (reality, high-art, webcam etc). I'm simply drained and spent.
It drives me nuts. I spend probably 1/5 of my time doing front-end (but have been around through all of the generations you mention), and after re-org after re-org I've finally settled on a build system, with a Makefile and browserify to package things up, that isn't an enormous monstrosity of 1000 different node packages and a bajillion-step build process.
Frankly, I think that a large part of the problems in front-end have to do with how hard it is to write maintainable JS with a proper separation of responsibilities, due to the way JS files are loaded and have no coherent module system (on the front-end, that is). Because there is no standard module system (and yes, I know about CommonJS and AMD loaders, but both have issues), to use a given component you oftentimes have to adopt an entire philosophy of package management that can lock you out of other packaging philosophies. In the end, we have millions of front-end programmers saying, "eh, it seems like too much work to integrate this packaging philosophy, I'll just write my own duplicate copy/library." So basically projects silo themselves off and share little code until someone decides on yet another packaging philosophy (see: http://xkcd.com/927/).
People love to write build system after build system, in every field, a fetish I've never quite understood. Makefiles build some of the most widely used/complicated packages out there.
And as a final note, I'm really excited about emscripten in allowing front-end developers to move away from designing abstractions around JS/DOM in such a way that eventually we can stop relying on JS and rely on more the same primitives we use everywhere else in programming.
It's all just marginal convergence towards "best", and in real practice, most of this "progress" can and should be ignored. For every tool, wait it out until it's been around and still in active use/development/maintenance for at least 5 full years.
But always stay playing. Always try out the new things, because some of them may just scratch a burning itch.
Fear not age, because if you've been around long enough, and are still actively learning, all this new stuff starts looking very much like mere variations of old things.
A couple of years ago, when Google+ was the new kid on the block, I made a tiny userscript that hooked into their DOM and cleaned the UI up a bit. You'd think this was an easy task, and you'd be right, except that maintaining the extension was quite a pain in the rear. Google kept changing the classes and IDs, and moved the DOM around so frequently (sometimes within hours of the previous change) that my extension was constantly broken, and all my time was spent tweaking my code to keep pace with the changes propagating from an entire team of Googlers and their automated commit bots. It wasn't long before I gave up on the effort.
Following front-end trends today feels exactly like that experience; there's a whole host of prolific authors, even teams, coming up with new approaches for almost every nut and bolt in the stack. I think for the time-constrained it's best to wait for the wheat to rise above the chaff, even if it means falling behind the curve a bit.
There are a few things going on here that combine to cause this mess.
First, task runners, like templating systems and module bundlers, are easy to write so there are lots of them. Grunt in particular doesn't bring anything to the table that bash scripts don't.
Second, most open-source projects don't make their value prop clear (I learned this the hard way first-hand and I'm still dealing with it) and most people don't have a good rubric to evaluate technologies so they fall back to crappy ones like gzipped size, number of dependencies, or the twitter account of who wrote it. Increasing the level of understanding of performance and how system complexity evolves over time is an important next step for the community to take.
For example, I think the excitement around Gulp is legit, because the tasks are composed as in-memory streams, which is a scalable way to build a performant system. Browserify not so much, since it doesn't bring anything new to the table except maybe that it's so damned easy to use ("philosophy" does not count as bringing something to the table). Webpack, on the other hand, is a whole different story, since it accepts that static resource packaging (not just single-file JS modularization) is a problem that needs to be tackled holistically and necessitates a certain level of complexity.
I named specific projects not because I have any vested interest in them (I don't really use Gulp for anything) but because I wanted to show concrete examples of how to evaluate technologies on real merit.
Finally, the web frontend community has a huge problem with NIH (not-invented-here) syndrome which is encouraged by npm. For example, there are lots of copycat data binding systems that claim to be "lightweight" or "simple". They're usually written by people that don't know about all of the important edge cases which necessitate certain design decisions or library size. It goes the other way too -- a lot of people are building monolithic app frameworks without doing due diligence on existing systems to see if they can be reused.
If we can slow down and try to respect what others have done and acknowledge what we may not know, I think we can fix this problem.
I'm 33 and a frontender. To be honest, it doesn't bother me too much. I think there is a core set of skills that sees you through all of the change. Things like: knowing how to work well in teams, working well with graphic designers, experience with how sites work in terms of UX, web service integration, a good understanding of backend structures and HTTP, estimating on projects, dealing with clients, dealing with management ... these are the tricky things that make good developers great to have on projects, I think. None of the new tooling, workflow and languages that come around is rocket science, and you can get up to speed on something in a few hours, especially if you have knowledge and experience of what came before and the problems the new tools are trying to solve. I still enjoy learning new things, and I don't think we can expect the rate of change to slow down - if anything it may speed up. It's a young industry; nobody knows the right way to do things yet, let alone what the "end game" state of interactive information delivery to humans will look like!
As I've gotten older, I've noticed I've become more pessimistic about changes in the field. Occasionally a really good idea comes along and sticks. A lot of the time, though, it feels like a new hit framework is cooked up every week, and experience has taught me that this week's hip framework can quickly turn into last year's boring support nightmare.
In general I think we're heading towards better things... you just have to watch out for the warts along the way.
> This is a tangential question, but how do front-end people feel about the constant change in the field?
I deliberately hang back on investing time in something unless it's immediately, drastically simpler than what's there now.
- My 47-line Gruntfile became a 23-line gulpfile, and I understood it better, so I learnt gulp.
- I don't see any huge advantage in using browserify, just syntactic difference, so I'm sticking to RequireJS right now.
- After reading about ractive and how simple it was (have an object, have a mustache template, you have bindings) I started using it in place of Angular.
Full-stack dev that came up through the front-end ranks here - I'm in my early 30s and have been doing this in some form or another for 17 years. I started with Perl and C CGI scripts, worked with Java Swing (on the desktop) for a while, had a brief foray into MFC, did a whole bunch of PHP in college, switched to Django/JQuery while working on my startup, and now use a whole bunch of Google-proprietary techniques along with the native browser APIs.
I've found that the best way to stay sane is to ignore the hype. After having worked with a dozen or so technologies, I can say that knowing your preferred toolkit well matters a whole lot more than which toolkit you know. Nowadays I usually tend to prefer whatever is native to the platform (vanilla JS for web, Java for Android, Objective C for iPhone), because it will perform better, there are fewer opportunities for bugs to creep in, and you can access more of the platform functionality without waiting for frameworks to catch up.
It was actually uTorrent (remember that?) that caused a major shift in my thinking: before that came out I was all like "Yeah, frameworks! Developer productivity! I want to squeeze every inch of convenience out of my computer!", but then here was this program that was a native Win32 app, no MFC or Qt or Python or anything, and it blew the pants off the competition because it didn't use a framework. I didn't want to deal with Win32 at the time, but Web 2.0 was just getting started, and within a couple years I was doing a startup with Javascript games, and I found that if I wanted acceptable framerates in a game I couldn't use JQuery and had to learn the native browser APIs, and the difference between them was incredibly stark (this was pre-V8, when no browser shipped with a JIT, and I was getting about 2 fps with JQuery and about 15 with native browser APIs). And once I'd used the native APIs for a bit, I found they weren't actually that bad; they were more verbose and less consistent, but not in a way that seriously slowed me down.
I think it does take a certain amount of security and confidence in one's own abilities to do this, because it definitely makes you uncool. I've been in Hacker News threads where people are like "No. What the hell are you smoking?" when I suggest that you might not need a JS framework. But the folks who matter to me respect my abilities enough to pay me generously for them, and not staying up-to-date with the latest and greatest gives me time to cross-train in other fields like compilers, machine-learning, data analysis, scalability, UX design, management, and other skills that I've frankly found significantly more useful than yet another MVC framework. If I need to use a framework I'll learn it on the fly; I've done that for several projects, and (after I learned my first half-dozen MVC & web frameworks) it's never taken me more than a week or two to become proficient in another one.
I've always enjoyed the change, and the re-start of standardization efforts reminds me of the fun of the bad old days, without the massive browser incompatibility. For me the transitions weren't about being "forced" to do anything, but rather a continuing quest to find something that sucks less. I've been on the w3c/mozilla bandwagon since '98, so I've avoided 90% of the plugin thrashing you mention. The DOM libraries are different interfaces over the same underlying API, so they work the same. As for the MVC libraries, I've been exploring the space since 2008 (I was writing docs to release my version of the Backbone library when Jeremy released Backbone, and his code was better), so I don't see it as new and upcoming. Having a build seemed obvious when I started writing 10k+ sloc apps, since I'm not going to put it all in one source file and making 30 requests for js is terrible; I had a rake script I copy/pasted around for years before the node build systems showed up. AMD always seemed like a solution in search of a problem to me, and I just concatenate everything.
For what it's worth, we're about to go through another generational shift in frontend tech. There are a few major developments in the pipeline: ES6 generators and Web Components. Generators allow for different async flow control patterns [1] which greatly changes the feel when writing JS. Component systems (Ember, Angular, Polymer, React) offer significantly improved state control and huge code savings. If you aren't already using one, you will be in the near future but it's still early enough that it's unclear which will come out on top. There's a set of emerging standards work around enabling self-contained Web Components (shadow DOM, template tag, scoped CSS) but these don't dictate how you structure your code so there's still room for conflict.
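As a rough sketch of what generator-driven flow control looks like (this is the idea behind libraries like co, not any particular library's API; `run` is a made-up helper):

```javascript
// run() drives a generator: each yielded promise suspends the generator
// until the promise settles, so asynchronous code reads top-to-bottom.
function run(genFn) {
  return new Promise((resolve, reject) => {
    const gen = genFn();
    function step(method, arg) {
      let next;
      try {
        next = gen[method](arg); // resume the generator
      } catch (err) {
        return reject(err); // the generator body threw
      }
      if (next.done) return resolve(next.value);
      Promise.resolve(next.value).then(
        value => step('next', value), // feed the resolved value back in
        err => step('throw', err)     // or throw the rejection into it
      );
    }
    step('next');
  });
}

// Usage: asynchronous steps, written as if they were synchronous.
run(function* () {
  const a = yield Promise.resolve(1);
  const b = yield Promise.resolve(a + 1);
  return a + b;
}).then(sum => console.log(sum)); // logs 3
```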
I am a full stack developer and spend a fair bit of time on the front end. Like you I feel a lot of this work will be made obsolete very soon but I am resigned to the fact because it has been going on like this for years.
I find the best strategy is to wait for wider adoption and hedge your bets. Gulp looks great but depending on the amount of time you can spend on it, it's best to wait for the ecosystem to mature (more plugins, more framework support, migration of existing Grunt plugins etc). I would give it another year or so because community migration can take time (backward compatibility, dependencies, endowment effect, etc).
The other thing that works for me is to adopt frameworks with a lower learning curve even it requires more manual plumbing. Plumbing is cheap and you can always refactor. Backbone JS was easy and I am looking forward to Riot JS because it has no learning curve as long as you have a good familiarity with writing robust JS.
What's the point of developing stuff in the first place? To make the world a better place, to learn things, to grow. Will that be accomplished so much better by switching from Perl to ASP to Ruby on Rails to Node.js to Go?
Look, there will always be a new tool|library|framework|language|paradigm that is cooler and better. If you pick the coolest language available today, you will be "so 2014" in a few years.
We need to be more pragmatic and use the tools that make our mission easier. Sometimes that mandates a complete rewrite in a new language|framework, but that is rare.
What the article calls "revolutionizing" is really not that big a deal. Maybe it makes things a bit easier, faster, prettier -- but it won't make or break your company.
The thinking might be different for behind-the-scenes basic tools versus more UI-level libraries.
For this OP, the move from grunt to gulp for instance is a simplification of the build process. It requires some refactoring of the build process, but shouldn't impact the application code, the two libraries can coexist (not cooperate I guess, but at least you can set up both and use one or the other as you want), and can be tested with simple use cases first and expanded to the whole app afterwards.
The barrier to entry is low, it doesn't require much commitment and can be done on the side. I'd try to keep up as much as possible with new available tools, as long as they match the above criteria.
For libraries and frameworks closer to the UI and application structure, I have the feeling they generally need more time to pick up: learning the strengths and weaknesses, dealing with the quirks and bugs. Even with reasonable documentation, most of them seem to need at least a few dives into the source code to really get how they work and what they expect to be doing.
Trying Ember or Angular on a somewhat realish project takes enough time to make it a chore to try a few alternatives; I'd guess most devs would want to wait months or years to see which libraries die in infancy. I think for this space, trying more than one or two picks here and there, a few months apart, is just insanity, unless you really enjoy it or it's part of your job. As time goes by, the timespan I wait to see if something sticks gets longer. I remember a Jeff Atwood post [1] about how Ruby is now mature enough to be taken seriously.
I think the same thought process can be applied to big enough frameworks.
Personally I'd tend to go for the libraries that are simpler or with the least 'magic' to avoid getting in situation where I invested weeks doing something and there's bug I don't know where it comes from and need to spend days on it because of the amount of abstraction going on. That's a way to mitigate risks when trying out random libraries.
Sorry I'm late, but I thought I should weigh in. This problem used to frustrate me but I realised that it just isn't worth worrying about. I wrote a big whiney article about it a year and a half ago: http://danielhough.co.uk/blog/i-cant-keep-up/
Interesting question. I feel that if you are good in a particular area (say PHP / CSS / JS / HTML), then you will find that each new component/framework actually simplifies your life or makes it easier. The trouble is when you are more of a heavy JS developer and are typically labelled as a web developer; then you feel like you have to know about SASS (CSS), the HTML5 video standard, Scala and everything else related to the web field!
I think that as web development grows and matures, it will finally have multiple experts who need to work together and with each expert having no problem in catching up or using the latest paradigm in their area.
I would recommend you specialize / try and learn whatever you are good at.
One tiny benefit of Gulp - Grunt wasn't packaged on Debian because of JSHint which relied on JSLint which had the "The Software shall be used for Good, not Evil." licence clause http://en.wikipedia.org/wiki/JSLint
Gulp looks very interesting. Browserify vs. RequireJS seems more complex to me, though - it isn't difficult to use RequireJS in the CommonJS pattern. When I tried Browserify it was incredibly slow to build - but that might have been me setting it up wrong in Grunt...
I've been using a different strategy lately: no explicit build process at all, just use middleware (like browserify) to automatically compile/compress/concatenate, and an HTTP cache in production to store the results (either middleware, or external such as nginx/Varnish/CloudFlare/whatever).
This helps with the principle of minimizing divergence between development and production environments.
It's worked well for me for small projects, but maybe there are issues scaling up? Has anyone else tried this approach?
One interesting possibility is to make use of the Gulp ecosystem. You could imagine a "gulp-middleware" that lets you use any Gulp-compatible module on the fly.
The main issue I have with Grunt and other frameworks/build tools is plugin authoring. Can't we just put this stuff in Make files and be done with it? I know, I know, Windows. But it seems like an awful lot of time is wasted building plugins that are just dumb wrappers for CLI options. Or worse, the tool's options get buried deep within the framework, ramping up the learning curve. As a recent example, I stopped using the Grails' resource plugin, and just compiled my static assets outside of Grails. The workflow became simpler because they didn't know about each other--I could use each tool to its full potential instead of relying on often incomplete plugins.
One fact that has not been mentioned yet is that gulp plugins (i.e. simple object streams) are very easy to unit test. (With grunt there still is not an easy way to unit test plugins.) Also by way of their nature as streams they tend to be small and composable, again making them easier to maintain.
There have been numerous posts stating that Gulp is faster than Grunt. However, the "faster" argument needs to be qualified with "under these conditions ____". Even better: here's my configuration; here's the processing time of x and of y.
With JavaScript builds in particular, it's important to separate when the task running is used. At least in my workflow, I have two distinct times:
1. Development time
2. Build time
"Development time" is when the task runner is used during development, such as running livereload to update the app with saved changes to source files. Speed during development time is very important.
"Build time" is the time used when building a production package. With build time, if one task runner is 5 sec. vs. 10 sec. it makes little difference.
Is one task runner always faster?
What are the differences in performance?
And are these development time or build time differences?
I'm all for new tooling, but let's have measurements in place so that people can make intelligent decisions about when one vs the other is right/wrong for a given project.
As for configuration complexity, it would be great to see a large production configuration of Grunt and Gulp.
I've used Grunt for most projects in the last year, and just tried Gulp for my newest.
Gulp seems snappier overall to me for tasks that watch a source directory with CoffeeScript or Sass files in it. Maybe it's grunt-watch vs gulp.watch().
I think Grunt touches the filesystem more than Gulp, which could potentially be a bottleneck. The creator of Gulp (IIRC) made a slideshow[0].
Here's an example Gruntfile[1] generated with Yeoman and generator-angular. And a Gulpfile[2] which does fewer things (no production build yet).
gulp is always going to be faster than grunt as long as grunt writes temporary files to disk. Using 3 plugins processing 20 files as an example: grunt will do 60 reads and 60 writes, while gulp will do 20 reads and 20 writes.
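The read/write arithmetic above can be sketched with plain functions standing in for plugins (an illustration of the in-memory model, not gulp's real API): each "plugin" transforms an in-memory file object, so a chain of any length still costs one read and one write per file.

```javascript
// Three toy "plugins", each a pure transform over an in-memory file object.
const plugins = [
  (file) => ({ ...file, contents: file.contents.trim() }),
  (file) => ({ ...file, contents: file.contents.toUpperCase() }),
  (file) => ({ ...file, path: file.path.replace(/\.txt$/, '.out') }),
];

function build(files) {
  let reads = 0, writes = 0;
  const out = files.map((f) => {
    reads++;                                          // one read per source file
    const result = plugins.reduce((file, plugin) => plugin(file), f);
    writes++;                                         // one write per output file
    return result;
  });
  return { out, reads, writes };
}

const { out, reads, writes } = build([{ path: 'a.txt', contents: ' hi ' }]);
console.log(reads, writes, out[0].contents);          // 1 1 HI
```

A temp-file model would instead pay one read and one write per plugin per file: with 3 plugins and 20 files, 60 of each, matching the figures in the comment.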
Grunt works just fine. Make is probably ok once you learn it. It is a shame that we as a community don't take time to create amazing tutorials for our older tools like we do with our newer ones.
I think the main issue ends up being that GitHub has created a double-edged sword in open-source dev land. You have people creating projects just so they can earn stars and thus resume fodder. This ends up creating packages with no maintainers and multiple projects that do the same thing. I mean, really, how many different string-parsing npm libraries do you need?
[+] [-] AndyKelley|12 years ago|reply
[+] [-] Raphael|12 years ago|reply
[+] [-] contrahax|12 years ago|reply
The code from the post is misleading/outdated.
[+] [-] mjackson|12 years ago|reply
[+] [-] jondot|12 years ago|reply
I expected a tool like gulp to come sweeping in and it did, I'm extremely happy about that and started migrating away from the horrible tooling that is Grunt.
I never understood how people can tolerate Grunt. Long live gulp and common sense!
[+] [-] tieTYT|12 years ago|reply
[+] [-] nailer|12 years ago|reply
[+] [-] brokenparser|12 years ago|reply
[+] [-] skrebbel|12 years ago|reply
Source maps may help, but many browsers don't support them, and I want to be able to debug everywhere. Plus, when the browserified js actually came from Coffeescript or TypeScript or the likes, I already have source maps in place. Can browserify source map my source maps?
Is there a solution to this? How do browserify fans do this?
I guess what I'd like is for browserify to have a mode that removes the require()s from my .js files and generates a bunch of script tags in the right order.
[+] [-] jonpacker|12 years ago|reply
[+] [-] indolering|12 years ago|reply
However, I have found Chrome and Firefox's support to be buggy.
[+] [-] mehulkar|12 years ago|reply
[1] http://voices.washingtonpost.com/capitalweathergang/2008/11/...
[+] [-] emilis_info|12 years ago|reply
Speed? Check. Simpler syntax? Check. No need to "fix" code that isn't broken? Check. No need to waste my attention on every new fad? Check.
[+] [-] TheZenPsycho|12 years ago|reply
1. Use make/bash to work with things like JSON, mustache, images, markdown, less, sass, uglifyjs, etc. etc..
2. Do so in a way that is portable to even other unixish machines.
3. Why doesn't make provide an easy way to input a BIG LIST of files into a command? The choices (that I'm aware of) are to put them all on one line, work out some wildcard (which doesn't work on arbitrary lists of files you need in a particular order), or have backslash-escaped line endings! Yuck!
Node.js isn't available in the Debian stable package repo. The available mustache command-line tools are pathetically bad at this task (I had to write my own). I can make it work beautifully on my machine, but as soon as it hits my co-workers' machines, the build breaks because they haven't installed pandoc, or ripmime, or whatever other utility I had to use to get things done.
So, I don't know, maybe I'm doing things wrong. But I haven't got this to work particularly well yet.
And uh.. windows. yep.
[+] [-] chubot|12 years ago|reply
The only case where you really need the incremental behavior of Make is C/C++ builds (and arguably it's increasingly inappropriate for this domain as well). For all other kinds of automation I just use shell scripts, since Make is mostly a horribly reinvented shell script dialect.
[+] [-] jonpacker|12 years ago|reply
But to be honest, if you see learning new things as wasting your attention on new fads, I don't think this stuff is for you. I really like trying new things that people have made and seeing what they can do. If that feels like a chore/hardship to you, you absolutely should just keep using make.
[+] [-] noname123|12 years ago|reply
I worked in the front-end and followed the trends for years, and I have found the changes difficult to follow. In 1997 the rage was VB, and lots of cottage companies sprang up advertising custom ActiveX widgets; on the web one had to learn ColdFusion and HTML/CSS. In the early 2000s, VB6 was retired in favor of .NET and a painful migration/learning curve followed. Meanwhile, PHP was gaining traction, so as a front-end person one had to also start learning the LAMP stack in addition to ASP, plus CSS hacks to get different browsers to render mocks.

Then around 2005ish, AJAX/Web 2.0 started gaining traction, and one suddenly had to learn the burgeoning frameworks of the time: jQuery/MooTools/Prototype/Dojo/YUI/Sencha (at the time, no one knew which framework was going to win; I spent a lot of time on Dojo before moving to jQuery, which started to gain the most traction). At the same time, web sockets still weren't secure enough, so there was also a lot of demand for Flex/Flash/Silverlight. Then around 2008-2009, when HTML5 started becoming more popular, Flex/Silverlight became obsolete; JS mobile frameworks such as PhoneGap and jQuery Mobile grew in favor, but later in 2010-2011 they fell out of favor due to "responsive design" frameworks such as Bootstrap. Not to mention the native mobile tech stacks such as iOS and Android.

In addition, around the same time, next-gen JS MVC frameworks built on top of jQuery popped up, such as Backbone.js, AngularJS, and Ember.js, and it's not certain who is going to win out this time in the year of 2014. On top of those, there are now JS AMD loaders (Require.js) and build/integration tools such as Grunt that one needs to set up for a project, which it seems may also be falling out of favor. Finally, new video/sound/web-socket standards revolving around HTML5 are demanding new learning bandwidth.
I'm frankly overwhelmed by learning and being exposed to new technologies. The physically draining feeling of learning new keywords to fulfill the same urges is as if I have watched 15 years of porn, following from the grainy days of Jenna Jameson on VHS to the heady days of Internet dial-up gonzo porn of the early 2000s that really explored anal (Gauge, Taylor Rain) to the streaming flash videos of Web 2.0 (Sasha Grey) to the now completely splintered and social-mediafied porno-world with all the mind-numbing categories under the sun (reality, high-art, webcam, etc). I'm simply drained and spent.
There certainly have been changes in the back-end field, from Java applets to Spring and Struts to now Scala and Clojure on the JVM, or transitioning the scripting language from Perl to Python, and the adoption of Boost in C++. But I didn't have to re-learn old concepts, and the changes were incremental instead of revolutionary; and the whole shift from imperative programming to functional languages is not new if you learned Haskell/Lisp in undergrad anyway. Whereas what I had learned as a 9-year-old on Turbo C doing DOS programming would still apply today, what I learned then for VB4 and HTML/FrontPage is now completely useless.
I'm scared for my brain as I get older, as I may not have the time nor the energy to devote myself every year to relearning all of this new tech. I'm wondering, for people who are above the age of 30: how do you deal with it?
[+] [-] Sheepshow|12 years ago|reply
I agree, I can't keep up, I just finished learning Backbone.js and now I've found out on HN that it's old news, and I should use Ember.js, cross that, it has opinions, I should use Meteor, no, AngularJS, no, Tower.js (on Node.js), and for HTML templates I need Handlebars, no Mustache, wait, doT.js is better, hang on, why do I need an HTML parser inside the browser? isn't that what the browser is for? so no HTML templates? ok, DOM snippets, fine, Web Components you say? W3C are in the game too? you mean write REGULAR JavaScript like the Google guys? yuck, oh, I should just write it with CoffeeScript and it will look ok, not Coffee? Coco? LiveScript? Dart? GWT? ok, let me just go back to Ruby on Rails, oh it doesn't scale? Grails? Groovy? Roo? too "Springy"? ok, what about Node.js? doesn't scale either?? but I can write client-side, server-side and MongoDB-side code in the same language? (but does it have to be JavaScript?) ok, what about PHP, you say it's not really thread-safe? they lie?? ok, let me go back to server coding, it's still Java right? no? Lisp? oh it's called Clojure? well, it has a bridge / protocol buffers / Thrift implementation so we can be language-agnostic, so we can support our Haskell developers. Or just go with Scala/Lift/Play, it's the BEST framework (Foursquare uses it, so it has to be good). of course we won't do SOAP and will use only JSON RESTful services cause SOAP is only for banks and Walmart, and god forbid we use a SQL database, it will never scale
I've had it, I'm going to outsource this project... they will probably use a WordPress template and copy-paste jQuery to get me the same exact result without the headache and at half, no, a quarter of the price
[+] [-] peter_l_downs|12 years ago|reply
[+] [-] newhouseb|12 years ago|reply
Frankly, I think that a large part of the problems in front-end development has to do with how hard it is to write maintainable JS with a proper separation of responsibilities, due to the way JS files are loaded and have no coherent module system (on the front-end, that is). Because there is no standard module system (and yes, I know about CommonJS and AMD loaders, but both have issues), to use a given component you often have to adopt an entire philosophy of package management that can lock you out of other packaging philosophies. In the end, we have millions of front-end programmers saying, "eh, it seems like too much work to integrate this packaging philosophy, I'll just write my own duplicate copy/library." So basically projects silo themselves off and share little code until someone decides on yet another packaging philosophy (see: http://xkcd.com/927/).
People love to write build system after build system, in every field, a fetish I've never quite understood. Makefiles build some of the most widely used/complicated packages out there.
And as a final note, I'm really excited about emscripten allowing front-end developers to move away from designing abstractions around JS/DOM, such that eventually we can stop relying on JS and rely more on the same primitives we use everywhere else in programming.
[+] [-] TheZenPsycho|12 years ago|reply
But always stay playing. Always try out the new things, because some of them may just scratch a burning itch.
Fear not age, because if you've been around long enough, and are still actively learning, all this new stuff starts looking very much like mere variations of old things.
[+] [-] takatin|12 years ago|reply
A couple of years ago, when Google+ was the new kid on the block, I made a tiny userscript that hooked into their DOM and cleaned the UI up a bit. You'd think this was an easy task, and you'd be right, except it was quite a pain in the rear to maintain the extension. Google kept changing the classes and IDs, and moved the DOM around so frequently (sometimes within hours of the previous change) that my extension was constantly broken, and all my time was spent tweaking my code to keep pace with the changes propagating from an entire team of Googlers and their automated commit bots. It wasn't long before I gave up on the effort.
Following front-end trends today feels exactly like that experience; there's a whole host of prolific authors, even teams, coming up with new approaches for almost every nut and bolt in the stack. I think for the time-constrained it's best to wait for the wheat to rise above the chaff, even if it means falling behind the curve a bit.
[+] [-] peterhunt|12 years ago|reply
First, task runners, like templating systems and module bundlers, are easy to write so there are lots of them. Grunt in particular doesn't bring anything to the table that bash scripts don't.
Second, most open-source projects don't make their value prop clear (I learned this the hard way first-hand and I'm still dealing with it) and most people don't have a good rubric to evaluate technologies so they fall back to crappy ones like gzipped size, number of dependencies, or the twitter account of who wrote it. Increasing the level of understanding of performance and how system complexity evolves over time is an important next step for the community to take.
For example, I think the excitement around Gulp is legit because the tasks are composed as in-memory streams, which is a scalable way to build a performant system. Browserify not so much, since it doesn't bring anything new to the table except maybe that it's so damned easy to use ("philosophy" does not count as bringing something to the table). Webpack, on the other hand, is a whole different story, since it accepts that static resource packaging (not just single-file JS modularization) is a problem that needs to be tackled holistically and necessitates a certain level of complexity.
I named specific projects not because I have any vested interest in them (I don't really use Gulp for anything) but because I wanted to show concrete examples of how to evaluate technologies on real merit.
Finally, the web frontend community has a huge problem with NIH (not-invented-here) syndrome which is encouraged by npm. For example, there are lots of copycat data binding systems that claim to be "lightweight" or "simple". They're usually written by people that don't know about all of the important edge cases which necessitate certain design decisions or library size. It goes the other way too -- a lot of people are building monolithic app frameworks without doing due diligence on existing systems to see if they can be reused.
If we can slow down and try to respect what others have done and acknowledge what we may not know, I think we can fix this problem.
[+] [-] Wintamute|12 years ago|reply
[+] [-] jamesu|12 years ago|reply
In general I think we're heading towards better things... you just have to watch out for the warts along the way.
[+] [-] nailer|12 years ago|reply
I deliberately hang back on investing time in something unless it's immediately, drastically simpler than what's there now.
- My 47-line Gruntfile became a 23-line gulpfile, and I understood it better, so I learnt gulp.
- I don't see any huge advantage in using browserify, just a syntactic difference, so I'm sticking with RequireJS right now.
- After reading about ractive and how simple it was (have an object, have a mustache template, you have bindings) I started using it in place of Angular.
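The "have an object, have a mustache template, you have bindings" idea can be sketched in a few lines (a toy interpolator for illustration, not Ractive's actual API):

```javascript
// Substitute {{keys}} in a mustache-style template from a data object;
// unknown keys render as empty strings.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in data ? String(data[key]) : ''
  );
}

const template = 'Hello, {{name}}!';
let view = render(template, { name: 'world' });   // "Hello, world!"
// "Binding" in the simplest sense: re-render when the data changes.
view = render(template, { name: 'gulp' });        // "Hello, gulp!"
```

Real libraries add observation and DOM patching on top, but the core contract is this small, which is what makes the object-plus-template model so approachable.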
[+] [-] nostrademons|12 years ago|reply
I've found that the best way to stay sane is to ignore the hype. After having worked with a dozen or so technologies, I can say that knowing your preferred toolkit well matters a whole lot more than which toolkit you know. Nowadays I usually tend to prefer whatever is native to the platform (vanilla JS for web, Java for Android, Objective C for iPhone), because it will perform better, there are fewer opportunities for bugs to creep in, and you can access more of the platform functionality without waiting for frameworks to catch up.
It was actually uTorrent (remember that?) that caused a major shift in my thinking: before that came out I was all like "Yeah, frameworks! Developer productivity! I want to squeeze every inch of convenience out of my computer!", but then here was this program that was a native Win32 app, no MFC or Qt or Python or anything, and it blew the pants off the competition because it didn't use a framework. I didn't want to deal with Win32 at the time, but Web 2.0 was just getting started, and within a couple of years I was doing a startup with JavaScript games, and I found that if I wanted acceptable framerates in a game I couldn't use jQuery and had to learn the native browser APIs, and the difference between them was incredibly stark (this was pre-V8, when no browser shipped with a JIT, and I was getting about 2 fps with jQuery and about 15 with native browser APIs). And once I'd used the native APIs for a bit, I found they weren't actually that bad; they were more verbose and less consistent, but not in a way that seriously slowed me down.
I think it does take a certain amount of security and confidence in one's own abilities to do this, because it definitely makes you uncool. I've been in Hacker News threads where people are like "No. What the hell are you smoking?" when I suggest that you might not need a JS framework. But the folks who matter to me respect my abilities enough to pay me generously for them, and not staying up-to-date with the latest and greatest gives me time to cross-train in other fields like compilers, machine-learning, data analysis, scalability, UX design, management, and other skills that I've frankly found significantly more useful than yet another MVC framework. If I need to use a framework I'll learn it on the fly; I've done that for several projects, and (after I learned my first half-dozen MVC & web frameworks) it's never taken me more than a week or two to become proficient in another one.
[+] [-] grayrest|12 years ago|reply
For what it's worth, we're about to go through another generational shift in front-end tech. There are a couple of major developments in the pipeline: ES6 generators and Web Components. Generators allow for different async flow-control patterns[1], which greatly changes the feel of writing JS. Component systems (Ember, Angular, Polymer, React) offer significantly improved state control and huge code savings. If you aren't already using one, you will be in the near future, but it's still early enough that it's unclear which will come out on top. There's a set of emerging standards work around enabling self-contained Web Components (shadow DOM, the template tag, scoped CSS), but these don't dictate how you structure your code, so there's still room for conflict.
[1] https://github.com/petkaantonov/bluebird/blob/master/API.md#...
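A rough sketch of the generator-based flow-control pattern mentioned above (a toy version of what libraries like co and Bluebird's coroutine support popularized; not their actual implementations):

```javascript
// Step a generator, resuming it each time a yielded promise settles,
// so asynchronous code reads top-to-bottom like synchronous code.
function run(genFn) {
  const gen = genFn();
  return new Promise((resolve, reject) => {
    function step(value) {
      const { value: yielded, done } = gen.next(value);
      if (done) return resolve(yielded);
      Promise.resolve(yielded).then(step, reject);
    }
    step();
  });
}

run(function* () {
  const a = yield Promise.resolve(1);      // "await" before await existed
  const b = yield Promise.resolve(a + 1);
  return a + b;
}).then((total) => console.log(total));    // 3
```

Each `yield` suspends the function until the promise settles, which is the change in "feel" the comment is pointing at: error handling and sequencing look like ordinary imperative code.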
[+] [-] aleem|12 years ago|reply
I find the best strategy is to wait for wider adoption and hedge your bets. Gulp looks great but depending on the amount of time you can spend on it, it's best to wait for the ecosystem to mature (more plugins, more framework support, migration of existing Grunt plugins etc). I would give it another year or so because community migration can take time (backward compatibility, dependencies, endowment effect, etc).
The other thing that works for me is to adopt frameworks with a lower learning curve, even if that requires more manual plumbing. Plumbing is cheap and you can always refactor. Backbone.js was easy, and I am looking forward to Riot.js because it has no learning curve as long as you have a good familiarity with writing robust JS.
[+] [-] exodust|12 years ago|reply
[+] [-] cdaven|12 years ago|reply
Look, there will always be a new tool|library|framework|language|paradigm that is cooler and better. If you pick the coolest language available today, you will be "so 2014" in a few years.
We need to be more pragmatic and use the tools that make our mission easier. Sometimes that mandates a complete rewrite in a new language|framework, but that is rare.
What the article calls "revolutionizing" is really not that big a deal. Maybe it makes things a bit easier, faster, prettier -- but it won't make or break your company.
[+] [-] hrktb|12 years ago|reply
For this OP, the move from grunt to gulp for instance is a simplification of the build process. It requires some refactoring of the build process, but shouldn't impact the application code, the two libraries can coexist (not cooperate I guess, but at least you can set up both and use one or the other as you want), and can be tested with simple use cases first and expanded to the whole app afterwards.
The barrier to entry is low, it doesn't require much commitment and can be done on the side. I'd try to keep up as much as possible with new available tools, as long as they match the above criteria.
For libraries and frameworks closer to the UI and application structure, I have the feeling they generally need more time to pick up: learning the strengths and weaknesses, dealing with the quirks and bugs. Even with reasonable documentation, most of them seem to need at least a few dives into the source code to really get how they work and what they expect to be doing.
Trying Ember or Angular on a somewhat realish project takes enough time to make it a chore to try a few alternatives; I'd guess most devs would want to wait months or years to see which libraries die in infancy. I think for this space, trying more than one or two picks here and there, a few months apart, is just insanity, except if you really enjoy it or it's part of your job. As time goes by, I feel the timespan I wait to see if something sticks grows longer. I remember a Jeff Atwood post[1] about how Ruby is now mature enough to be taken seriously.
I think the same thought process can be applied to big enough frameworks.
Personally I'd tend to go for the libraries that are simpler or with the least 'magic' to avoid getting in situation where I invested weeks doing something and there's bug I don't know where it comes from and need to spend days on it because of the amount of abstraction going on. That's a way to mitigate risks when trying out random libraries.
[1] http://www.codinghorror.com/blog/2013/03/why-ruby.html
[+] [-] basicallydan|12 years ago|reply
[+] [-] wowfat|12 years ago|reply
I think that as web development grows and matures, it will finally have multiple experts who need to work together, with each expert having no problem catching up with or using the latest paradigm in their area.
I would recommend you specialize / try and learn whatever you are good at.
[+] [-] gcb0|12 years ago|reply
sigh, javascript people,
[+] [-] Maxious|12 years ago|reply
[+] [-] untog|12 years ago|reply
[+] [-] tlrobinson|12 years ago|reply
This helps with the principle of minimizing divergence between development and production environments.
It's worked well for me for small projects, but maybe there are issues scaling up? Has anyone else tried this approach?
One interesting possibility is to make use of the Gulp ecosystem. You could imagine "gulp-middleware" that lets you use any Gulp-compatible module on the fly.
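A sketch of what such compile-on-request middleware might look like (hypothetical: `compileOnTheFly` and its transforms table are made up for illustration; the shape follows the Connect `(req, res, next)` convention):

```javascript
// Dev-server middleware that runs the same transforms the production
// build would, one requested file at a time, with a tiny cache.
function compileOnTheFly(transforms) {
  const cache = {};                              // avoid recompiling per request
  return function middleware(req, res, next) {
    const match = transforms[req.url];
    if (!match) return next();                   // not ours; pass along
    if (!(req.url in cache)) {
      cache[req.url] = match.steps.reduce((src, step) => step(src), match.source);
    }
    res.end(cache[req.url]);
  };
}

// Any source-to-source functions can stand in for build plugins.
const mw = compileOnTheFly({
  '/app.js': {
    source: 'let x = 1',
    steps: [(s) => s + ';', (s) => '"use strict";' + s],
  },
});
```

Because the dev server and the production build share the same transform functions, the development/production divergence the parent comment worries about shrinks to the caching layer.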
[+] [-] ahallock|12 years ago|reply
[+] [-] pfrrp|12 years ago|reply
Allows for AMD, CommonJS, and almost any other format around. With gulp and grunt plugins!
[+] [-] indolering|12 years ago|reply
[+] [-] namuol|12 years ago|reply
[+] [-] JanJanJan|12 years ago|reply
Also, there is already an effort going on towards finding a generic API for node task runners at https://github.com/node-task/spec/wiki
[+] [-] akbar501|12 years ago|reply
[+] [-] lowboy|12 years ago|reply
[0]: http://slid.es/contra/gulp
[1]: https://github.com/jjt/dramsy/blob/master/client/Gruntfile.j...
[2]: https://github.com/jjt/LUXTRUBUK/blob/master/gulpfile.js
[+] [-] tomByrer|12 years ago|reply
https://github.com/osteele/grunt-update
https://github.com/goodeggs/grunt-skippy
https://github.com/aioutecism/grunt-diff
https://github.com/sindresorhus/gulp-changed
[+] [-] contrahax|12 years ago|reply
[+] [-] parris|12 years ago|reply
[+] [-] cheapsteak|12 years ago|reply
Here's a rather ugly google cache'd version: http://robo.ghost.io/getting-started-with-gulp-2/
[+] [-] mrcactu5|12 years ago|reply
All I really get from this blog is that some influential guys approve of these two new JS libraries.
For someone who still builds website HTML, what are Grunt vs Gulp originally supposed to do? And why does Grunt do it better?
And a similar question for RequireJS / Browserify?