
The Controversial State of JavaScript Tooling

156 points | lolptdr | 10 years ago | ponyfoo.com

153 comments

[+] chipotle_coyote|10 years ago|reply
Something that's been concerning me about the current approach to web development in particular is the accumulation of unacknowledged -- and increasingly unmeasurable -- technical debt.

I'm not sure that's precisely the right phrase, but here's what I mean: we all (should) know about technical debt in our own projects, but we also know that every project accumulates technical debt. When we build our projects on top of other projects, our work now has the potential to be affected by the technical debt -- bugs, poor optimizations, whatever -- in the underlying projects.

Obviously, this has always been true to varying degrees. But we've reached a point where modern web applications are pulling in dozens of dependencies in both production and development toolchains, and increasingly those dependencies are themselves built on top of other dependencies.

I don't want to suggest we should be writing everything from scratch and constantly re-inventing wheels, but when even modest applications end up with hundreds of subdirectories in their local "node_modules" directory, it's hard not to wonder whether we're making things a little...fragile, even taking into account that many of those modules are related and form part of the same pseudo-modular project. Is this a completely ridiculous concern? (The answer may well be "yes," but I'd like to see a good argument why.)

[+] Touche|10 years ago|reply
More simply put, stability is not valued in the front-end world.

It's an interesting contrast with the enterprise world, where IT runs the show and Microsoft's dominance was established largely due to stability. IT is run by managers who can get fired if instability costs the company money.

In the front-end world managers can't keep up with the changing landscape and therefore cannot make stability based decisions (in terms of software stability). Instead, in my experience, they make decisions based on how easy it is to hire (as cheap as possible) developers.

So, as a proxy, popularity drives the front-end: developers who have the time to keep up with the latest trends (i.e. mostly young 20-somethings) are the ones who get hired.

[+] hassy|10 years ago|reply
> But we've reached a point where modern web applications are pulling in dozens of dependencies in both production and development toolchains, and increasingly those dependencies are themselves built on top of other dependencies.

Um, take ANY non-trivial application, written in any language, and then follow the chain of its dependencies all the way down to the kernel, and shriek in horror. JS is not unique here, the only unique part is perhaps how obvious npm makes it just how much stuff your code depends on.

[+] skewart|10 years ago|reply
As others have pointed out, the number of lines of code that the typical 2016 webapp depends on isn't all that different from other ecosystems. The dependency footprint just looks a lot bigger because it's broken up into lots and lots of little modules.

I'd argue this actually makes things a little less fragile, because - at least theoretically - it's easier to manage versions and do rollbacks of the various pieces if something goes wrong. You can surgically switch the version of a dependency without necessarily changing a lot of other functionality.

Of course, things aren't always that simple. And lots of little modules might effectively mean less integration testing and higher chances of bugs. So, maybe it's a toss-up.

[+] Silhouette|10 years ago|reply
I think if anything you're being too kind here. The way dependencies are typically handled in the Node/NPM ecosystem is horrendous, and this has serious negative implications for the stability and longevity of almost any project built within that ecosystem today.

Unfortunately, modern web development is led, at least in publicity terms, by people who think phrases like "move fast and break things" and "living standard" are cool. The trouble is, what actually happens if you follow those people is that you break lots of things very quickly and you have no stable standards that help everyone do better.

[+] Outdoorsman|10 years ago|reply
Your point is one well worth making..."technical debt" is not an inept choice of terms, IMHO...

I've been in the trade for decades now...I'm more inclined to apply the term "technical baggage" to the feeling I often have these days when sitting down to solve a problem for a client...

Layers upon layers of dependencies to sort to do a proper job of putting systems back on sound footing...multiple patches to be examined, some to be discarded...with the clock ticking...non-tech managers love to hear you say, "OK, you're back up"...increasingly, that's what we settle for...

All to keep the wheels of commerce turning with as little down time as possible...

I have to admit that my discomfort is at least somewhat influenced by comparing the current environment to the environment of the early days...essentially stand-alone programs, elegantly written, performing the same basic functions that now require five, ten...

Forgive the reminiscence...times are different...

[+] gotchange|10 years ago|reply
How's web development any different from other "branches" of software development when it comes to technical debt or deep dependency trees?!

It's been the norm in the field of software development since the early days to rely on other people's work, for productivity reasons and many other objectives, and consequently to trade off convenience against technical debt, whether implicitly or explicitly. I don't think that web development is more guilty of this "sin" than any other branch of the field.

[+] lr4444lr|10 years ago|reply
I think you make some good points, as do a number of your repliers. We're pulled in different directions by a number of business and technical priorities. What I see as a common denominator is a development life-cycle that expects increasingly nimble articulation of break-off projects and shorter iteration times into production, at least in SaaS. Concerns about code stability and optimization are not entirely lost by any means, but they don't seem to have the same priority when the business "has to" keep moving, and the tech "has to" keep evolving at this pace.
[+] lhnz|10 years ago|reply
The problem with hypermodularization is that it takes time to make a decision on what module to use for every task. If you have to do this with 100s of modules, you're wasting a lot of valuable time.

Does anybody know a quick way of making the right decision?

NPM is like a crowded graveyard nowadays (or like .com domain names - where all of the good module names were taken a long time ago and now all the best-practice modules have completely irrelevant names). There are thousands of buggy modules in which development has stalled, and it's easy to waste a lot of time trying to separate the wheat from the chaff. I personally sometimes have to open up 5+ GitHub repositories to check their last commit, number of contributors, interfaces, code quality, unanswered issues, etc. Only after doing so am I able to make a decision.
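
That vetting checklist can even be half-mechanized. A toy sketch: the metadata fields mirror what the GitHub API and npm registry expose, but the thresholds and weights here are invented purely for illustration.

```javascript
// Toy scoring function for the vetting checklist above. The metadata
// fields mirror what the GitHub API / npm registry expose, but the
// thresholds and weights are invented for illustration.
function scoreModule(meta) {
  var score = 0;
  var msPerDay = 24 * 60 * 60 * 1000;
  var daysSinceCommit = (Date.now() - meta.lastCommit.getTime()) / msPerDay;
  if (daysSinceCommit < 90) score += 2;   // recent commits: actively maintained
  if (meta.contributors >= 3) score += 2; // more than a one-person bus factor
  if (meta.openIssues < 50) score += 1;   // issue tracker not drowning
  if (meta.hasTests) score += 2;          // some signal of code quality
  return score; // higher is better; compare candidates, don't treat as absolute
}

// Hypothetical metadata for two candidate modules:
var active = { lastCommit: new Date(), contributors: 12, openIssues: 8, hasTests: true };
var stale  = { lastCommit: new Date(2014, 0, 1), contributors: 1, openIssues: 240, hasTests: false };

console.log(scoreModule(active)); // → 7
console.log(scoreModule(stale));  // → 0
```

Of course the numbers are the easy part; interfaces and code quality still need the eyeball test.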

In terms of knowing what's cutting-edge practice it seems you have to watch Twitter a lot and be careful not to follow every single bad idea.

I can't imagine what it'll be like to search for a module to handle something as common as config in a couple of years. Even when you constrain yourself to something like '12 factor config' there are many different implementations.
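
For what it's worth, the core of "12 factor config" is small enough to sketch in a few lines; the competing implementations all wrap some variation of this (the variable names here are just illustrative):

```javascript
// Minimal 12-factor-style config: all configuration comes from
// environment variables, with explicit defaults in code.
function loadConfig(env) {
  return {
    port: parseInt(env.PORT || '3000', 10),
    databaseUrl: env.DATABASE_URL || 'postgres://localhost/dev',
    logLevel: env.LOG_LEVEL || 'info'
  };
}

// In a real app you'd pass process.env; a plain object is used here
// so the example is self-contained.
var config = loadConfig({ PORT: '8080' });
console.log(config.port);     // → 8080
console.log(config.logLevel); // → info
```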

Don't even get me started on the insane assembly required to get webpack-, babel-, cssmodules-, and postcss- to all work together.

The problem is only going to get worse.

[+] taurath|10 years ago|reply
Just like the difference between pre-internet and post-internet days, the problem is no longer finding the information, it's filtering the large amounts of available information. Same with libraries and modules - there's probably a module for a LOT of things. You eventually develop a good sense for what heuristics make a good module, and you develop in such a way that the cost of an accidental bad module is low.

You can certainly find full-fledged frameworks out there that do everything for you and make your decisions for you, but the cost of getting that wrong is far higher than a bad module.

[+] wwweston|10 years ago|reply
> I personally sometimes have to open up 5+ Github repositories to check their last commit, number of contributors, interfaces, code quality, unanswered issues, etc.

Only one of these, it seems to me, is likely to be consistently related to the quality of the module.

[+] erikpukinskis|10 years ago|reply
Basically, you look at the project's social presence.

That sounds kind of airy-fairy, but it's honestly how it works. You look at what the README says, how well it addresses questions, you look at the release history, see how maintainers interact with contributors, read a blog post or two to see how the maintainers are thinking.

And then you make a judgement call.

You don't have to get it right every time. That's the beauty of the modular approach. If you get it wrong, then you're smarter when you go to replace that piece of the infrastructure.

[+] EvanPlaice|10 years ago|reply
From a library developer perspective, the major issue I find with NPM is that it stores its own copy of the source.

Old but useful libraries will often stagnate and get forked by users who still want/need to carry on development. Linking directly to a repo would make more sense but that's not how most people use NPM.

> In terms of knowing what's cutting-edge practice it seems you have to watch Twitter a lot and be careful not to follow every single bad idea.

A lot of this stems from the fact that today's solutions will likely become tomorrow's technical debt. It's not a popular opinion on HN, but adjusting development to follow current/future web standards is insurance against future technical debt.

Webpack, for instance, solves today's problems:

Modules:

Currently, there are 3 non-standards (AMD, UMD, CommonJS). To make 3rd party libs interoperable, all 3 have to be supported, so Webpack handles the messy details. That, BTW, is a huge improvement over not being able to use libs that don't support whichever non-standard you choose. As for future standards, Webpack is moving in a very positive direction by adding ES6 module support in v2.

Transpiling:

Transpiling as it's used today will likely become less relevant over time. In terms of Javascript, ES6 provides useful additions to make programming in vanilla JS much nicer. ES7 has the potential to shake things up even more in a really good way. For instance, decorators will make it much simpler and more straightforward to create higher order functions; which in turn will make it much easier to do functional-style programming in JS.
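
A small sketch of both points: ES6 source, roughly the ES5 a transpiler like Babel emits for it, and the kind of higher-order wrapping that decorator syntax would sugar over (applied by hand here, since decorators are still a proposal):

```javascript
// ES6 source, as you'd write it with a transpiler in the toolchain:
const double = xs => xs.map(x => x * 2);

// Roughly the ES5 a transpiler emits for the line above:
var doubleES5 = function (xs) {
  return xs.map(function (x) { return x * 2; });
};

// The higher-order wrapping that decorator syntax would sugar over;
// for now you apply the wrapper by hand instead of writing `@logged`:
function logged(fn) {
  return function () {
    console.log('calling ' + (fn.name || 'anonymous'));
    return fn.apply(this, arguments);
  };
}
const loggedDouble = logged(double);

console.log(double([1, 2, 3]));       // → [ 2, 4, 6 ]
console.log(loggedDouble([1, 2, 3])); // logs "calling double", then [ 2, 4, 6 ]
```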

The next major shift will come with CSS extensions. Less, SASS, Stylus are the current common non-standard solutions to the difficulty of managing large CSS codebases. I'd expect that the web standards people will eventually cherry-pick the good parts from them the way they adopted the good parts of CoffeeScript in ES6. Unfortunately, those who heavily rely on Less, SASS and Stylus will either have to adapt or continue to use/support the tools of a dying standard when everybody else moves on.

Bundling:

Bundling as we know it today is an anti-pattern but also a necessary evil due to the limitations of HTTPv1. Warming intermediate caches helps but a warm local cache trumps all. Unfortunately, bundling is in such widespread use the chances of a user having a warm cache are essentially nil.

HTTPv2 will (hopefully) influence developers to abandon bundling strategies, thereby improving cache reliability for all.

The next major shift we need to improve cache reliability is a widely-adopted versioning strategy that library devs use to mark packages for long-term support. It's insane that everybody relies on bleeding-edge versions of dependencies, but since everybody bundles everything, there's no measurable benefit to sticking with an older, more stable version of a dependency.

I would touch on the issues with the widespread adoption of functional-specific supersets of JS but -- considering the tastes of many HN users -- I really don't feel like being downvoted into oblivion.

> Does anybody know a quick way of making the right decision?

Try to see things from a long-term perspective. Stay cognizant of the nature of the hype cycle.

Some technologies really do have the potential to provide huge improvements in performance and usability. Some will eventually provide the improvements they promise but the first version won't be good enough.

Most tools -- no matter how useful they appear to be today -- will likely die or be replaced by something better in the future.

[+] jordanlev|10 years ago|reply
I think the javascript ecosystem has a fundamental difference from all other languages/communities that came before it: it is universal. Hence I think a lot of the debates raging about the right tools come down to different people using it for different purposes.

I doubt that everyone can consolidate on just one set of tools, because in javascript-land "everyone" means something different than other places. How can a tool that is good for a front-end designer who needs basic DOM manipulation also be the right tool for someone building an entire application in js and only using HTML as a delivery mechanism for their app bundle?

I wish people would recognize in these discussions that their use-case might be different from others, and instead of talking about "the best tools", instead talk about "the best tools for this class of applications".

So hopefully the toolset could be consolidated down to one clear choice for each class of usage. Then the biggest decision to make is deciding which type of application it is you're building.

[+] davedx|10 years ago|reply
This is definitely part of it. JavaScript tooling encompasses back end, front end (progressive enhancement inside server generated web pages), front end (large single page apps), and a bunch of language flavours (ES5, ES6, ES7, TypeScript, JSX, to name a few). So it's understandable the tooling ecosystem is large and diverse, and that plumbing it all together can result in premature hair loss.

We're using ES5 with AngularJS at work, and it's like a breath of fresh air ;)

[+] hodwik|10 years ago|reply
The web community is flooded in negativity because it was swamped by kids with overly idealistic, unrealistic, and heroic ideas about what the web was going to be post-Facebook.

Those people then realized that the web is, like all things, both real and imperfect. So now they're upset. This is all part of growing up.

People who have been involved with the web for a while are not in any way more jaded than before. They already witnessed PHP, PERL, Java Applets, Flash, the browser wars, and so on.

[+] bshimmin|10 years ago|reply
I don't think I agree. I've been developing stuff for the web since the mid-90s, when TABLEs were just about starting to be a thing and Perl was definitely the preferred choice on the server. There has been plenty of excitement along the way, undeniably, but it feels like HTML5 took forever to get to where Flash was, JavaScript on the server still underwhelms me and seems to get more complex and more full of enterprise beans almost daily, browsers are far from equal, the CSS3 spec isn't finished, developing for the mobile web is often a pretty wretched experience... and my clients still ask me to make their logos bigger and complain that important stuff is below the fold. Am I jaded? You bet.

But hey, at least we can centre things vertically in CSS with Flexbox now (apart from the dubious browser support, of course).

[+] swk2015|10 years ago|reply
If this has a grain of truth in it, the 'older crowd' need to lead by example, stop shaming 'kids' for their age, and prove why their old products are better.
[+] baldfat|10 years ago|reply
Creating something is difficult and criticizing is easy, such that showing oneself to be capable by creating is MUCH harder than showing oneself to be capable by criticizing.
[+] ktRolster|10 years ago|reply
> They already witnessed PHP, PERL, Java Applets, Flash, the browser wars, and so on.

It's frustrating that the situation hasn't improved. I thought by now the web would have a clean, easy system to work with. At the rate things are going, I don't have much hope before 2030.

[+] s986s|10 years ago|reply
This is different though. Javascript is something that all of us are, in one way or another, exposed to. Giant companies building an e-commerce platform, startups making fun and innovative SPAs, designers adding a bit of animation and adaptivity to a theme, bare-metal devs bringing Javascript to robotics and operating systems, server-side CRUD applications and websocket handling, C++ devs interested in bringing the best to such a widely used language, mobile phone frameworks and compilers, Microsoft, Google, Apple, Unity, Mozilla.

Javascript is the real deal. What makes it crazier is that 10 year olds can do a few tutorials on HTML5 Rocks and feel confident in their abilities (in a good way). Javascript is not just a crappy language; there's a socio-political aspect that has probably never been seen before.

[+] alextgordon|10 years ago|reply
> People take libraries like lodash – or jQuery, as we analyzed earlier – and insert the whole thing into their codebases. If a simple bundler plugin could deal with getting rid of everything in lodash they aren’t using, footprint is one less thing we’d have to worry about.

If you use Google CDN, why does it matter how big jQuery is? If N people use their own "smaller" M-byte copy of jQuery, browsers will have to download M*N bytes, as opposed to 0 bytes if you use the cached full version. A profound waste of bandwidth.

My advice: Use Google CDN, for less common stuff use cdnjs. Don't adulterate libraries!

[+] justaaron|10 years ago|reply
One should consider the possibility that the entire web development industry is a giant busy-work factory making half-baked kluges to a fundamental problem that will never be papered over: http is a stateless protocol for sending hypertext markup - meta-data and context for resources in a folder somewhere. It's a file-system thing. A web browser is a declarative-markup-tree rendering-engine with scriptable nodes, and the language chosen for scripting is an accident of history. Using http and web browsers in ways they never were intended to be used is possible, albeit painful. Now that we have virtual DOM's and isomorphic platforms like clojure/clojurescript and we compile to JS, now that we have JS on the server, now that we have our head so thoroughly up our ass that we forgot the point, now we can consider the circle of ridiculous nonsense complete...

The world wide web took off, and we have to live with its technical debt, or...

The solution is simple, bold, and risky: 1) pick a port. (Nowadays that's even a joke; we tunnel everything over port 80.) 2) pick a protocol with some future (hey, let's just tack websockets on as an upgrade, gradually get browser support, etc.) 3) keep moving...

I am not ultra impressed with the web of 2016... The web of 1996 was way cooler. I want a VRML3 rendering engine for a browser.

[+] justaaron|10 years ago|reply
Oh, don't even get me started with build tools. Nowadays we have build tools to build build tools.

I'm sorry, but when FRONT-END DEV considers build tools standard we are LOST lost lost...

How many JS libraries and CSS scripts do we really need to embed in a page? How many of those functions or classes are even being used in that page? Why do I have to scroll to view source on a page with mostly text and a few colored boxes?

Hand-coded html and css is not that hard folks...

it's just the habits, the frameworks, etc...

1/4 of the web is powered by Wordpress wtf!?

[+] stcredzero|10 years ago|reply
> Tree-shaking is a game breaker

To my knowledge, this has never, ever, worked well enough in a dynamic environment. Smalltalkers have spent over 3 decades trying to get this to work. What became the Smalltalk industry standard? Some form of code loading, often based on source code management.

Anyone who is doing tooling/library work in a dynamic environment needs to delve into the history of Smalltalk and ask if it was already tried and what the problems were. Chances are, it was already tried, and that there's useful experiential data there.

[+] TimJYoung|10 years ago|reply
Speaking of controversial: I think things are going to keep being bad until developers realize that the problem isn't technical, but rather economic. Back before the current "give it all away for free" trend became a thing, commercialization of software was a given and allowed "winners" to emerge from the chaos. The merits of the "winners" isn't important here, it's the stability that comes with it. As long as everything is given away for free and there are no barriers to entry, you'll keep ending up with chaos and unmanageable churn whereby your job as a software developer has now morphed into a software tester/evaluator for every single piece of functionality that you need and don't want to write yourself.

When you have to actually put your money on the line and have an actual business presence on the web, it's a completely different mindset from "I'm going to write this small library and pop it up on GitHub for free, so who cares if there's documentation or if the software even works as described". The fact that there are developers that are professional and thorough and still give away their software is a minor miracle. But, it's not wise to count on the charity of such developers for the long-term because it isn't realistic. As more and more one-offs are created, it becomes much harder to distinguish one's software from the rest of the pack, so developers will simply not even attempt to do so. The Apple store is a perfect example of this problem.

[+] atemerev|10 years ago|reply
Babel got it wrong.

Hypermodularisation is a good thing in user-facing code. Here we should strive to shave off every last byte.

But Babel is a tool for developers. We don't need configuration explosion and endless plugins. We need all batteries included. The very purpose of Babel is "hey, I want to write hip code like the rest of the cool kids on the block; now, let it run everywhere". Who needs to configure that?

[+] LewisJEllis|10 years ago|reply
I agree that Babel doesn't get every single thing right, in much the same way that everything manages to get something wrong, but I don't think you're giving credit to the benefits of its modularity.

Another "purpose" of babel is "hey, I want to implement an upcoming ECMAScript feature in an extensible and relatively self-contained way so I don't have to go digging deep inside someone's codebase."

Or, it's also "hey, I only need these 3 features and I want my build times to stay as low as possible."

For the use case you describe, there are presets provided to make that quite simple.
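
For instance, with Babel 6 the "all batteries included" configuration collapses to one preset (`es2015` being the standard preset name at the time):

```json
{
  "presets": ["es2015"]
}
```

The "I only need these 3 features" use case instead lists individual transforms under a `"plugins"` key, e.g. `"plugins": ["transform-es2015-arrow-functions"]`, which keeps build times down at the cost of a longer config.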

[+] WorldMaker|10 years ago|reply
Except that Babel has always run in the browser, too, and "hip code" is a constantly shifting target: ES2016 was just finalized (exponentiation operator, Array.prototype.includes), and much of ES2015 is already implemented in current browsers, but certainly not all of it, and the set implemented in each browser is different.

Hypermodularization should mean that Babel stays relevant with new standards (this month's ES2016 announcement, plus we already have a good idea of what should be in ES2017 with the new "continuous deployment" approach to the ES standards) and doesn't get bogged down in legacy code as browsers adopt the standards.

[+] sotojuan|10 years ago|reply
It's interesting how none of the JS/tooling fatigue discussions mention Ember, which has one public-facing tool.
[+] anonyfox|10 years ago|reply
So true. Ember is the antidote right now for the frontend land.

- MVC (Backbone's big thing)
- Two-way binding (Angular's big thing)
- Components (React's big thing)
- SSR like the others now (FastBoot)
- High performance with the Glimmer engine (should beat React in theory)
- Books
- Dedicated package repository (Ember Addons)
- Actually community driven
- Stable + mature (+ seamless upgrade paths)
- And many more advantages

plus, as you said, the ember-cli tool, which gets you up to speed fast. The tooling makes the final success; just have a look at Go... the language appears fairly weak/trivial, but the tooling is excellent.

I could freak out every time I have to set up a React project, weighing everyone's opinions about every little package for every use case. IMO React shifts the complexity from the UI to the tooling. Ember is just perfect and the only real option I could recommend right now, and I'm just sick of thousands of "me too!" approaches for everything; this extreme diversification grinds actual progress to a halt.

[+] aikah|10 years ago|reply
I wouldn't use Ember precisely because it strongly suggests using a command line tool specifically for the framework. I personally ended up with TypeScript as a language + Bower for package management. When it comes to frameworks, either the framework adapts to my toolset or I don't use it. I refuse to compromise and embrace yet another asset pipeline or build tool, no matter how "good" the framework is supposed to be. TypeScript supports both decorators and JSX, so I can tolerate working with either Angular 2 or React. And since build steps are mandatory in Front End development, I, at least, can take advantage of static typing.
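
For concreteness, both features live in one `tsconfig.json`; `experimentalDecorators` and `jsx` are real TypeScript compiler options, the other values here are just illustrative:

```json
{
  "compilerOptions": {
    "target": "es5",
    "experimentalDecorators": true,
    "jsx": "react"
  }
}
```
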
[+] auvrw|10 years ago|reply
Tried to use ember-cli without enforcing certain opinions (e.g. directory structure) on a project and reached out to the community via Slack, but it hasn't really taken off so far.

maybe Goya was a bad choice? someone on the slack used the term "consume" for what i was trying to do w/ ember-cli, and..

https://github.com/ransomw/ember-cli-programmatic-usage

[+] pbowyer|10 years ago|reply
A fantastic post, if only for finding someone else talking about the downsides of hypermodularization.

> Unfortunately, the spirit and praise of the web in Remy’s post isn’t shared by many of these articles. To the contrary, the web development community is flooding in skepticism, negativity, and pessimism.

I started building on the web a year or two after Remy. I believe one reason for the lack of praise is the reason people came to the industry. He & I came because we loved the web, we loved to see what could be done.

Today, the web's an entirely different commercial being. People do this as a job (as do I, which I'm very thankful for), they have less time to see what can be done (constructive contributions take effort) and time is money, so let's develop fast and move on, slagging off everything as we go.

I've been jaded, I'm guilty of being negative about everything.

But I do see hope.

[+] kasey_junk|10 years ago|reply
I heard similar opinions in 1998. I think it probably has less to do with the state of the industry & more to do with the state of the career of the observer.
[+] gsmethells|10 years ago|reply
Hypermodularization sounds like the latest euphemism for Dependency Hell (tm).
[+] swanson|10 years ago|reply
Are there any documented cases of a business failing because their JS payload was too large? I get that smaller code is easier to understand/work with, but I've never been able to internalize the desire for small payload -- just doesn't seem like it ever matters outside of philosophic reasons for saving user's bandwidth (especially if it comes with a steep tooling cost).

It strikes me as a technical pursuit in search of a problem -- but certainly willing to be convinced otherwise.

[+] arohner|10 years ago|reply
Google found a 0.5 second delay caused a 20% decrease in repeat traffic, that persisted after the delay went away (http://glinden.blogspot.com/2006/11/marissa-mayer-at-web-20....)

Amazon found every 100ms slower the site loaded, they lost 1% in revenue: http://www.gduchamp.com/media/StanfordDataMining.2006-11-28....

Walmart.com found a very large change in conversion rate based on page load times: ( http://www.slideshare.net/devonauerswald/walmart-pagespeedsl... slide 37)

My own customers have seen 4x different conversion rate, when grouped by page load time. (i.e. same slide as walmart page 37 above, where the peak is 4x higher than the lowest performing group).

Your business probably won't fail because it's slow, but it will certainly make less money because it's slow.

[+] swang|10 years ago|reply
As a user have you never clicked the back button because the page was taking too long to load?

Businesses won't fail outright, but they will probably lose users/customers who decide not to spend time on the site because of load speed.

[+] szines|10 years ago|reply
I’ve been using ES6 modules in Ember.js development for two years now. Ember CLI has simplified the dev process enormously. App development is so simple and fun with Ember. I don’t understand why people cry about it; the solution exists, they should just use it. That’s all.
[+] pspeter3|10 years ago|reply
At Asana we now just use Bazel [http://www.bazel.io] and TypeScript for our front-end code. It made the tooling dramatically simpler.
[+] napperjabber|10 years ago|reply
One of the major differences between a jr and a sr, IMO, is their ability to tell you what each one of those dependencies solves. If they don't know the sub-dependencies by heart, they didn't read the code, or at least didn't look it up beyond understanding the API.

Then you have the maintenance developer, and everyone loves 'em. He just figures things out and helps you improve your code while you scream at him for not knowing the 'bigger picture'. Fun times.

[+] haberman|10 years ago|reply
In my experience some aspects of the JavaScript experience are great, and others are terrible.

npm is great. It gives you an easy way of specifying your dependencies in your source tree, and makes it extremely easy for people who check out your repo to obtain them. People can run "npm install" and now they have a copy of all your dependencies in "./node_modules". It composes nicely too: "npm install" also pulls the dependencies of your dependencies.
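
A minimal sketch of what that looks like (package names and version ranges here are purely illustrative):

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "left-pad": "^1.0.0",
    "lodash": "^4.0.0"
  }
}
```

Running `npm install` against this fetches each listed package, plus each package's own dependencies, into ./node_modules.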

Babel is great. Sure, I've heard some complaints lately about their latest changes, but so far this hasn't affected me as a user. Babel for me means that I get to write using the most modern ES6/ES7 features and then compile to ES5 for compatibility. For me it works great and mostly hassle free.

The frameworks themselves are great. Not perfect sure, but there are lots of great ideas floating around in React, Angular, d3, moment.js, etc. and the packages built on top of them. Whatever you want to do there is a library out there that someone has put a lot of love into. There is a lot of choice -- yes, maybe sometimes a little bit too much, but I'd rather have that than too little.

Flow is great (and I hear TypeScript is too, and getting better). I can't tell you how nice it is to be able to declare static types when you want to and hear about type errors at compile time. Maybe not everybody's cup of tea, but I love it.

The build systems, minifiers, test runners, etc. are terrible. By far the worst part of JS development for me is figuring out how to glue it all together. When I try to figure it out it's like entering an infinitely complex maze where none of the passages actually lead anywhere.

--

For example, let's say you want to run some Jasmine tests under PhantomJS. Jasmine is a popular unit testing framework and PhantomJS is a popular headless browser, scriptable using JavaScript. Both very cool technologies, but how can you use them together? This is a real example: it's something I really wanted to do, but in the end I literally could not figure out how and gave up.

PhantomJS claims that it supports Jasmine (http://phantomjs.org/headless-testing.html), though it gives several options for test runners: Chutzpah, grunt-contrib-jasmine, guard-jasmine, phantom-jasmine. Time to enter the maze!

Chutzpah looks promising (http://mmanela.github.io/chutzpah/) -- it says it lets you run tests under a command line. It says it "supports the QUnit, Jasmine and Mocha testing frameworks" and you can get it by using "nuget or chocolatey". Dig a little deeper and it starts to become clear that this is a very Windows-centric tool -- nuget says it requires Visual Studio and chocolatey is Windows-only. Our maze has run into a dead-end.

Moving on to grunt-contrib-jasmine. I don't really want to use this because I'm currently using Gulp (Grunt's competitor), but let's check it out. We end up at this page (https://github.com/gruntjs/grunt-contrib-jasmine). This page is sort of a quintessential "JavaScript maze". It contains a lot of under-explained jargon and links to other plugins. And it gives me no idea how to do basic things like "include all my node_modules" (maybe I should list each ./node_module/foo dir explicitly under "vendor"?)

Moving on to guard-jasmine, I end up at https://github.com/guard/guard-jasmine, and it's clear now that I've entered a Ruby neighborhood of the maze: everything is talking about the "Rails asset pipeline", adding Guard into "your Gemfile" (I don't have a Gemfile!!). I really don't want to introduce a Ruby dependency into my build just for the privilege of gluing two JavaScript technologies together (Jasmine and PhantomJS).

The final option in the list was phantom-jasmine, bringing us here: https://github.com/jcarver989/phantom-jasmine. It's been a while so I don't remember everything I went through trying to make this work. But I was ultimately unsuccessful.

[+] mdavidn|10 years ago|reply
Interesting how the industry has come full circle. Static linking and dead code elimination is a problem we solved in the 1970s for compiled languages. Has the time come to adopt a proper linker for the web?

Google's Closure Compiler has supported dead code elimination [1] since 2009, but the feature imposes some unpalatable restrictions. It never supported popular libraries like jQuery. The process is also rather slow, as Closure Compiler must analyze all code in the bundle at once.
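
To illustrate what dead code elimination buys, and what it demands: in a source file like the sketch below, Closure's ADVANCED_OPTIMIZATIONS pass can drop the unused function entirely, but only because nothing references it dynamically; the dynamic property access common in jQuery-style code is exactly what defeats the analysis.

```javascript
// Sketch of what dead code elimination operates on. Under Closure's
// ADVANCED_OPTIMIZATIONS, `unusedHelper` is never referenced anywhere,
// so it would be removed from the output bundle entirely.
function usedHelper(n) {
  return n * n;
}

function unusedHelper(n) {
  // Dead code: nothing calls this, and the compiler can prove it only
  // because there is no dynamic access (e.g. window['unused' + 'Helper']),
  // which is the restriction that breaks libraries relying on such access.
  return n + 1;
}

console.log(usedHelper(4)); // → 16
```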

[1]: https://developers.google.com/closure/compiler/docs/compilat...

[+] virmundi|10 years ago|reply
An interesting thing is how that applies to transpiled languages like ClojureScript. There are a lot of efficiencies gained by using ClojureScript with Google Closure. Even React is faster with Reagent.
[+] draw_down|10 years ago|reply
I thought this was a decent survey of where we're at, but I'm not sure what to take away. I don't care for "opinionated" as a description of tools or code, nor its opposite. It doesn't really mean anything, in my... opinion.