Is it just me, or is Paul on some sort of crusade to downplay the dev ergonomics of React and convince people that it's "slow"?
TodoMVC benchmarks have been done before: https://github.com/pygy/todomvc-perf-comparison . So sure, there's room for performance improvements in mainstream frameworks, and React is not the fastest thing in the universe, but come on. Maintaining a large project in vanilla js is largely equivalent to writing code in assembler when there are good C compilers: it's doable, and required in a handful of situations, but not really wise for 99% of real world projects.
Re: Tom's response: Something that he didn't mention (which is not surprising, since he's an Ember dev) is that frameworks do sometimes detract from the end-user experience by imposing "opinionated" complexity and assumptions that might prevent devs from doing certain specific things, forcing them to settle for suboptimal UX - the old adage of "if you want to deviate from the holy way(tm), you're on your own".
I'm kinda in the middle of the two opinions: it's definitely important to have access to the "metal" (both in terms of actually being able to code against low-level APIs, and in terms of the amount of effort required to wade through framework abstractions to get there), but even in vanilla js, a complex app does need a "framework" (in the sense of having rules for where things should live and how they should interact with one another, and in the sense that any non-trivial app will have "library-level" plumbing). So, why not meet in the middle and use a lightweight framework that does 95% of things well enough to actually be used in non-trivial mobile apps[1], but whose byte count is small enough not to be bloated?
[1] http://en.lichess.org/mobile
It seems like a zero-sum game. The problem is that any real-world project that starts off as vanilla, without a framework, is almost assured to mutate into its own framework as the project's complexity grows. Most of the pain points and pleasure points that frameworks bring to the table are slowly replicated with self-built devices.
So the real choice here is not vanilla vs framework but do you want to use someone's framework or build your own.
It seems JS developers are recreating the debates that were already had, numerous times, long ago.
Yes, using a JS framework is going to be slower than writing and optimizing code by hand, in the same way that you can write faster code in assembler than in any high-level language, or that using GTK on top of X is going to be slower than writing directly to the graphics card.
Yet we still use them, because they mean fewer bugs (you aren't reinventing the wheel every time), they encourage code reuse, they add structure, they let other devs get into the code easily, and they open you up to a vast library of modules. Most importantly, they let you ship features faster, and that's where the impact on users is.
Users care more about having a usable product than a fast one that does nothing (with some exceptions, obviously). We can't afford to optimize everything by hand and still ship on time.
Sure, in some use cases it will be too slow, but then you identify those particular cases and take the time to optimize them - even bypassing the framework if you need to.
But don't decide against using a framework just because the general use case will be a bit slower.
Or as Knuth put it decades ago "Premature optimization is the root of all evil"
The complete quote being, because it's relevant here:
> "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%"
The interesting thing, for me, is that I recall the "golden age" of cross-platform GUI widget toolkits as being 1997-2001: AWT, Swing, SWT, wxWidgets, Qt, Gtk, Tk, MFC, XUL, and many others.
And this coincided almost perfectly with the rise of the web and the end of desktop GUI supremacy. The few big desktop successes in the 2000s (uTorrent, DropBox) often used no framework at all or had minimalist UIs where they used little-known corners of the OS but moved the heavy interface lifting to the web.
I'm wondering if this is part of the general pattern of things reaching perfection just as they become obsolete. Right when a technology matures, everyone has an opinion about how to do it "right", and right as it reaches mainstream adoption, all the major opportunities have been plucked clean.
Yes, but the problem is that you can have slower performance during loading, or you can have slower performance overall, and this all comes down to the fact that the DOM does not have a native way of explicitly batching updates, and doing things like touching offset* properties can trigger layouts/repaints. ReactJS uses DOM tree diffing and merging, but that can also get you into trouble with the GC (this may be a solved issue - I don't use ReactJS, so my bad if this isn't correct...).
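The tree-diffing idea mentioned above can be illustrated with a toy sketch. This is not React's actual algorithm - the node shapes and function names here are invented for illustration - but it shows the basic principle: compare two "virtual" trees and emit patch operations, so the real DOM is only touched once per batch.

```javascript
// Toy sketch of virtual tree diffing (invented shapes/names, not React's API).
// Nodes look like { tag, text, children }; the result is a list of patch ops.
function diff(oldNode, newNode, path = []) {
  if (!oldNode) return [{ op: 'create', path, node: newNode }];
  if (!newNode) return [{ op: 'remove', path }];
  if (oldNode.tag !== newNode.tag) return [{ op: 'replace', path, node: newNode }];
  const patches = [];
  if (oldNode.text !== newNode.text) {
    patches.push({ op: 'text', path, text: newNode.text });
  }
  // Recurse into children, pairing them up by index.
  const len = Math.max((oldNode.children || []).length, (newNode.children || []).length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff((oldNode.children || [])[i], (newNode.children || [])[i], path.concat(i)));
  }
  return patches;
}
```

Applying the resulting patch list in one pass is what keeps DOM touches (and therefore layouts/repaints) to a minimum.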
Our product, Elevate Web Builder, uses an in-memory element representation with a DOM change management architecture that avoids all of these issues. However, because of this design, you have to use our product and our framework in order to benefit from it, because it sits as a virtual layer on top of the DOM.
What the DOM really needs, but probably won't get because there could be some serious side-effects from bad coding, is a set of simple reference-counted beginUpdate/endUpdate methods on each DOM element that isolate that portion of the DOM tree from repaints, but not layouts. That way, dimensional information is always immediately available, but painting is always handled as a single last step.
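A userland approximation of that beginUpdate/endUpdate idea might look like the following. This is only a sketch - the real win would require browser support, and every name here is invented - but it shows the reference-counting behavior: writes are queued while the count is non-zero and flushed in one pass when it returns to zero.

```javascript
// Sketch of reference-counted update batching (hypothetical, userland-only).
class UpdateBatcher {
  constructor() {
    this.depth = 0;  // reference count of nested beginUpdate calls
    this.queue = []; // deferred DOM writes
  }
  beginUpdate() { this.depth++; }
  write(fn) {
    // Reads would stay synchronous (so dimensions are always available);
    // only writes are deferred.
    if (this.depth > 0) this.queue.push(fn);
    else fn();
  }
  endUpdate() {
    if (--this.depth === 0) {
      const pending = this.queue;
      this.queue = [];
      pending.forEach(fn => fn()); // single flush = single repaint
    }
  }
}
```

Nested begin/end pairs just bump the count, so independent pieces of code can batch without knowing about each other.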
I'm meandering here, but the point is that improvements in the DOM management layer of the browsers could drastically cut the necessity of frameworks whose primary purpose is to work around deficiencies in the DOM. Fix the browsers, and you fix the load time. Cutting out the frameworks just ends up trading different kinds of pain...
The DOM already does dirty-checking on manipulation. Manipulating the DOM is cheap, it's manipulating the DOM while also getting dimension or style information that's expensive.
None of the major frameworks solve this problem. The closest may be React, which strongly discourages you from interacting with real DOM nodes or doing anything stateful, but even then, if you really want to touch that offsetWidth you can trivially destroy React's performance. Maybe the best alternative is just strict coding standards that say "Don't do that!"
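The read/write interleaving problem can be demonstrated with a mock element - purely illustrative, with an invented fake offsetWidth getter that counts forced layouts - showing why "all reads, then all writes" is so much cheaper than alternating the two.

```javascript
// Mock element: setWidth dirties layout; reading offsetWidth while dirty
// simulates a forced synchronous layout and counts it.
function makeMockElement() {
  let layouts = 0, dirty = false;
  return {
    styles: [],
    get offsetWidth() {
      if (dirty) { layouts++; dirty = false; } // forced layout
      return 100;
    },
    setWidth(px) { this.styles.push(px); dirty = true; },
    get layoutCount() { return layouts; },
  };
}

// Anti-pattern: read-after-write in a loop forces layout every iteration.
function thrash(el, n) {
  for (let i = 0; i < n; i++) {
    el.setWidth(i);
    const w = el.offsetWidth; // forces layout each time
  }
  return el.layoutCount;
}

// Batched: one read up front, then writes only - no forced layouts.
function batched(el, n) {
  const w = el.offsetWidth; // single read before any write
  for (let i = 0; i < n; i++) el.setWidth(i);
  return el.layoutCount;
}
```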
I'd talked with a few Chrome PMs when I was working on Material Design for Google Search and discussed the possibility of some sort of batching API for layout-causing operations. They weren't against it in principle, but nobody could think of an API that was something we'd actually use as web developers. The problem is that thinking about your app in terms of "This expression causes layout, this expression dirties the DOM" is a big cognitive load, and making those two classes of expressions non-interleavable means you suddenly need to structure your whole app in different ways.
There'd also been talk of doing a subset of HTML that includes only the operations that can be done quickly, but to my knowledge, that never went anywhere. You run into problems with the whole huge installed base of the web; if you're going to throw everything out and start from scratch, why not just use a native mobile app, or even raw OpenGL ES commands?
I've yet to see a non-trivial (100+ screens, 10+ devs over 10+ months) production-level app written in "vanilla JS" without any framework (popular open-source or home-grown "monster"). You need _something_ to structure your code and take care of repetitive / boring details.
Big picture, this is a good reminder for when the next big thing comes along <cough>React</cough>, but he forgets where we came from. Prior to Angular, the major way of doing web apps was with rendered templates that would replace whole swaths of HTML, so the "updating the DOM" step truly was more expensive than the JavaScript driving it. This is the problem that Angular (and newer frameworks/libraries) sought to address.
But there's one more consideration: isomorphic rendering. If a framework can be rendered on the server-side then the time-to-interaction only matters if it's longer than it takes for the user to mentally process the page and take an action.
> If a framework can be rendered on the server-side then the time-to-interaction only matters if it's longer than it takes for the user to mentally process the page and take an action.
Yes, but time-to-interaction can be long when connectivity is poor. Connectivity is frequently poor.
Turbolinks is a good alternative if you want to stay on the Rails path.
While Turbolinks 2 just replaced <body>, with Turbolinks 3 you can replace partials without resorting to js.erb templates.
This could be a very dumb question, since I don't use any JS frameworks (on the client side), but… would it make sense to standardize (on the web platform) certain aspects of JS frameworks (their internals), so that the frameworks themselves could become leaner?
My thinking is, if the frameworks do similar things, i.e. provide similar functionality, maybe it would make sense to have browsers provide (standard) APIs for (some of) that functionality.
This is exactly what Web Components are: a standardized, minimal, HTML-compatible component model that's understood by the browser. It solves DOM encapsulation, style scoping, element lifecycle, and DOM composition.
Frameworks and libraries can then build on top of that to provide templating, data-binding, additional lifecycle stages, and other helpers, which Polymer and X-Tags do.
This has happened, is happening, and will continue to happen. Huge swathes of the standardised additions to the web platform over the last fifteen years have been inspired or informed by features that were originally included in JS libraries and frameworks. Library and framework developers participate in standards bodies and give feedback to browser vendors (many of them work for browser vendors), and help to drive what goes into the web platform, based on what they implemented, and what they need for their software.
The problem is, software development is a moving target. The current feature-set of the web platform would let you easily build a state-of-the-art application... in 2005. But things have moved on. People's ambitions and expectations for the functionality, responsiveness across devices, performance, aesthetic appeal, touch friendliness, accessibility and offline capability of their web apps have skyrocketed. And at the same time, we've learned more about the coding strategies and patterns that work (and don't work) for writing large, ambitious web applications.
This is a good thing, because it means the web is moving forward, and so are we. However, it also means we're never going to reach a promised land where the web platform does everything we could ever need it to. There will always be new demands and new technology — retina screens, VR, fingerprint scanners, etc. — and the web platform will need to catch up in those areas. There will always be new frameworks, like React, that overturn existing best practices and experiment with new ways of doing things. This means there will always be a need for libraries and frameworks at the cutting edge of web development, to pioneer the paths that can later be paved via standardisation.
We also need to be wary of premature standardisation. Web components have arguably suffered from this, although their proponents couldn't have predicted it when they began. For years, component-based encapsulation libraries/frameworks gained very little traction on the web, and so Google launched an effort to deliver components natively via a set of standardised APIs. However, halfway through their effort, Angular and React blew up, and suddenly everyone was writing components as directives and JSX components. This has caused some friction, because components as implemented by these frameworks (and thus coded by the majority of web devs) don't quite match the vision set out in web components. Web components will still deliver useful features, such as Shadow DOM, which will find use in these libraries, but had the web components effort started after Angular and React had appeared, its design would likely have looked somewhat different.
I read half of this article before I realized the author was talking exclusively about Javascript frameworks (when I saw the diagram with the frameworks he tested).
There are more frameworks in the world than Javascript frameworks, if you are only going to discuss a subset of them you should make this clear in the headline/introduction. A heading such as "The Cost of Javascript Frameworks" would be appropriate.
Developer ergonomics and agility serve the users' needs in a different way. If I can more quickly deliver value to my users, that's a good thing. It's not black and white, though. These are trade-offs and need to be weighed for a given context/project.
To everyone defending JS frameworks: create a challenge that you believe requires a framework and see if someone can solve it more elegantly in vanilla JS.
Years ago when browser compatibility required 100s of workarounds for each browser and version, it made sense to use a framework to hide the complexity and keep up to date. But with modern browsers this just isn't the case.
- render a list of items, with a panel showing additional details for the selected item.
- update the view when changes from a backend data source arrive, either via some sort of polling mechanism or a persistent connection.
- allow the user to edit fields on the selected item.
- automatically save any pending changes when navigating away from the page.
AKA every CRUD app ever. You are going to need a framework (most commonly MV*) to deal with this in a maintainable way. Whether you use a vendor or create your own, you will end up with a framework.
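For concreteness, the first bullet above can at least be started in plain JS as a render-to-string function (all names invented for illustration). The framework-shaped complexity creeps in with the later bullets: re-rendering on data changes, two-way editing, and save-on-navigate.

```javascript
// Sketch: render a list of items plus a detail panel for the selected one.
// Pure function from data to an HTML string; no framework involved.
function renderApp(items, selectedId) {
  const list = items
    .map(it => `<li data-id="${it.id}"${it.id === selectedId ? ' class="selected"' : ''}>${it.name}</li>`)
    .join('');
  const sel = items.find(it => it.id === selectedId);
  const detail = sel ? `<p>${sel.name}: ${sel.details}</p>` : '<p>Nothing selected</p>';
  return `<ul>${list}</ul><div class="detail">${detail}</div>`;
}
```

Wiring this to `innerHTML`, event listeners, and a polling loop is straightforward; keeping all of that consistent as the app grows is exactly the job a framework (or your home-grown one) ends up doing.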
When you say "vanilla JS", do you mean "just JS with other JS-only libs for special controls, etc.", or "just JS"? Because, if the latter is the case, then the challenge is easy:
Virtual list controls that need to manage thousands of items/rows.
Hint: you're going to need a custom scroll bar.
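The core of such a control is windowing math: from the scroll position and row height, compute which of the thousands of rows actually need DOM nodes. A sketch (names invented):

```javascript
// Sketch: which rows of a virtual list are visible, plus a few overscan
// rows above/below to avoid blank flashes while scrolling.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(totalRows - 1, Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan);
  return { first, last, offsetY: first * rowHeight }; // offsetY positions the window
}
```

Only `last - first + 1` rows ever exist in the DOM; the rest is an empty spacer sized to `totalRows * rowHeight`, which is why native scroll bars stop matching and a custom one becomes necessary.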
Very much this. Fast is very important for usability, but it doesn't matter if it doesn't work correctly. Time and time again in my consulting engagements it proves far easier to get it right first, and then refine and optimize later.
Close.
Front-end frameworks are one solution to the problem of managing the complexity involved in making an app.
After working with a few of them, I'm not really sure that they're the best ones.
- take object of arbitrary depth {a: 1, b: [1, 2, {c: 3}]} and send as querystring parameters to Rails/Node/PHP/whatever backend
- take two parallel ajax calls and run some code when they're both done
- SPA w/ parameterized urls
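Of these, the first is a small recursive function in plain JS - a sketch below, using Rails/PHP-style bracketed keys with array indices (the helper name is invented) - and the second is essentially `Promise.all`. The point either way: none of this requires a full framework, just library code you'd otherwise write or import.

```javascript
// Sketch: serialize a nested object into bracketed querystring parameters,
// e.g. {a: 1, b: [1, 2, {c: 3}]} -> a=1&b[0]=1&b[1]=2&b[2][c]=3 (URL-encoded).
function toQuery(obj, prefix) {
  const pairs = [];
  for (const [key, val] of Object.entries(obj)) {
    const name = prefix ? `${prefix}[${key}]` : key;
    if (val !== null && typeof val === 'object') {
      pairs.push(toQuery(val, name)); // recurse into arrays and objects
    } else {
      pairs.push(`${encodeURIComponent(name)}=${encodeURIComponent(val)}`);
    }
  }
  return pairs.join('&');
}
```

Note that some backends expect `b[]=` rather than `b[0]=` for arrays, so the exact bracket convention depends on the server-side parser.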