I guess I'll never really understand articles like this, or I just have poor reading comprehension. I don't see what the point of this type of thing is, it feels like a rant without any particular recommendations. It can be summed up as "don't use technology without understanding it, and don't blindly follow the crowd." But then again, how are you supposed to learn when a certain piece of technology should be applied, other than apply it and see how it goes? I certainly am not going to just take someone's advice and opinions at face value, unless they can provide hard evidence of why a certain technique results in failure. It requires a certain amount of cognitive dissonance to point to Etsy's 2000 files as a failure while at the same time they are a successful company. Perhaps these bits of incidental complexity we like to obsess over aren't, in the grand scheme of things, all that important?
To me, the notion of having strong opinions on particular techniques for web application (or web site) development strikes me as an example of a subfield having an over-inflated sense of self. Web app development and javascript frameworks are a tiny, tiny niche of software engineering which is an even smaller niche of the entire area of the medium of computing. And it's not a particularly interesting one, imho. I'm not sure why so much ink is spilled on it other than it's easy to form opinions around.
> I'm not sure why so much ink is spilled on it other than it's easy to form opinions around.
Because people read it. And people read it because social cues are important.
When you're going to commit hundreds (or thousands) of man hours to building a system you may have to maintain for years based on very, very partial knowledge, you want to know what other people think about the tools you plan to use.
> It can be summed up as "don't use technology without understanding it, and don't blindly follow the crowd." But then again, how are you supposed to learn when a certain piece of technology should be applied, other than apply it and see how it goes?
That's called research. You do it on your own time, while using more mature & proven solutions for daily work. Play with it, review all the source code. Like it or not, it's going to be your code, once you start using it in earnest. Don't treat it like a friendly black box.
Is that too much to do? Then you're going to have a lot of surprises. Maybe that's fun for you, but I have to say it gets really old over the years.
> It requires a certain amount of cognitive dissonance to point to Etsy's 2000 files as a failure while at the same time they are a successful company.
Er, no, it doesn't require cognitive dissonance at all. In fact, the folks at Etsy blogged about the issues themselves.
It's a common thing for a fast moving, highly successful project to accumulate gunk under the excuse of "it works for now, we'll clean it up later". But, if the clean up never comes, that gunk eventually becomes a noticeable drag on daily development.
> And it's not a particularly interesting one, imho. I'm not sure why so much ink is spilled on it other than it's easy to form opinions around.
If it's not particularly interesting, then why are you reading about it and responding to it? :)
If you are a developer, you shouldn't rely on Javascript. Yes, feel free to use Javascript to make things sexier, but the use cases where a web page should require Javascript to render properly have substantial overlap with the "we didn't think things through" slice of the Venn diagram.
If your page is unreadable with NoScript and RequestPolicy turned up to maximum paranoia, your webpage is broken.
If you're building a client-side application rather than just a web page, then it's okay if turning JS off breaks your app.
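In practice that split can be honored with a simple guard: the page ships a plain form that works via a normal request, and a script only intercepts it when it can. A minimal sketch (the function and parameter names here are illustrative, not from any particular site):

```javascript
// Progressive enhancement sketch: the form works via a normal POST
// without JavaScript; when JS is available, we intercept submit and
// fetch results in place instead of doing a full-page reload.
function enhanceSearchForm(form, submitViaFetch) {
  // Bail out quietly if there is nothing to enhance.
  if (!form || typeof form.addEventListener !== "function") return false;
  form.addEventListener("submit", function (event) {
    event.preventDefault();      // skip the full-page reload...
    submitViaFetch(form.action); // ...and fetch results in place
  });
  return true; // enhanced; without JS this code never runs and the form still posts
}
```

Without the script, the form still posts and the server renders results; the enhancement is purely additive.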
> “All modern websites, even server-rendered ones, need JavaScript” — No, they do not. They can all become better when enhanced with JS.
This. Surfing with uBlock's strict JavaScript settings reveals a surprising number of broken pages whose mandatory scripts do nothing essential. From graphical gimmicks that get stuck covering the page to broken buttons on search forms, there is a lot of stuff that could work just fine without the scripts, which add no real benefit.
I get that for more complex use cases there are tradeoffs to be made (e.g. implementing both a server-side AND a JavaScript version of functions, adding dynamic content to statically hosted pages, ...), but there is also a lot of low-hanging fruit.
I can't count the number of pages I've visited that return nothing but a blank screen with scripting disabled. I've become used to it recently, but it's often on sites without much going on. It's a little ridiculous.
The incredible complexity of the machinery behind many rather plain web sites is embarrassing. There are a lot of web pages which would look exactly the same if implemented in HTML with no Javascript. Except that they'd load faster.
When I make web apps today, HTTP is only used to serve the client application. Basically, it's the equivalent of
wget http://mydomain/myapp
make
run
but with the benefit of running in the browser's protected environment. The above would be OK on Linux running jailed, but a possible catastrophe on Windows, where you basically run everything as root (although that has changed and will probably continue to change in later Windows releases).
Once the "client" has been downloaded and is running, all communications are done via Websockets. So it's no longer an HTTP application or "REST".
What makes it so convenient is that the user can both download and run the application in ONE click (no install, yet cached), without worrying about malware, so I don't need their trust. And the client can run on basically all devices and OSes, so there's no need for porting.
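For what it's worth, a WebSocket-only app like this usually ends up with a small message protocol of its own in place of REST routes. A sketch of a tiny envelope-plus-dispatcher pattern (the shape and names are my own invention, not from the comment above):

```javascript
// A minimal message protocol over one socket: every message is a JSON
// envelope { type, payload }, routed to a registered handler.
function makeDispatcher() {
  const handlers = new Map();
  return {
    // Register a handler for one message type.
    on(type, fn) { handlers.set(type, fn); },
    // `raw` is what socket.onmessage would hand us: a JSON string.
    dispatch(raw) {
      const msg = JSON.parse(raw);
      const fn = handlers.get(msg.type);
      if (!fn) throw new Error("no handler for " + msg.type);
      return fn(msg.payload);
    },
  };
}
```

Both ends can share this dispatcher; wiring it to a real socket is just `socket.onmessage = e => dispatcher.dispatch(e.data)`.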
“Installing things has gotten so fast and painless. Why not skip it entirely, and make a phone that has every app “installed” already and just downloads and runs them on the fly?” (https://xkcd.com/1367/)
The React conference videos are great. The technology seems awesome. But there was one quip that caught me off guard. The presenter said something like: "if your application still has URLs to back up every action, like it's the 90s...". I instantly thought about how and why that would be a bad thing, and how and why progressive enhancement is a great thing. I look forward to a time when JavaScript performance is as "free" as CSS, but we are nowhere near it.
The quote in context was referring to how, thanks to JavaScript rendering server-side and client-side with the same routing, where every action had a URL (like the 90s), you could browse the site powered by JS, without JS, or before it actually loads. It went on to talk about how cool URIs don't change ... JavaScript performance is thus "free" because content is delivered as pre-rendered HTML, then JS is loaded for further manipulation later; if it doesn't load in time, the site still works. The "like the 90s" comment refers to using JS to behave more like static webpages do, which is still a novelty for some single-page apps. The funny part is we probably don't even know how common it is: if a website doesn't break our understanding of the web and also renders server-side, how would most people know that JavaScript is even involved?
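The "every action has a URL" idea boils down to one route table consumed by both the server and the client. A hedged sketch of what that sharing might look like (the route shapes are invented for illustration, not any framework's API):

```javascript
// One route table, imported by both the server renderer and the
// client-side router, so every view keeps a real URL.
const routes = [
  { pattern: /^\/posts\/(\d+)$/, page: "post" },
  { pattern: /^\/$/, page: "home" },
];

// Match a path against the shared table; the server uses this to pick
// what HTML to render, the client uses it on pushState navigation.
function match(path) {
  for (const r of routes) {
    const m = r.pattern.exec(path);
    if (m) return { page: r.page, params: m.slice(1) };
  }
  return { page: "notFound", params: [] };
}
```

Because the table lives in one place, a URL that works without JS keeps working once JS takes over navigation.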
> I look forward to a time when JavaScript performance is as "free" as CSS, but we are nowhere near it.
Actually, it seems the converse is true. Ray Cromwell recently commented[1] regarding Google Inbox:
> It's nothing to do with JS code execution, the problem is in the rendering engine, essentially too much repainting and rendering stalls, and too little GPU accelerated animation paths.
> Javascript is in a very good state these days in terms of cross browser compatibility, and while CSS support has converged between browsers in terms of correctness, CSS support has not converged on performance between browsers. Safari, Chrome, Firefox, and IE have wildly different animation performance hazards on the same markup and debugging this often isn't trivial, for example, finding out that the GPU is stalling on texture uploads deep inside of the render loop.

[1] https://news.ycombinator.com/item?id=8999716
The problem with web development isn't that there is a resurgence of bad development practices. It's that the average web developer isn't as skilled as they need to be.
The average U.S. worker spends 4.6 years in a given career, and yet it takes ~5 years to master a framework. The average computer science grad earns much more than a web developer, so the skill set required for proper development is often lacking. Add into the mix cheap offshore labour, poorly made "out of the box" web packages aimed at medium-small business, and inexperienced "geeks" who build poor websites on the cheap.
Given the skill set required for professional level web development is on par with software development, it's no surprise the role isn't getting the skilled people the career requires.
> it takes ~5 years to master a framework

Are we talking about web frameworks? Because it sure doesn't take 5 years to master most web frameworks.

And it's kind of hard to master one when frameworks completely refactor themselves or fall out of style within 18 months.
> It's that the average web developer isn't as skilled as they need to be.
The problem isn't skills; it's developers wanting to use their shiny new toys just to show off. It's how people use the tools they have at their disposal.
A larger problem, in my opinion, is the victory of HTML5 over XHTML2. XHTML2 would have solved a whole lot of problems developers are still trying to solve by abusing JavaScript or by writing specs like Web Components that just look like "the vengeance of XHTML2".
>The problem with web development isn't that there is a resurgence of bad development practices. It's that the average web developer isn't as skilled as they need to be.
5 years of experience with angular wouldn't teach you that there are certain use cases where it should never be used.
> it would be interesting to learn what led to over 400,000 lines of CSS in over 2000 files for an actually not that complex site in the first place
This is so common. A lot of the time, people use CSS to change the design of a module, which is how it should be. But when they revert the module's look, instead of deleting the old CSS, they add new rules to override the existing ones. That leads to specificity fights and !important.
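The escalation happens because each new rule must out-rank the one it overrides. CSS specificity is roughly a three-part score (ids, classes, elements); this deliberately simplified calculator (it ignores attribute selectors, pseudo-classes, and pseudo-elements) shows why piling overrides on overrides pushes people toward ids and, eventually, !important:

```javascript
// Rough CSS specificity score: [ids, classes, elements].
// Higher-priority component wins regardless of the lower ones,
// so one #id outranks any number of class-only rules.
function specificity(selector) {
  const ids = (selector.match(/#[\w-]+/g) || []).length;
  const classes = (selector.match(/\.[\w-]+/g) || []).length;
  // Element names: a lowercase name at the start or after a combinator.
  const elements = (selector.match(/(^|[\s>+~])[a-z][\w-]*/g) || []).length;
  return [ids, classes, elements];
}
```

For example, `#nav .item a` scores [1, 1, 1] and beats any pile of class-only rules; once a rule like that exists, the next "fix" often reaches for !important instead of untangling it.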
At my previous job, the site was rendered on the server. It was old, and we wanted to redesign it. I wanted to keep rendering on the server. The CTO and VP wanted to build an Angular site because everyone is moving to Angular and it's easy to find Angular developers. Over 30% of the company's new visitors come through SEO.
> At my previous job, the site was rendered on the server. It was old, and we wanted to redesign it. I wanted to keep rendering on the server. The CTO and VP wanted to build an Angular site because everyone is moving to Angular and it's easy to find Angular developers. Over 30% of the company's new visitors come through SEO.
I think I missed the point of your second paragraph, I'd be interested to know what you meant to say. (currently working with Angular on a small hybrid app and thought about experimenting with it on the web but SEO concerns were the first thing that came to mind, so it feels like a relevant piece to me!)
>At my previous job, the site was rendered on the server. It was old, and we wanted to redesign it. I wanted to keep rendering on the server. The CTO and VP wanted to build an Angular site because everyone is moving to Angular and it's easy to find Angular developers. Over 30% of the company's new visitors come through SEO.
IMV most people don't understand the tradeoffs they're making when they adopt Angular. I'm seeing plenty of sites with huge start-up times due to Angular when they could achieve a better experience without it.
(This is a reply to a nested comment, but I felt it was important enough to be top-level):
> and pretending that a site is "broken" if it doesn't work for a handful of geeks is being willfully obtuse.
This is the common rallying cry of the 'use-JS-for-everything' camp. The most important reason a user should have an internet-wide blacklist on JS as the default is because the default should NOT be to allow any website ever to run arbitrary scripts on a user's machine.
That's like always running as root while on a development server. No, wait - that's like having random people from the street have root on your server for arbitrary amounts of time, and all you can do is watch. Principle of least privilege certainly applies to the web as well.
Let's not even mention how an open and private web is hindered, by orders of magnitude, by Google Analytics and Facebook Like buttons tracking you across entirely different spectra of websites, simply because the sites you're visiting have them embedded.
---
EDIT: Can the downvoters please reply with constructive criticism?
I think it's worth noting the difference between a website and an application. I like it when docs, news sites, blogs, etc. work without JavaScript. I consider it very important that my own blog work without JS.
If I'm making a game that's not turn-based, going JS-free is literally impossible.
For a lot of applications you could in theory make a JS-free version, but it would require making a COMPLETELY separate implementation of the application, and for a lot of people that's just not justifiable. Most people simply don't have the time and resources to achieve this, so obviously they'll favor the larger chunk of users.
For example, let's say I want to make an image editor. I can imagine some ways in which I could possibly implement certain functionality without any JS, but the experience would be ABYSMAL. Seriously, consider implementing even a MICROSCOPIC subset of the functionality provided by Photoshop with JUST server-side rendering.
I always take one of two extremes: either the website has to work without JS so I need to make it so that everything works when JS is disabled, or I'm going to assume JS is enabled and I'll be going all-out.
If you're making a website, I'd say you should try to make it work without JavaScript, and in a lot of cases it can be achieved without that much effort. Blogs, news sites, docs, etc. are typically easy.
However, if you're making an application, I'd argue that it's pretty much impossible to do it without JS. It's possible if you're willing to implement your application multiple times (once in a JS-heavy way, and once in a JS-free way), but that's not feasible for most people. The other possibility is implementing it in a way that's friendlier to JS-free users, but for any non-trivial application that'll lead to a really shitty experience for most users. You just can't do sophisticated interactions when you're making something JS-free.
I haven't written a single-page application, but when I published one of my websites last year, I wanted to make sure that as much of the functionality as possible was available for users with Javascript disabled or unavailable. My target audience includes users of cheap phones in developing countries, and while I have heard a lot of developers say "less than 1% of my users don't use Javascript", I believe the numbers are a lot higher in developing countries. I cannot remember the estimates off the top of my head, but I remember it being a significant number. While cheap smartphones should shrink the gap in the future, I didn't want to be contributing to a systemic bias against users in developing countries. It's important to note, though, that my website was a side-project with no profit-motive, so I didn't have a deadline to meet and spent plenty of time testing the site for those edge cases.
You should always know your audience. The problem, I feel, is that some recommendations say something like "always make apps that work without javascript". If you know that about 99% of your users have JavaScript enabled, supporting that last percentage may cost a lot. It is also different for public-service apps from authorities that people are required to use versus a new product or service you want to sell, where there probably is or might be competition that does things differently.
Christian Heilmann makes a lot of thinking errors here:
1) Because a technology fails in the hands of amateurs or learners, doesn't mean the technology is bad.
2) He assumes web developers don't think about the consequences of a JavaScript-only website. In my experience, that's not the case.
3) The fact that there's a lot of talk about JavaScript frameworks does not mean web developers are less interested in the end product. It means JavaScript and everything around it is in flux and improving every month. And the decision which framework to pick is very important. It can mean the difference between a stalled or thriving end product one year later.
I still fail to see why we should not use Javascript for everything.
> "That is the great thing about web technology. It isn’t clean or well designed by a long shot — but it is extensible and it can learn from many products built with it."
That _was_ the great thing about web technology: 'worse is better', but it is no longer an aspirational goal. As a developer I need a well-designed web to build on, and these modern frameworks are doing exactly that.
> If we do everything client-side we do not only need to deliver innovative, new interfaces. We also need to replicate the already existing functionality the web gives us. A web site that takes too long makes the browser show a message that the site is not available and the user can re-try. When the site loads, I see a spinner. Every time we replace this with a client-side call, we need to do a lot of UX work to give the user exactly the same functionality.
There is no "Loading" spinner anymore on a well-built JS-heavy application. Good frameworks (React, Ember with FastBoot) use Javascript on the server to send fully rendered HTML to the client. That works. And they rehydrate this HTML with JSON data and client-side logic so that any further JS interaction is smooth and can be done using the conveniences of the framework.
But if we were to follow the gist of what the article is trying to tell us, we should instead be rendering HTML using typical server-side technologies and use progressive enhancement to add dynamism in the client. This is not a good solution: we have to duplicate rendering logic on both the client and the server using two completely different stacks. It is a lot of cognitive load, needs duplication of effort, and is a maintenance nightmare.
A better solution is to simply render everything using Javascript, and remember that Javascript is no longer a client-side technology. Use the same Javascript to render contents both on the server and the client and rehydrate the rendered HTML transparently on the client.
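Concretely, "the same Javascript on both sides" can be as small as one pure render function. This is a deliberate toy (real frameworks diff against the DOM rather than returning strings), but it shows why nothing needs to be written twice:

```javascript
// One render function, usable on the server (to produce the response
// body) and on the client (to re-render after hydration). State in,
// HTML out; no template exists in a second stack.
function renderTodoList(state) {
  const items = state.todos
    .map(t => `<li class="${t.done ? "done" : "open"}">${t.text}</li>`)
    .join("");
  return `<ul id="todos">${items}</ul>`;
}
```

On the server you send `renderTodoList(state)` as the response; after hydration, the client calls the same function whenever state changes.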
I also disagree with the author's implied assertion that people who use conveniences like SASS do so out of incompetence in wielding CSS. The kinds of abstractions CSS promotes are selectors, specificity, and the cascade. They are not the kind of abstractions one needs to build a maintainable and reusable body of code. We programmers know what those are: variables, modules, objects, control structures, expressions. SASS provides some of those missing pieces: variables, mixins, conditionals, and expressions. In fact, SASS pushes CSS closer to being a programming language, and that is a good thing.
The article closes on this note:
> A lot of our work on the web goes pear-shaped in maintenance. This is not a technology issue, but a training one. And this is where I am worried about the nearer future when maintainers need to know all the abstractions used in our products that promise to make maintenance much easier. It is tough enough to find people to hire to build and maintain the things we have now. The more abstraction we put in, the harder this will get.
I'm as close to an abstraction-hater as the next person. But there are good abstractions and bad ones. Mutating DOM directly using spaghetti Javascript? That is just no abstraction. Once we understand the rendered view as a function of state, then it makes sense to have abstractions based on that idea (like one-way or two-way data binding).
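The "view as a function of state" idea needs almost no machinery to state precisely. A minimal store (the names are mine, not any framework's API):

```javascript
// All mutation goes through one door; the view is always recomputed
// from state, never patched ad hoc. Data binding in real frameworks is
// an optimized version of this loop.
function createStore(initial, render) {
  let state = initial;
  let view = render(state);
  return {
    get view() { return view; },
    update(patch) {
      state = { ...state, ...patch };
      view = render(state); // the view is recomputed, not mutated
    },
  };
}
```

Contrast this with spaghetti DOM mutation: here the current view can always be explained by the current state, which is exactly what makes the abstraction maintainable.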
The need for training is not a point of contention. But if we assume we have competent developers working in good faith, and they still find it hard to write maintainable code for the web, then something else has to be broken. Having used these newfangled frameworks for a while now, I think that is very much the case. Anyone should be able to put together reasonably written web apps without being a master of the craft, but so far it has been hard, not because of people, but because of tools.
I'm not an expert, but I know that server-side JS is a completely feasible approach here. However, it may truly not be the best tool for the job. Requiring client-side JS is a fundamental problem for all sorts of reasons, such as accessibility and more.
One issue is: plain old HTML and CSS are better tools for regular people to deal with to lower the barriers to wider participation in the web. Having to learn JS to make sense of a website is an extra burden.
To me, it comes down to: see all the complaints about brokenness when client-side JS fails or is off. Building things with client-side JS as a requirement is unacceptable. Now, ignore client-side. Can you actually make the case that server-side JS is superior to other frameworks and languages? It shouldn't be used just because it's the same language as the client-side enhancements. If it's not the best option server-side, go with the better options instead. But I don't know enough myself to make that judgment, I'm just talking about the way to think about it.
> I still fail to see why we should not use Javascript for everything.
From time to time I return to Gary Bernhardt's talk, The Birth and Death of JavaScript /pronounced as yah-wa-skript/, which, despite being very hilarious per se, is also very insightful.
[+] [-] aikah|11 years ago|reply
The problem isn't skills, the problem is developers wanting to use their shiny new toys just to show off.It has nothing to do with skills. It's how people use the tools they have at their disposal.
A larger problem in my opinion is the victory of HTML5 against XHTML2. XHTML2 would have solved a whole lot of problems developers are still trying to solve by abusing javascript or writing specs like web components that just look like "the vengeance of XHTML2".
[+] [-] crdoconnor|11 years ago|reply
5 years of experience with angular wouldn't teach you that there are certain use cases where it should never be used.
[+] [-] thekingshorses|11 years ago|reply
This is so common. Lot of time, they use CSS to change the design of the module, which is how it should be. But when they revert the look of the module, instead of deleting CSS, they add new rules to overwrite existing CSS. Which leads to specificity fight and !important.
At my previous job, site was rendering on server. It was old, and we wanted to redesign it. I wanted to keep rendering on the server. CTO and VP wanted to build Angular site because everyone is moving to Angular and its easy to find Angular developers. Company's 30+% of new visitors comes through SEO.
[+] [-] IkmoIkmo|11 years ago|reply
I think I missed the point of your second paragraph, I'd be interested to know what you meant to say. (currently working with Angular on a small hybrid app and thought about experimenting with it on the web but SEO concerns were the first thing that came to mind, so it feels like a relevant piece to me!)
[+] [-] youngtaff|11 years ago|reply
IMV most people don't understand the tradeoff's they're making when they adopt Angular - I'm seeing plenty of sites that have huge start up times due to them using Angular when they could equally achieve a better experience without it.
[+] [-] ianlevesque|11 years ago|reply
[+] [-] chrisdotcode|11 years ago|reply
> and pretending that a site is "broken" if it doesn't work for a handful of geeks is being willfully obtuse.
This is the common rallying cry of the 'use-JS-for-everything' camp. The most important reason a user should have an internet-wide blacklist on JS as the default is because the default should NOT be to allow any website ever to run arbitrary scripts on a user's machine.
That's like always running as root while on a development server. No, wait - that's like having random people from the street have root on your server for arbitrary amounts of time, and all you can do is watch. Principle of least privilege certainly applies to the web as well.
Let's not even mention how an open and private web is hindered by orders of magnitude from Google analytics and Facebook like buttons tracking you across entirely different spectra of websites simply because the sites you're visiting has them embedded.
---
EDIT: Can the downvoters please reply with constructive criticism.
[+] [-] TheAceOfHearts|11 years ago|reply
If I'm making a game that's not turn-based, going JS-free is literally impossible. For a lot of applications you could in theory make a JS-free version, but it would require making a COMPLETELY separate implementation of the application, and for a lot of people that's just not justifiable. Most people simply don't have the time and resources to achieve this, so obviously they'll favor the larger chunk of users.
For example, let's say I want to make an image editor. I can imagine some ways in which I could possibly implement certain functionality without any JS, but the experience would be ABYSMAL. Seriously, consider implementing even a MICROSCOPIC subset of the functionality provided by Photoshop with JUST server-side rendering.
[+] [-] TheAceOfHearts|11 years ago|reply
If you're making a website, I'd say you should try to make it work without JavaScript, and in a lot of cases it can be achieved without that much effort. Blogs, news sites, docs, etc. are typically easy.
However, if you're making an application, I'd argue that it's pretty much impossible to do it without JS. It's possible if you're willing to implement your application multiple times (once in a JS-heavy way, and once in a JS-free way), but that's not feasible for most people. The other possibility is implementing it in a way that's friendlier to JS-free users, but for any non-trivial application that'll lead to a really shitty experience for most users. You just can't do sophisticated interactions when you're making something JS-free.
[+] [-] edwinjm|11 years ago|reply
1) Just because a technology fails in the hands of amateurs or learners doesn't mean the technology is bad.
2) He assumes web developers don't think about the consequences of a JavaScript-only website. In my experience, that's not the case.
3) The fact that there's a lot of talk about JavaScript frameworks does not mean web developers are less interested in the end product. It means JavaScript and everything around it is in flux and improving every month. And the decision of which framework to pick is very important: it can mean the difference between a stalled or a thriving end product one year later.
[+] [-] jasim|11 years ago|reply
> "That is the great thing about web technology. It isn’t clean or well designed by a long shot — but it is extensible and it can learn from many products built with it."
That _was_ the great thing about web technology: 'worse is better'. But that is no longer an aspirational goal. As a developer I need a well-designed web to build on, and the modern frameworks are providing exactly that.
> If we do everything client-side we do not only need to deliver innovative, new interfaces. We also need to replicate the already existing functionality the web gives us. A web site that takes too long makes the browser show a message that the site is not available and the user can re-try. When the site loads, I see a spinner. Every time we replace this with a client-side call, we need to do a lot of UX work to give the user exactly the same functionality.
There is no "Loading" spinner anymore on a well-built JS-heavy application. Good frameworks (React, Ember with FastBoot) use Javascript on the server to send fully rendered HTML to the client. That works. They then rehydrate this HTML with JSON data and client-side logic so that any further interaction is smooth and can use the conveniences of the framework.
But if we were to follow the gist of what the article is telling us, we should instead render HTML using typical server-side technologies and use progressive enhancement to add dynamism in the client. This is not a good solution: we have to duplicate rendering logic on both the client and the server using two completely different stacks. It adds a lot of cognitive load, duplicates effort, and is a maintenance nightmare.
A better solution is to simply render everything using Javascript, and remember that Javascript is no longer a client-side technology. Use the same Javascript to render contents both on the server and the client and rehydrate the rendered HTML transparently on the client.
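A minimal sketch of that shared-rendering idea, with all names hypothetical (this is not actual React or FastBoot API code): the same render function produces the HTML on the server, and the state it was rendered from is embedded alongside it so the client can rehydrate and take over without showing a spinner.

```javascript
// Hypothetical shared render function: the same code runs on server and client.
function renderApp(state) {
  const items = state.items.map(item => `<li>${item}</li>`).join("");
  return `<ul>${items}</ul>`;
}

// Server side: ship fully rendered HTML plus the state it was rendered from.
function serverResponse(state) {
  return [
    `<div id="app">${renderApp(state)}</div>`,
    `<script>window.__STATE__ = ${JSON.stringify(state)};</script>`,
  ].join("\n");
}

// Client side (conceptual): read the embedded state and take over rendering,
// so further interactions re-render locally instead of round-tripping.
// const state = window.__STATE__;
// document.getElementById("app").innerHTML = renderApp(state);
```

The key point is that `renderApp` is written once; the duplication the progressive-enhancement approach forces simply disappears.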
I also disagree with the author's implied assertion that people use conveniences like SASS mainly out of incompetence in wielding CSS. The abstractions CSS offers are selectors, specificity, and the cascade. Those are not the kind of abstractions one needs to build a maintainable and reusable body of code. We programmers know what those are: variables, modules, objects, control structures, expressions. SASS provides some of the missing pieces: variables, mixins, conditionals, and expressions. In fact SASS pushes CSS closer to being a programming language, and that is a good thing.
The article closes on this note:
> A lot of our work on the web goes pear-shaped in maintenance. This is not a technology issue, but a training one. And this is where I am worried about the nearer future when maintainers need to know all the abstractions used in our products that promise to make maintenance much easier. It is tough enough to find people to hire to build and maintain the things we have now. The more abstraction we put in, the harder this will get.
I'm as close to an abstraction-hater as the next person. But there are good abstractions and bad ones. Mutating the DOM directly with spaghetti Javascript? That's no abstraction at all. Once we understand the rendered view as a function of state, it makes sense to build abstractions on that idea (like one-way or two-way data binding).
The need for training is not a point of contention. But if we assume competent developers working in good faith still find it hard to write maintainable code for the web, then something else has to be broken. Having used these newfangled frameworks for a while now, I do think that is very much the case. People should be able to put together reasonably written web apps without being masters of the craft, but so far it has been hard, not because of the people, but because of the tools.
[+] [-] quadrangle|11 years ago|reply
One issue is: plain old HTML and CSS are better tools for regular people to deal with to lower the barriers to wider participation in the web. Having to learn JS to make sense of a website is an extra burden.
To me, it comes down to: see all the complaints about brokenness when client-side JS fails or is off. Building things with client-side JS as a requirement is unacceptable. Now, ignore client-side. Can you actually make the case that server-side JS is superior to other frameworks and languages? It shouldn't be used just because it's the same language as the client-side enhancements. If it's not the best option server-side, go with the better options instead. But I don't know enough myself to make that judgment, I'm just talking about the way to think about it.
[+] [-] ttflee|11 years ago|reply
From time to time I come back to the idea in Gary Bernhardt's talk, The Birth and Death of JavaScript (pronounced yah-wa-skript), which, despite being very hilarious, is also very insightful.