So, in other words, the web is rediscovering client/server programming. Welcome to 1990, everyone!
More seriously, though, I think a lot of the reason for the historical tendency towards server side rendering has been:
* Clients were slow: remember, until V8 raised the standard for speed, Javascript VMs were slow. So doing a lot of rendering or data processing in Javascript was a bad idea, since it negatively affected the user's experience with their entire computer, not just your site.
* DOM Manipulation is hard and inconsistent between browsers: Yes, jQuery and friends hide the native DOM API. But we have to remember that wasn't always the case. Rich client-side libraries are all quite young, comparatively.
* Offline capabilities were non-existent: Even today, the HTML5 specification, as implemented, only grants you a fraction of the data storage and retrieval capabilities available to even the most sandboxed native application. And previously, these capabilities simply weren't there - if your app needed to store data, it had to store it server side and tag the client with a cookie or some other form of identification to keep track of which data belonged to whom.
Yes, there is progress being made on all three fronts, but you can't expect developers to throw away years of best practices in a day, then immediately turn around and write HTML5 webapps. I think the evolution towards a more balanced computing model (where more of the computational load is being handled by the client) is ongoing, and will accelerate as browsers become more capable.
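To make the storage point concrete: a minimal sketch of what HTML5 Web Storage now allows, with an in-memory stand-in (my addition, not part of any spec) so the snippet also runs outside a browser.

```javascript
// Minimal client-side persistence sketch using the HTML5 Web Storage API.
// Outside a browser there is no localStorage, so fall back to an
// in-memory stand-in exposing the same getItem/setItem interface.
const store = (typeof localStorage !== 'undefined') ? localStorage : (() => {
  const data = {};
  return {
    getItem: (k) => (k in data ? data[k] : null),
    setItem: (k, v) => { data[k] = String(v); },
  };
})();

// Persist structured data as JSON: no server round-trip, no cookie needed.
function savePrefs(prefs) {
  store.setItem('prefs', JSON.stringify(prefs));
}

function loadPrefs() {
  const raw = store.getItem('prefs');
  return raw === null ? {} : JSON.parse(raw);
}

savePrefs({ theme: 'dark', fontSize: 14 });
console.log(loadPrefs().theme); // prints "dark"
```

Contrast this with the pre-HTML5 situation described above, where the same preference data would have lived server side, keyed by a session cookie.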
It's also a change in mindset in the 'developer community'. Up until, let's say, 5 years ago, Javascript was considered plain dirty: a non-portable hack that was only to be touched in the most extreme circumstances. Websites that didn't work in Lynx were reviled. Oh, how things have changed, and yet stayed the same at the same time (i.e., your 'welcome to 1990' comment).
On a related note, for the last couple of weeks I've been working on rewriting an old (2000-era) web app, originally written in Visual InterDev. Visual InterDev was widely derided back then; it was something only Visual Basic 'programmers' (the chaff of the programming community) would touch. Turns out that many of the things it did are a lot more popular nowadays - client-side calculations, validation, dynamically updating the UI, etc. Of course there was no XMLHttpRequest, so by modern standards it was quite limited; still, it's funny how something so derided back then turned out to be ahead of its time.
It's not strictly client/server, at least not in the sense of thin clients. I think of it as software that also takes advantage of the network (by using its internal client to interact with the server).
What we are really discovering here is that we can build software that uses the (big) network.
The opportunity is that, unlike in the past, this time we suck less at UX design.
Younger developers appear to reject their senior colleagues' way of doing things and prefer their grand-senior colleagues' way. That reminds me of Thomas Kuhn's The Structure of Scientific Revolutions, in which scientific revolutions only take hold as the older generation passes away.
"remember, until V8 raised the standard for speed, Javascript VMs were slow."
Actually, it was Apple, not Google, that kicked off the JS performance wars. Apple and Mozilla were deep into JIT tech before anyone knew about the existence of Chrome.
Offline first is almost magical for end-users, since it's quite contrary to how people perceive the web. Having said that, it's also incredibly difficult to get right.
As others have pointed out, the major problem I face is informing users that some features are not available when they are offline (e.g. file uploads). If you're not careful with the user experience, this leads to lots of support headaches.
Another problem is dealing with legacy client data. If you are storing anything at all in local storage, you need to realize that it's going to stay there once you write it. So handling data migrations on client storage becomes very important, and definitely must be thought through ahead of time.
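One common way to stay ahead of that (a sketch under assumptions; the key names and migration steps are made up for illustration): store a schema version alongside the data, and run ordered migration steps on load.

```javascript
// Versioned client-side storage: run each pending migration in order,
// then bump the stored schema version. Works against any object with a
// getItem/setItem interface (localStorage in the browser, a stand-in here).
const SCHEMA_VERSION = 2;

// Each entry upgrades the stored data shape from version N-1 to N.
const migrations = {
  1: (data) => ({ ...data, createdAt: data.createdAt || null }), // v0 -> v1
  2: (data) => ({ ...data, tags: data.tags || [] }),             // v1 -> v2
};

function migrate(storage) {
  let version = Number(storage.getItem('schemaVersion') || 0);
  let data = JSON.parse(storage.getItem('appData') || '{}');
  while (version < SCHEMA_VERSION) {
    version += 1;
    data = migrations[version](data); // apply one step at a time, in order
  }
  storage.setItem('appData', JSON.stringify(data));
  storage.setItem('schemaVersion', String(version));
  return data;
}

// In-memory stand-in so the sketch runs outside a browser too.
const fake = {
  _d: {},
  getItem(k) { return k in this._d ? this._d[k] : null; },
  setItem(k, v) { this._d[k] = String(v); },
};
fake.setItem('appData', JSON.stringify({ note: 'hi' })); // legacy, unversioned data
const migrated = migrate(fake);
console.log(migrated); // { note: 'hi', createdAt: null, tags: [] }
```

The point is that a client written today must still cope with whatever shape of data a client from two years ago left behind, so migrations stay in the codebase forever.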
For most apps (which derive their usefulness from dynamic content), what's the point? Is an app that loads but does nothing really worth all the extra development time?
That being said, if you're already making a mobile app or API to go with your site, it does make sense to decouple the frontend (HTML/Cocoa/whatever) from the backend so you only write the backend once. Best way to have an up-to-date, useful API is to use it yourself.
It's definitely worthwhile; web apps are becoming richer all the time. Even if new content can't be retrieved, as a user I expect to be able to access/consume data the app has already presented.
I'd argue the development is no more difficult either; in fact, it lends itself to a style that is easier to unit test.
I'm planning on queuing up updates when offline for my app. On the technical side of things, it's incredibly easy when done from the ground up. In fact I think it's made the entire thing much cleaner and far easier to develop.
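A minimal sketch of that queue-while-offline approach (the class name and the injected send callback are illustrative, not from any particular library):

```javascript
// Queue updates while offline, flush them in order on reconnect.
// `send` is whatever actually talks to the server (e.g. a fetch wrapper);
// it is injected so the queue itself has no network dependency.
class UpdateQueue {
  constructor(send) {
    this.send = send;
    this.pending = [];
    this.online = true;
  }
  push(update) {
    if (this.online) return this.send(update);
    this.pending.push(update); // hold it until we reconnect
  }
  setOnline(online) {
    this.online = online;
    if (online) {
      // Flush in the order the user made the changes.
      while (this.pending.length) this.send(this.pending.shift());
    }
  }
}

// Usage: simulate going offline, editing twice, then reconnecting.
const sent = [];
const q = new UpdateQueue((u) => sent.push(u));
q.setOnline(false);
q.push({ id: 1, title: 'draft' });
q.push({ id: 1, title: 'final' });
q.setOnline(true);
console.log(sent.length); // 2, delivered in order
```

Because the network layer is injected, the queue logic is trivially unit-testable, which fits the earlier point about this style being easier to test.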
The harder part seems to be communicating with the user: if and how to let them know they've gone offline, what that means for their experience, and, if they're allowed to create/update content, what will happen when they reconnect.
For example, the next question for me is what to do with conflicting updates, i.e. when a user updates a document offline and reconnects, and the document has been updated by a different client while they were offline. Discarding or merging isn't a problem from a technical standpoint; the problem is presenting it to the user, and striking a balance between doing what the user wants/expects and not bothering them with a ton of questions about which of their changes to keep/merge.
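The detection side can be sketched like this (the revision-number scheme and field names are assumptions for illustration): tag each document with a revision, and flag a conflict when the offline edit's base revision no longer matches the server's.

```javascript
// Detect a conflicting offline update by comparing the revision the client
// based its edit on against the server's current revision. Returns either
// the applied result or a conflict to surface to the user.
function applyOfflineEdit(serverDoc, offlineEdit) {
  if (offlineEdit.baseRev === serverDoc.rev) {
    // No one else touched the doc while we were offline: fast-forward.
    return {
      status: 'applied',
      doc: { ...serverDoc, ...offlineEdit.changes, rev: serverDoc.rev + 1 },
    };
  }
  // Someone else updated it in the meantime; don't silently discard either side.
  return { status: 'conflict', server: serverDoc, mine: offlineEdit };
}

const server = { id: 7, rev: 3, title: 'Notes', body: 'server text' };
const edit = { baseRev: 2, changes: { body: 'my offline text' } }; // based on rev 2

console.log(applyOfflineEdit(server, edit).status); // "conflict"
```

As the comment says, the code is the easy part; deciding what UI to show when `status` comes back as `'conflict'` is where the real design work lives.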
It's still quite tricky to detect a reliable 'offline' state with HTML5. navigator.onLine tells you that you 'might have' internet access, but even then you may actually be offline. But I agree that the most important issue is how to communicate the degradation to the end user.
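One common workaround (a sketch under assumptions; the `/ping` endpoint is hypothetical): treat navigator.onLine as a hint only, and confirm connectivity by actually fetching a small resource. The fetch function is injected here so the check can be exercised without a real network.

```javascript
// navigator.onLine === false reliably means offline, but `true` only means
// "maybe": confirm it by probing a known endpoint.
async function isReallyOnline(fetchFn, onLineHint) {
  if (onLineHint === false) return false; // definitely no network interface up
  try {
    const res = await fetchFn('/ping'); // small, uncacheable endpoint (hypothetical)
    return res.ok;
  } catch (err) {
    return false; // request failed: treat as offline
  }
}

// In a browser you would call something like:
//   isReallyOnline(fetch, navigator.onLine).then(showOfflineBannerIfNeeded);
// Here, simulate a dead network with a rejecting fetch:
isReallyOnline(() => Promise.reject(new Error('down')), true)
  .then((ok) => console.log(ok)); // prints false
```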
People are consuming more content on mobile year on year, and offline storage and capabilities have been around for the past year or two; how this is only just bubbling to the surface at conferences is beyond me.
Take THE white elephant example: Facebook mobile. The shift is evident; my girlfriend doesn't use a traditional desktop/laptop anymore, she consumes all her Facebook content through her mobile. If it takes more than 5 seconds to load, she gives up.
There's a grey area in accepted practice around what stays online versus offline, but, for example, the whole of Facebook's UI could be cached on a mobile device, with only the updated news/feeds being reloaded.
I stopped using the Facebook GUI after they changed the address books of millions of iOS users. I assumed this trend would continue. Granted, now I just don't look at FB at all, so maybe you're right.
zalew|13 years ago
cache https://webcache.googleusercontent.com/search?q=cache:http:/...
btw a worthwhile read http://www.alistapart.com/articles/application-cache-is-a-do...