workhere-io | 11 years ago
Single page apps can easily be static (static HTML page + static JSON). The point of this would be to decrease the download size for each new page visited by the user.
CHY872 | 11 years ago
Some sites obviously inline CSS or JavaScript, but that can be eliminated if necessary (and only affects the first page load anyway).
This information is essentially free to generate on the server side, so it doesn't slow the computation down at all (it's just a string-builder function, essentially). Furthermore, the transfer time is generally not the deciding factor - it's the server-side time spent putting the rest of the information together.
To give one example, I went to a typical website - the Guardian (a fairly standard high-traffic news site). Chrome informs me that requesting one article took 160ms to load the HTML - 140ms of waiting and 20ms of downloading. The RTT is about 14ms, so that's roughly 110ms of generating the web page and 20ms of actually downloading it. It's about 30kB of compressed HTML (150kB uncompressed), and most of it is 'static content' - inlined CSS and JS.
If they used the single-page model, it would reduce the page download time (for every page after the first) by an absolute maximum of 20ms - meaning the time to load each page drops by about 12%.
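The arithmetic above can be sketched out explicitly. The numbers are the ones quoted in this comment; the two-round-trip allowance for connection overhead is my assumption about how the ~110ms figure was reached:

```python
# Back-of-envelope breakdown of the Guardian request described above.
# All inputs come from the comment; the 2x RTT allowance is an assumption.
total_ms = 160      # total time to fetch the article HTML
waiting_ms = 140    # Chrome's "waiting" (time to first byte)
rtt_ms = 14         # measured round-trip time

download_ms = total_ms - waiting_ms   # 20 ms of actual transfer
server_ms = waiting_ms - 2 * rtt_ms   # ~112 ms of page generation

# Best case for the single-page model: the download time vanishes entirely.
max_saving = download_ms / total_ms
print(f"generation ~ {server_ms} ms, download = {download_ms} ms, "
      f"max saving ~ {max_saving:.0%}")
```

Even in this best case, the saving is bounded by the 20ms download, because the ~110ms of server-side generation happens regardless of how the page is delivered.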
This is fine, but almost all of that data is just the result of string concatenation and formatting - i.e. free processing (or at least almost-free processing). It's gathering the rest of the data that is somehow taking the 100ms (that, or poor implementations).
The cost of moving data around on websites is typically small compared to the time spent actually producing the content. That's why we see people happily inlining huge amounts of CSS etc. on each web page and having users download it time after time - at only about 10kB compressed, the data transfer is inconsequential and is normally dominated by the RTT.
Spending all this time writing these frameworks for the performance benefits is a fallacy - the data still has to be generated somewhere, and if that happens dynamically it's slow as hell. The savings can never become that great: at most they yield 20-30ms of improvement if bandwidth is acceptable.
Writing the frameworks because they make development easier is a much more reasonable argument.
All of this still distracts from the fact that non-static websites are typically dog slow, and they shouldn't be.