Separate domains for CSS and image files are primarily intended to overcome certain (old?) browsers' limit on the number of concurrent connections to a single domain [1][2]. Cookieless domains can only help you if you have supermassive cookies. Keep those on a special subdomain, if you really need to have them. But you probably don't.

[1] http://www.stevesouders.com/blog/2013/09/05/domain-sharding-...

[2] http://www.stevesouders.com/blog/2008/03/20/roundup-on-paral...
There is no one rule of thumb for performance. Generally, inline everything that's small enough to inline. If it's big enough such that caching a separate file will help you on subsequent page loads, then put it in a separate file.
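As an illustration of that trade-off (the threshold and file names are made up, not a rule): a few hundred bytes of critical CSS can go inline and save a round trip, while a large stylesheet is better served as a separate, cacheable file:

```html
<!-- Small, critical CSS: inline it and save a round trip. -->
<style>body { margin: 0; font-family: sans-serif; }</style>

<!-- Large stylesheet: keep it external so the browser can cache
     it across subsequent page loads. -->
<link rel="stylesheet" href="/css/site.css">
```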
Test and measure. If you can't measure the difference, it doesn't matter. If you don't measure the difference, the rule you are following probably isn't helping you.
> Test and measure. If you can't measure the difference, it doesn't matter. If you don't measure the difference, the rule you are following probably isn't helping you.
Most modern browsers are limited to 6 connections per hostname [0] (IE is an exception), and usually there are many more than 6 images/assets, so separate domains still make sense.

[0] http://www.browserscope.org/?category=network
Is the "flash of unstyled content" really so "dreaded"?

/Yes/.
I've noticed it regularly on the publish side of the Google Play Store.
The chain reaction this sets off in my head is about 100 FOUCs long. And it goes something like this.

Hey, I just saw a FOUC on the Google Play Store admin!
They must not be optimizing CSS delivery for above-the-fold content... [0]
Or maybe they just don't lavish the same resources on admin sites as they do on front-end...
I mean, did anyone else see that? And what if they did? What would they think? They wouldn't think anything, because only a web developer would even recognize a FOUC...
And even if someone else saw it and recognized it for what it was... so what?
If I tried to explain to someone what happened in that moment, that Google, yes, Google had failed to expend all the necessary brain power on ensuring that their markup was not rendered a fraction of a second before their stylesheets were parsed...
And that people actually care about that, actually want to protect users from that horrid sight...

I would be seen for what I am, which is a madman.

[0] https://developers.google.com/speed/docs/insights/OptimizeCS...
I use a lot of high latency connections (my country has rather slow ping times to major internet centres - around 200ms) and sometimes connections just get "stuck" when loading the CSS.
This results in seeing the title of the page with a loading spinner and a blank white page. If browsers actually rendered the page (is CSS really that important on a news site?) I would at least be able to see the content.
Paranoia over the FOUC is the bane of my web browsing days. Any time you use unstable internet (hostels in poor countries), designers' prudish paranoia over their naked content being shown without the proper styling or font ruins modern web browsing. I stopped timing when every other page I saw was literally > 1 minute of waiting. I started using view-source: just to read the damn content.
Please, for the love of God, stop hiding content behind styles. Please. Please please please.
I'm personally fairly irked about the massive shift towards "CDN all the things!" I have NoScript blocking third-party assets, and a larger number of sites breaks for me every day.

Rather than CDNs, there should be a SHA or MD5 hash sent with every asset, like an ETag, so that things need not live on a specific domain to be pulled from cache.

EDIT: those downvoting, care to state your case?

Your CDN alternative is not clear (to me at least).
There's an emerging standard for specifying a hash for a resource, and potentially loading it from one of several locations, so long as it matches the given hash: http://www.w3.org/TR/SRI/
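A minimal sketch of what that SRI markup looks like (the hostname and the integrity value here are placeholders, not real values):

```html
<!-- Subresource Integrity: the browser verifies the fetched file
     against the declared hash before applying it. The integrity
     value below is a placeholder, not a real digest. -->
<link rel="stylesheet"
      href="https://cdn.example.com/styles.css"
      integrity="sha384-PLACEHOLDER_BASE64_DIGEST"
      crossorigin="anonymous">
```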
Perhaps a silly idea / question, but could browsers support some kind of an optimization meta tag that tells to fetch the resource without sending cookies? something like `<img src="..." data-no-cookies=true>` or even a directive that applies to all static resources unless specified otherwise, e.g. `<meta no-cookies-for="jpg;css;js">` ??
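For what it's worth, browsers did ship something close to this: the `crossorigin="anonymous"` attribute makes the fetch a credential-less CORS request, i.e. no cookies are sent (the hostname below is illustrative):

```html
<!-- Fetched in CORS "anonymous" mode: no cookies or other
     credentials are sent with the request. -->
<img src="https://static.example.com/logo.png" crossorigin="anonymous">
<link rel="stylesheet" href="https://static.example.com/styles.css"
      crossorigin="anonymous">
```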
In HTTP/2 (SPDY) the only headers that need to be sent are ones that changed since the last request on the connection. So once that is more common you'll actually be better off just attaching the cookies to all requests since that will mean they only get sent once per TCP connection.
Great post by Jonathan. For the case of a big 'single page' JS app where a most of the rendering happens from Javascript fetched on the page, I'd guess this approach won't help that much since you have to fetch the JS from the CDN/cookieless domain anyways. Still though, definitely worth an experiment to know for sure.
> For the case of a big 'single page' JS app where a most of the rendering happens from Javascript fetched on the page, I'd guess this approach won't help that much since you have to fetch the JS from the CDN/cookieless domain anyways.
If you're taking that approach, you probably don't care much about performance anyways (I've never seen a pure JS SPA which rendered fast).
I've always shied away from a separate "cookie-less domain" in favor of cookie-less sub-domains (think costs and SSL). This does require you to think about www/no-www (cookie-less sub-domains require you to limit the cookies to the www sub-domain, since no-www means *.domain inherits all cookies). I wonder if this is still relevant in the HTTP/2/SPDY world?
If you terminate SSL at the CDN, and you don't own the network between the CDN and your app servers, won't that leave your data open while in transit?
I'm using cloudfront and aws, reluctant to let cloudfront be the root CDN because of this. Anyone got any insight?
Can't speak for Cloudfront, but I do know that Cloudflare has mitigated this problem. When you set up SSL, you have the option to force SSL from end to end. Of course, this approach means that TLS is terminated twice, once at the CDN and again at your origin, and it also relies on the assumption that you can trust Cloudflare with your data as it passes through their internal network.
I don't trust this analysis. There's no clear mechanism for why forcing the CSS to the same domain would speed things up. Also, the comparison doesn't truly isolate the same-domain/different-domain decision as the cause of any slowdown. Perhaps the test server is simply less-loaded/less-laggy/network-closer to his measurement browser... so it's the move of the CSS to that server, not the unification of source domains, that causes the measured speedup. Or many other things.
Loading CSS from the same domain speeds things up for a couple of reasons:
1. No DNS look up for the second host
2. Many browsers speculatively open a second TCP connection to the original host in anticipation that another request will be made so the TCP negotiation overhead for the second request moves forward
3. CSS is on the critical path for rendering so getting it more quickly improves rendering time
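The first two costs can be partially hidden with resource hints, which warm up DNS resolution and the connection to the asset host before any request is made (the CDN hostname is illustrative):

```html
<!-- Resolve DNS and open a connection to the asset host early,
     before any request to it is actually issued. -->
<link rel="dns-prefetch" href="//static.example.com">
<link rel="preconnect" href="https://static.example.com">
```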
I think a much more interesting concept is prefetching files. Annotate a link with some of the critical files its target page needs, so that the browser can start downloading them before the user follows the link.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Link_prefe...
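A sketch of what that looks like with link prefetching (the paths are made up): the browser may fetch the hinted files at idle priority, so they are already cached when the user navigates:

```html
<!-- Hint that the next navigation will likely need these files;
     the browser may fetch them at idle priority. -->
<link rel="prefetch" href="/next-page.html">
<link rel="prefetch" href="/css/next-page.css">
```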
What do you guys think of having an absolute-minimum page with everything inlined (even images) that just shows a simple progress bar while it loads the rest of the CSS/JS/whatever?
I've done it once for a large front end app, and it worked pretty well - the user gets an almost instant webpage and sees that the stuff is loading.
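A minimal sketch of that shell-page pattern (file names are made up): everything needed for the first paint is inline, and the real assets are loaded afterwards from script:

```html
<!DOCTYPE html>
<html>
<head>
  <style>
    /* Inline just enough CSS to paint the loading state. */
    #loading { font-family: sans-serif; text-align: center; margin-top: 40vh; }
  </style>
</head>
<body>
  <div id="loading">Loading&hellip;</div>
  <script>
    // Load the real stylesheet and app bundle after first paint.
    var css = document.createElement('link');
    css.rel = 'stylesheet';
    css.href = '/css/app.css';   // hypothetical path
    document.head.appendChild(css);

    var js = document.createElement('script');
    js.src = '/js/app.js';       // hypothetical path
    js.async = true;
    document.body.appendChild(js);
  </script>
</body>
</html>
```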
One issue is that you should not use any compression (HTTP or TLS) on a request/response with any sensitive info, such as session IDs or CSRF tokens (see the CRIME and BREACH attacks).
It's easy to turn off compression on your www domain and turn it on on your cdn domain.
So now you're not compressing your css, which would slow the response time, but by how much I can't say. You could still use css minification.
> Here’s the rub: when you put CSS (a static resource) on a cookieless domain, you incur an additional DNS lookup and TCP connection before you start downloading it. Even worse, if your site is served over HTTPS you spend another 1-2 round trips on TLS negotiation
Unless you're using SPDY, using a different domain doesn't add any more TLS overhead than using the same domain, right? I didn't think that browsers reuse connections to the same server.
> Unless you're using SPDY, using a different domain doesn't add any more TLS overhead than using the same domain, right? I didn't think that browsers reuse connections to the same server.

This isn't new to SPDY: HTTP/1.1 has keep-alive.