The big advantage here over spriting is not having to deal with the complexities of trying to jam repeated background patterns into sprites, or having to compute pixel offsets for your CSS rules. You can just write your CSS as you normally would, and have the plugin generate appropriate versions of your stylesheets before deploying.
If you'd like to see a demonstration of the speed difference it can make, here's a page that loads 100 small individual images:
Considering dhh mentioned CSS sprites as something going into Rails 3.1, maybe you should see if you can just get this moved into Rails core before the sprites work starts.
The demonstration of speed difference is bogus. Really we should be comparing Jammit vs Sprite load time. Of course 100 files will take longer to load than 1 file!!! But how about 1 extra sprite vs Jammit? Here's a demo:
mmh... there are two kinds of overhead associated with using data URLs.
One is that encoding arbitrary binary data using base64 will increase the size of the data by around one third, so you will transmit more data.
Of course, if you do the caching headers right, this applies only to the first connection.
The other overhead you have to pay for every time: The data has to be base64 decoded before it can be used. Depending on the amount and size of images and depending on the type of implementation in the browser, this can require a significant amount of resources.
edit: In addition I wonder whether browsers cache the base64 decoded output between page loads. If you are really unlucky, the decoding might be done on every page load. Also, I'm not sure how browsers would handle identical data-content in multiple CSS rules: Do they keep multiple copies of the same image in memory? Or do they detect identical data and keep only one image around?
I would do some tests before blindly applying this.
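For reference, the one-third figure follows directly from how base64 works: it emits 4 output characters for every 3 input bytes. A quick Python check (purely illustrative, not tied to any of the tools mentioned here):

```python
import base64

# 3072 bytes of arbitrary binary data, standing in for a small image.
raw = bytes(range(256)) * 12
encoded = base64.b64encode(raw)

# base64 emits 4 output characters for every 3 input bytes,
# so the encoded form is one third larger than the original.
print(len(raw))       # 3072
print(len(encoded))   # 4096

# Decoding is a single linear pass that restores the original bytes.
assert base64.b64decode(encoded) == raw
```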
Firefox keeps a decompressed copy of every on-screen image in memory anyway; that's one of the downsides of large sprites: they can take up a large amount of RAM. I doubt non-sprited base64 images have comparable size issues.
Ideally, in my opinion, when the browser makes a request for example.com/something the server should send a continuous stream of data. It could work something like this:
1) The browser sends the request along with a list of files it already has cached (maybe a special type of cache that never expires so the browser would know that it wouldn't need that file ever again).
2) Then web server sends the actual HTML that needs to be rendered, followed by whatever files the browser may need (css, js, images, etc). The browser would know where the HTML block starts and where other pieces begin, so from its point of view it's the same as if it requested those pieces. The difference is that the server anticipates these requests and sends them along all in one batch. The very first piece would be a list of files the server is sending, so the browser would know what to expect. At the end of the stream, if there are any more files that are needed that haven't been included in the main stream, the browser would just request them as usual.
Doing something like this would effectively render most page accesses to a single request, going from 10 or more requests to a single request would have quite a bit of an impact. For this to work though, both the browser and server would need to support such a feature...
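For what it's worth, the framing for such a batched response could be as simple as a manifest line followed by the concatenated payloads. A toy sketch in Python (purely illustrative; the format and function names are made up, and a real protocol would need escaping, checksums, etc.):

```python
def pack_stream(files):
    """Pack {name: bytes} into one response: a manifest line of
    name:size pairs, then the payloads laid out back to back."""
    manifest = ",".join(f"{name}:{len(data)}" for name, data in files.items())
    return manifest.encode() + b"\n" + b"".join(files.values())

def unpack_stream(stream):
    """Split a packed stream back into {name: bytes} using the
    sizes declared in the manifest line."""
    header, _, body = stream.partition(b"\n")
    files, offset = {}, 0
    for entry in header.decode().split(","):
        name, size = entry.rsplit(":", 1)
        files[name] = body[offset:offset + int(size)]
        offset += int(size)
    return files

payload = {
    "index.html": b"<html>...</html>",
    "app.css": b"body{}",
    "logo.png": b"\x89PNG...",
}
assert unpack_stream(pack_stream(payload)) == payload
```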
Yes. HTTP/1.1 arranged for this with pipelining 11 years ago.
But there were servers that thought they supported pipelining and had corruption issues, so the clients got scared and wouldn't use it. (Besides, the modems on the edge of the web were the problem, not the latency.) Then the proxy people said "Why bother? No one uses it.", and the clients continued to not implement it, or did but left it off by default with a switch to turn it on in a disused lavatory, behind the "Beware of the Leopard" sign. Meanwhile, the server people having run out of useful and useless features to add to their vast code bases actually got around to making pipelining work correctly.
Welcome to 2010.
• Most popular web servers support pipelining, probably correctly.
• <1% of browsers will try it.
• Proxies (firewall and caching) largely break it.
• If you are using scripts that can change the state of the web server, then your head might explode when you consider what happens with pipelined requests.
SPDY is definitely implemented in Chrome/Chromium, though I don't know if it's on by default. I believe at least some google services (google.com in particular) support it publicly as well.
You basically just described HTTP pipelining, which has been part of HTTP for over a decade.
The main problem is that somebody might have installed some broken proxy in the middle that doesn't understand HTTP pipelining. That is why browsers usually disable it by default. The second problem is that sometimes it is faster to open multiple parallel connections.
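To be concrete about what pipelining means on the wire: the client writes several requests back to back on one connection before reading any responses, and HTTP/1.1 obliges the server to answer them in order. A rough sketch of the bytes a pipelining client would send (illustrative only; the host and helper name are placeholders):

```python
def pipelined_requests(host, paths):
    """Build the bytes a pipelining client writes in one go:
    back-to-back GET requests on a single keep-alive connection."""
    return b"".join(
        f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n".encode()
        for path in paths
    )

# Both requests hit the wire before the first response arrives.
wire = pipelined_requests("example.com", ["/style.css", "/app.js"])
print(wire.decode())
```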
One downside that hasn't been mentioned so far, as far as I can tell: the conflict between caching and downloading unneeded image data.
If you use image 1 on pages A & B and not page C, but image 2 on pages A & C but not B, you face a dilemma.
- If you stuff it all in one big CSS file, visitors to B (C) will download image 2 (1), even though it's never shown.
- If you split the CSS in two, with A including both and B & C including only one each, you'll have an extra request on A, and require duplication or a third file if B & C share style rules.
- If each page serves its own CSS file, you have only 1 request but it (and the image data) won't be cached across pages.
This restricts this technique to images that are used all over the place or images that are tiny.
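To put rough numbers on that dilemma (the 10 KB per image is a made-up figure):

```python
IMG = 10  # KB per base64-encoded image (illustrative figure)

# One combined stylesheet embedding both images: a visitor who only
# ever views page B still downloads image 2's bytes.
combined_cost_for_B = 2 * IMG  # 20 KB, half of it never shown

# A stylesheet per page: page A's file embeds both images and page B's
# embeds only image 1, but B's file can't reuse anything A's visit
# cached, so a visitor to A and then B transfers image 1 twice.
per_page_cost_for_A_then_B = (2 * IMG) + (1 * IMG)  # 30 KB

print(combined_cost_for_B)         # 20
print(per_page_cost_for_A_then_B)  # 30
```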
You would have exactly the same issue when applying CSS sprites... the only difference here is including the image data in the CSS file rather than a separate sprite file - removing the need for another HTTP request.
I believe stylesheets must be downloaded before any further rendering can be done. Therefore you block rendering until all images have finished downloading, while a traditional external image sprite can be downloaded asynchronously.
I imagine you could put your base64 encoded images in a separate CSS file to solve this, but then you are back up to one extra HTTP request like CSS sprites.
I was going to make this point on the OP but my comment wasn't posted (yet?)
There are ways around it (define all your background images at the end of the CSS file for example), but I think it's a bit early to be declaring data-urls the holy grail of page performance.
I'll add... it really makes sense for mobile/webkit where you can depend a lot more on CSS3 and most images are small icons or tiles. I've seen Apple use it a lot in their mobile web apps.
And the ie6/7 stuff isn't much to worry about, either - you can just use conditional comments to include a traditional CSS Sprite-based file if ie6/7 is detected.
Yes, it means more overhead, maintaining two versions of your CSS file, but the improvements in speed might be worth it.
I seriously doubt it. You only remove 1 extra request with this technique. The main advantage is that you don't have to maintain a sprite image (but using this without some tool would also be cumbersome).
IMHO one saved request isn't worth doing the same thing twice. And MHTML (http://www.phpied.com/mhtml-when-you-need-data-uris-in-ie7-a...) doesn't come as a saviour either, as you have to duplicate all images in the CSS, which increases the size of the CSS significantly (if it doesn't, you are probably better off using "old fashioned" techniques).
What CWIZO said. But I wish to emphasize that this is a great example of just how nice it would be in a world without ie6/7. I got excited reading about this technique, but the maintenance work associated with including ie hacks makes the technique, for the time being, basically worthless.
Mobile browsers tend to do less caching, won't cache larger files, and don't cache files for as long. Doing an end-run around their image caching seems prone to cause problems.
Has anybody tried this technique on a mobile browser, say Mobile Safari on iPhone, to see how it stacks up?
Is anyone else worried about the legibility of the CSS with data URIs? I mean, I can see how it can be useful, but I think the code is more readable with "spritesheet.png" than with:
iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAABGdBTUEAALGP
C/xhBQAAAAlwSFlzAAALEwAACxMBAJqcGAAAAAd0SU1FB9YGARc5KB0XV+IA
AAAddEVYdENvbW1lbnQAQ3JlYXRlZCB3aXRoIFRoZSBHSU1Q72QlbgAAAF1J
REFUGNO9zL0NglAAxPEfdLTs4BZM4DIO4C7OwQg2JoQ9LE1exdlYvBBeZ7jq
ch9//q1uH4TLzw4d6+ErXMMcXuHWxId3KOETnnXXV6MJpcq2MLaI97CER3N0
vr4MkhoXe0rZigAAAABJRU5ErkJggg==
Now if only there was a way to define a data URI in a similar manner to @font-face. Maybe @data-object or something like that.
I certainly wouldn't write it inline in the CSS. I'd use a preprocessor to keep the style itself readable and have the base64 inserted for the deployment version.
I personally like Sass, but for this purpose CPP or a knocked-together Ruby script would be just as good.
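That knocked-together preprocessor really is only a few lines. This sketch (my own illustration, not Jammit or Sass, and it only handles the simple `url(file.png)` case) scans a stylesheet for image references and swaps each one for a data URI:

```python
import base64
import re

def inline_images(css, read_file):
    """Replace url(foo.png) references with base64 data URIs.
    `read_file` maps a path to its bytes (a stand-in for disk I/O)."""
    def to_data_uri(match):
        path = match.group(1)
        b64 = base64.b64encode(read_file(path)).decode()
        return f"url(data:image/png;base64,{b64})"
    return re.sub(r"url\(([^)]+\.png)\)", to_data_uri, css)

css = ".logo { background: url(logo.png); }"
images = {"logo.png": b"\x89PNG\r\n\x1a\n"}  # fake image bytes
print(inline_images(css, images.get))
# .logo { background: url(data:image/png;base64,iVBORw0KGgo=); }
```

You'd run this over your stylesheets at deploy time and keep the readable originals in version control.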
It would be useful to see a matrix of browser/version support for data URI's embedded in external stylesheets along with anticipated support in future versions.
With data URIs, using the same image in two different CSS rules will result in two instances of the base64 representation of the image in your CSS file. There are workarounds...none are pretty though. MHTML does not have this problem. In fact, in some ways I think MHTML is a more elegant format for asset delivery.
I remember when a sprite was a hardware-assisted moving image on a screen, so I really dislike how this term has been adopted. The fact that it uses a section of a large image does not make it a 'sprite'.
A sprite is just a 2D image. Animation in old 2D games would usually store several frames of animation in a single image, similar to the current use in CSS.
jashkenas | 15 years ago:
http://documentcloud.github.com/jammit/
Here's the page that loads 100 small individual images:
http://jashkenas.s3.amazonaws.com/misc/jammit_example/normal...
And here's the same page, using a single CSS file with data-URIs for all the images:
http://jashkenas.s3.amazonaws.com/misc/jammit_example/jammit...
ccollins | 15 years ago:
http://autocomplete.s3.amazonaws.com/sprite_example.html
pmjordan | 15 years ago:
Compared to decompressing the PNG or JPEG data, decoding base64 is essentially free. It will therefore also not matter much at what stage it's cached.
bkrausz | 15 years ago:
See the discussion on http://blog.mozilla.com/webdev/2009/03/27/css-spriting-tips/
alanh | 15 years ago:
Some zooming browsers also mess up sprites on the edges at certain zoom levels.
powrtoch | 15 years ago:
Such a shame.