
Loading 180 tiled images with HTTP/2 vs. HTTP/1

71 points | mohamedattahri | 11 years ago | http2.golang.org | reply

54 comments

[+] jewel|11 years ago|reply
Note that this means that there's no longer a pressing need to consolidate multiple javascript files into one file, likewise with CSS files. Icon sprites aren't necessary either.

There are still some gains from combining the files, as it will mean fewer headers will go over the wire, and you'd still want to minify them so that fewer total bits are sent, but the gains aren't going to be as big as they used to be.

[+] Kudos|11 years ago|reply
This is one of the more exciting parts for me.

At Udemy ~90% of our visitors have SPDY3 or better (tech-savvy audience). Our JS needs an overhaul and the timing is great for a rethink on how we deliver it.

We're looking at optimising for those users and no longer bundling our JS and CSS. We need to test it thoroughly, because we can't simply flick a switch and flick it back if it doesn't work out: we'd first need to move away from Fastly to a CDN that supports SPDY.

[+] mohamedattahri|11 years ago|reply
It's a good thing that all headers are now compressed and many are only sent once, so I'm not sure that extra headers are what would hurt.

I'm curious as to how JavaScript engines perform with one hundred files multiplexed vs. one giant file.

[+] andrewstuart2|11 years ago|reply
I personally still think that JS should always be pre-processed.

When you can spend longer up front optimizing the concatenation, minification, and compression, you can amortize that additional cost across every request and still make important incremental gains.

A bit saved is a bit earned.

[+] ZeroGravitas|11 years ago|reply
There aren't many benefits remaining to combining files. I'm sure the new best practices will get worked out in detail, but for example: you can update one JS/CSS/image file and regular users will still have all the rest in their cache; people who only ever use one part of your site may never need to load some JS/CSS/image files at all (reducing both load and parse time as well as memory); and files you know you'll need can be prioritized or pre-cached rather than combined, like we used to do for image rollover effects.
[+] r1ch|11 years ago|reply
Unfortunately most SPDY / HTTP2 implementations mandate TLS. This means websites that rely on ad revenue cannot benefit from these improvements without losing revenue. I'd really love to use SPDY / HTTP2 but ad revenue is more important, so I'll be continuing to sprite for the near future.

Given how awful ad serving architecture is these days for smaller publishers (chains of javascripts and iframes), I don't anticipate this being fixed for quite a while.

[+] nickspacek|11 years ago|reply
Similar to gzip applied by the web server, you could build minification into the server itself, although given that minification approaches are more varied and less standardized, it might not be the greatest idea.
[+] bsdetector|11 years ago|reply
Firefox Nightly:

  HTTPS 1.1: 22 ms latency, 2.83s load time
  HTTPS 2:   17 ms latency, 2.91s load time
The Network panel in the web developer tools says the second was actually fetched over HTTP/2, so it looks like the demo worked... just not as intended.

So HTTP/2 provides no performance benefit in this case. Although I do have pipelining turned on, and unlike the gopher tiles demo, this server actually returns the "Connection: Keep-Alive" header necessary for pipelining to be used, so that might explain it; Microsoft Research did determine that SPDY and pipelining have essentially the same page load performance.

[+] mahouse|11 years ago|reply
The server seems to be overloaded right now.

Anyway, disable HTTPS Everywhere if you have it; otherwise there will be no difference between the two runs. :-)

[+] dcsommer|11 years ago|reply
The reverse schadenfreude here is great. All the haters on HN complained about HTTP/2 endlessly, and here we've given them a faster Internet anyway.
[+] ben_pr|11 years ago|reply
I am shocked at the differences. A picture is worth a thousand words.
[+] ZeroGravitas|11 years ago|reply
The Google PageSpeed Service has a test page where you provide a URL and it runs it through the webpagetest.org service twice, to show before-and-after times (and other details) for all the various optimisation techniques it applies automatically.

https://developers.google.com/speed/pagespeed/service/tryit

One of these optimisations is the use of SPDY/HTTP2, I believe. (Actually, it seems like WebPageTest has some issues with HTTP2 currently, though it's at the top of their TODO list to fix: https://github.com/WPO-Foundation/webpagetest/issues/20.) It would be great if they (or someone else) provided this service but changed only the use of HTTP2, so that people could run this benchmark on their own sites and get a comprehensive view of what kinds of sites would benefit from switching today (without even bothering to remove old optimisations like image spriting etc.).

Those sites that benefit without any change may be the first to move, since so many workflows are built around concatenation of files etc., and that workflow will still be needed for some percentage of users, probably for years to come, so an initial, easy win would be good to demonstrate. I'm hopeful that if you use SSL then HTTP/2 will always be faster, but it would be good to see some data on that.

[+] einrealist|11 years ago|reply
Interesting. If the tiles are cached by the browser, HTTP/2 is slower than version 1. Only on the first (uncached) request was HTTP/2 tremendously faster than its predecessor.
[+] geoffreyvdb|11 years ago|reply
It seems to be offline now?
[+] mholt|11 years ago|reply
Seems to be having trouble with all the TLS handshakes from HN. A plaintext HTTP connection works for me (but then it's not HTTP/2).
[+] elchief|11 years ago|reply
It's not offline. There's just a long latency...
[+] dsiegel2275|11 years ago|reply
Wouldn't a more fair comparison involve domain sharding for the HTTP/1 impl?
[+] LaurentVB|11 years ago|reply
Not sure; isn't the whole point of pipelining to avoid these domain-sharding workarounds?

This test case shows that performance is better with the same code. The inverse test case would show that the code is more complex (because of sharding) for the same performance (actually not quite the same, because of the multiple handshakes).

[+] mohamedattahri|11 years ago|reply
Good point, but domain sharding is just a clever optimization (hack) to emulate some of the parallelism provided by HTTP/2 out of the box.
[+] nfriedly|11 years ago|reply
I wouldn't say so. Domain sharding is a hack, and not every website implements it, so I think this is a fair comparison.

Adding a third option with domain sharding might be reasonable, but I would expect that HTTP/2 would still have the best performance.

[+] invisible|11 years ago|reply
Yes and no. It would speed up the client receiving the images, but at the cost of flooding servers with connections (even worse with HTTPS). I assume, since this is on golang's site, that it's also trying to show off the difference between just using the http2 package vs. the http package.
[+] iotku|11 years ago|reply
Ran this a while ago on my satellite connection [1]; it's really exciting for users with high latency.

Satellite internet services have come a long way from what they used to be and offer much higher raw speeds than before, but of course you can only do so much about latency when the signal is traveling such a distance.

[1]: https://www.youtube.com/watch?v=Ut-8ieRg1yE

[+] hayksaakian|11 years ago|reply
A better idea for a demo:

Take the top 20 websites on the web, and get their static assets from archive.org.

Show me how much faster they load. I'm imagining sites like CNN or Yahoo, which serve many images on their home pages, could load faster.

How much faster?

[+] _lce0|11 years ago|reply
I wonder how well a reverse HTTP/2 proxy in front of an HTTP/1 server would behave.
[+] andrewstuart2|11 years ago|reply
It could be rather complex, but it would almost certainly still help. It could, for example, heuristically process the script tags as HTML passes back through it to the client, make optimistic requests to the HTTP/1 server (presumably at a much lower latency than the client's), then push those documents down to the client.

And of course, it could also cache that knowledge and do it all instantly.

[+] _lce0|11 years ago|reply
From here, with a 1-second delay, 3 attempts, averaged values, and an empty cache each time: 38.7s vs 58.3s.
[+] motoboi|11 years ago|reply
Could someone please explain what is going on in this demo?
[+] pfranz|11 years ago|reply
I'm guessing the submitter saw this talk at PyCon that used the page as illustration: Cory Benfield - Hyperactive: HTTP/2 and Python https://youtu.be/ACXVyvm5eTc

It's a new major version of HTTP. At the high level it's backwards compatible (status codes, URLs, etc. are the same), but the communication of those things changed. It's binary instead of plain text (a major point of contention); it's stateful instead of stateless (it can refer back to previous requests, which makes debugging harder when you jump into the middle of a communication); it can multiplex data (send multiple files concurrently -- this is where the demo shines); it adds a prioritization layer; and it compresses headers. I may have some of those details wrong -- all of that I learned from the talk.