Real-world HTTP/2: 400GB of images per day

273 points | jhealy | 9 years ago | 99designs.com.au | reply

70 comments

[+] rgbrenner|9 years ago|reply
Founder of NuevoCloud here. If I read this right, you guys used Cloudflare for http 2. So let me ask you this, when you did your comparison, were all of the images cached (ie: x-cache: hit) at the edge?

The reason I ask is that Cloudflare, last I checked, still hadn't implemented the client side of http2. So when a file is not cached, it does this:

client <--http2--> edge node <--http 1.1--> origin server.

Http2 is only used for the short hop between the client and edge node.. then the edge node uses http 1.1 for the connection to the origin server, which may be thousands of miles away.

In other words, in your test, depending on the client location and the origin server location.. your test may have used http 1.1 for the majority of the distance.

If you guys want to rerun this test on our network, we use http2 everywhere... your test would look like this on our network:

client <--http2--> edge node (closest to client) <--http2--> edge node (closest to server) <--http2--> origin server.

So even if your origin server doesn't support http2, it'll only use http 1.1 over the short hop between your server and the closest edge node.

You're welcome to email me if you want to discuss details you don't want to post here.

Edit: I should also mention that we use multiple http2 connections between our edge nodes and between the edge node and the origin server, removing that bottleneck. So only the client <--> edge node hop is a single http2 connection.

[+] xzyfer|9 years ago|reply
To the best of my knowledge you are correct about how CloudFlare works. For context, this data was collected over a period of about a month on real production pages with significant traffic.

The edges were well and truly primed.

[+] angry-hacker|9 years ago|reply
So does it matter to Cloudflare, or affect my site's performance, if I use http2 on the backend while behind Cloudflare?
[+] imaginenore|9 years ago|reply
But origin hits are relatively rare, so it doesn't matter that much that it's not http2.
[+] dpc_pw|9 years ago|reply
I did not do any real tests and I might be completely wrong, but it seems to me that http2 is going to perform poorly over wireless links like 3g.

With http1 one had N tcp connections. Given the way tcp slowly ramps up the bandwidth used and rapidly cuts it when a packet is lost, even if a packet was dropped (which will happen quite a lot on 3g), the other tcp streams were not delayed or blocked, and could even use the leftover bandwidth yielded by the stream that lost the packet.

With http2, however, there's one tcp connection, so dropped packets will cause under-utilization of the bandwidth. On top of that, dropped packets will cause all frames after them to be delayed in the kernel receive buffer until the dropped packet is retransmitted, while in the http1 case they would be available at the app level right away.

HTTP2 being implemented on top of TCP always seemed like a weird choice. It should have been UDP, IMO. That's why network accelerators like PacketZoom make so much sense. Note: I work at PacketZoom, I did not do any in-depth research on HTTP2, and this is my opinion, not necessarily the company's.
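
The in-order delivery argument above can be sketched with a toy model (illustrative Python, not a real network simulator): on one multiplexed connection a single retransmission stalls every later frame, while independent connections delay only the stream that lost the packet.

```python
RETRANSMIT_DELAY = 10  # ticks needed to recover one lost packet

def delivery_times(frames, lost, multiplexed):
    """Tick at which each frame becomes readable at the app level.

    frames: number of frames, sent one per tick.
    lost: indices of frames that are dropped once and retransmitted.
    multiplexed: True = one TCP connection (in-order across all frames),
                 False = each frame travels on its own connection.
    """
    times = {}
    in_order_cursor = 0  # earliest tick the in-order stream can advance to
    for i in range(frames):
        arrive = i + (RETRANSMIT_DELAY if i in lost else 0)
        if multiplexed:
            # A frame is readable only once every earlier frame has arrived.
            in_order_cursor = max(in_order_cursor, arrive)
            times[i] = in_order_cursor
        else:
            times[i] = arrive
    return times

one_conn = delivery_times(20, {3}, multiplexed=True)
many_conns = delivery_times(20, {3}, multiplexed=False)
# One loss at frame 3 stalls frames 4..12 on the single connection:
print(one_conn[4], many_conns[4])  # 13 4
```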

[+] Lukasa|9 years ago|reply
This is a real worry, but as with all things the actual behaviour of H2 on lossy networks is more complex than that.

TCP's congestion control algorithms don't work that well when you have many TCP streams competing for the same bandwidth. This is because, while packet loss is a property of the link rather than of an individual TCP stream, each packet loss event necessarily affects only one TCP stream. This means the others don't get true feedback about the lossiness of the connection, which can lead to a situation where all of your TCP streams try to over-ramp.

A single stream generally behaves better on such a link: it's getting a much more complete picture of the world.

However, your HOL blocking concern is real. This is why QUIC is being worked on. In QUIC, each HTTP request/response is streamed independently over UDP, which gets the behaviour you're talking about here, while also maintaining an overall view of packet loss for rate limiting purposes.
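
A crude sketch of that over-ramping dynamic, assuming a deliberately simplified AIMD model in which each loss event backs off only the single biggest stream:

```python
def loss_events(n_streams, capacity=100, ticks=1000):
    """Count how often the shared link is overdriven.

    Simplified AIMD: every stream adds 1 to its window each tick;
    whenever the sum exceeds capacity, only the biggest stream halves,
    mirroring the point that a loss event punishes a single stream.
    """
    cwnd = [1.0] * n_streams
    losses = 0
    for _ in range(ticks):
        cwnd = [c + 1.0 for c in cwnd]   # additive increase, all streams
        while sum(cwnd) > capacity:      # link overdriven: a loss event
            biggest = max(range(n_streams), key=lambda i: cwnd[i])
            cwnd[biggest] /= 2           # multiplicative decrease, one stream
            losses += 1
    return losses

# Eight competing streams keep over-ramping the link; a single stream,
# seeing every loss itself, settles down with far fewer loss events.
print(loss_events(1), loss_events(8))
```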

[+] king_phil|9 years ago|reply
I worry about TCP window scaling (and full TCP windows) when only using one TCP connection. There is a good reason download managers use multiple connections to download one file: depending on the latency, the maximum transfer rate is capped because only so many TCP packets can be in flight simultaneously. I wonder if nobody ever thought about that... HTTP/1.x solved that (more by chance) with multiple connections...
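
The cap described here is the bandwidth-delay product; a quick back-of-the-envelope calculation (the window and RTT values are illustrative):

```python
def max_throughput_mbps(window_bytes, rtt_ms):
    """A single TCP connection can't exceed window / RTT, however fat the pipe."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

# Classic 64 KiB window (no window scaling) on a 100 ms RTT link:
print(round(max_throughput_mbps(64 * 1024, 100), 2))  # 5.24 Mbit/s
```

Window scaling (RFC 1323) raises that ceiling; multiple connections sidestep it by multiplying the effective window.
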
[+] amock|9 years ago|reply
Have you looked at QUIC? It seems like it addresses the problems with HTTP2 over TCP as well as providing some additional benefits, like speeding up secure connection establishment.
[+] xzyfer|9 years ago|reply
Hey all, I'm the author of that blog post. I'll be floating around for a while, happy to answer any questions.
[+] xzyfer|9 years ago|reply
I'm out y'all. Thanks for all the support. Feel free to follow up on Twitter at @xzyfer
[+] krschultz|9 years ago|reply
Well done, a great combination of clear graphs + theories.
[+] wongarsu|9 years ago|reply
I don't think the server is in charge of prioritisation here. The server can do it, but there is no reason to push this responsibility onto the server when the browser can do it much better (for example, the server can't know what's in the viewport).

I expect this will be quickly sorted out by more mature HTTP/2 implementations in browsers. Downloading every image at once is obviously a bad idea, and I expect such naive behaviour will soon be replaced by decent heuristics (even just downloading eight resources at once should be better in nearly all cases)

[+] foota|9 years ago|reply
I think the real solution here is for the browser to be able to communicate some sort of priority to the server, without having to download a limited number of files at once.
[+] joobus|9 years ago|reply
One way to "solve" the time to visual completion would be to make all the images, but especially the larger images, progressive scan. For very large images, the difference in visual quality between 50% downloaded and 100% downloaded on most devices isn't noticeable, so the page would appear complete in half the time.
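
For what it's worth, a progressive JPEG is distinguishable from a baseline one by its start-of-frame marker, which a few lines can check (the stub bytes below are fabricated, just enough to exercise the function):

```python
def is_progressive_jpeg(data: bytes) -> bool:
    """Baseline JPEGs carry an SOF0 marker (0xFFC0), progressive ones SOF2
    (0xFFC2). Entropy-coded data byte-stuffs 0xFF, so these two-byte
    sequences only ever appear as real markers."""
    if not data.startswith(b"\xff\xd8"):  # SOI marker: not a JPEG at all
        raise ValueError("not a JPEG")
    return b"\xff\xc2" in data

baseline_stub = b"\xff\xd8\xff\xe0JFIF\xff\xc0\x00\x11"
progressive_stub = b"\xff\xd8\xff\xe0JFIF\xff\xc2\x00\x11"
print(is_progressive_jpeg(baseline_stub), is_progressive_jpeg(progressive_stub))  # False True
```

Producing one is a re-encode away, e.g. Pillow's `img.save(path, "JPEG", progressive=True)` or ImageMagick's `-interlace JPEG`.
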
[+] xzyfer|9 years ago|reply
Totally. There are a bunch of ways to address the performance issue. As I alluded to at the end of the post, there are serious technology considerations when preprocessing so much image data.

We're currently looking at whether we can use IntersectionObserver for efficient lazy loading of images before they enter the viewport.

[+] MichaelGG|9 years ago|reply
If there's a way to tell it not to render until x% downloaded, sure. Otherwise slower connections see the low-quality versions for a while and it can be disconcerting, either to some users or some PMs.
[+] b34r|9 years ago|reply
Doesn't that increase the file size though? They're looking at 3G load speeds, so any increase in file size is probably unwelcome.
[+] ashmud|9 years ago|reply
OTOH, progressive JPEGs tend to require much more memory to decode. I do not have specific numbers to cite. Only going off of anecdotal usage of image programs over the years (e.g., Java photo uploaders that choked on progressive JPEGs).
[+] mkj|9 years ago|reply
This seems to bode badly for CDNs versus own tuned servers, unless there's a way for origin websites to provide hints on response ordering?
[+] xzyfer|9 years ago|reply
There are discussions happening on how browsers can allow authors to provide resource prioritisation hints. I'm curious to see where it goes.

We'd ideally like to be able to say "prioritise the 10 images in the viewport". You can hack it together relatively efficiently using IntersectionObserver now, but support isn't great.

[+] toast0|9 years ago|reply
CDNs are still going to have lower latency and higher bandwidth, and likely more ability to have long lived connections. Probably whatever mechanism develops to facilitate http/2 server pushed resources through a CDN will also include prioritization hints.
[+] hendry|9 years ago|reply
damn, CloudFront already doesn't expose many tunables, so yeah, this isn't going to work.

Starting to wish the Appcache manifest had actually been made to work and could be used as a queue somehow to prioritise important assets on a webpage.

[+] cagenut|9 years ago|reply
Did I read this right that http1 was with cdn A (unnamed?) and http2 was with cdn B (cloudflare)?

If so, you really can't draw any conclusions about the protocol difference when the pop locations, network designs, hardware and software configurations could easily have made the kinds of differences you're seeing.

[+] xzyfer|9 years ago|reply
You read it correctly.

By not moving our render blocking assets like CSS, JS and fonts over to http/2, we rule out performance changes due to improvements in head of line blocking.

Our images were always on a separate hostname, so the DNS lookup overhead is the same. We also did some initial benchmarking and found the new CDN to be more efficient than the old one.

[+] thinkMOAR|9 years ago|reply
Comparing two protocols using different providers, isn't that a bit like comparing apples and oranges? And I have a doubt, which could be a bad assumption: is it on hardware you control or own, what exactly runs on it, and which other parties potentially use it?
[+] jjcm|9 years ago|reply
Great writeup, and interesting seeing where http2 performed worse. Definitely going to refer to this as I update my backend to http2.
[+] xzyfer|9 years ago|reply
Thanks mate, really appreciate it.
[+] runeks|9 years ago|reply
I'm really looking forward to see how much HTTP/2 will increase performance for my Bitcoin payment channel server: https://github.com/runeksvendsen/restful-payment-channel-ser...

Just now I finished separating the front-end and back-end - by a RESTful protocol - and this roughly halved performance compared to using a native library (from ~2000 payments/second on my laptop to ~1000). I expect HTTP/2 to make a greater percentage-wise difference here, although I admit I really have no idea how much, say, ZeroMQ would have reduced performance, compared to cutting it in half using HTTP/1.x.

I expect HTTP/2 to make a much greater difference in high performance applications, where overhead becomes more important, which static file serving doesn't really hit. So I think RESTful backend servers will see a much more noticeable performance increase, especially since, if you use Chrome at least, as an end-user you already get many of the HTTP/2 latency benefits through SPDY.

[+] arca_vorago|9 years ago|reply
Aren't there more performant options than restful these days? Any particular reason why you didn't choose those?

Would that be a good case for websockets?

[+] manigandham|9 years ago|reply
Some solutions:

- Serve less data. The best speedup is when there's no more data to download and if the throughput for clients is maxed out, then decreasing page weight helps.

- Use async bootstrap JS code to load in other scripts once images are done loading or other page load events have fired.

- Load fewer images in parallel; use JS to load one row of images at a time.

- Use HTTP/2 push (which CloudFlare offers) to push some of the images/assets with any other response. Push images with the original HTML and you'll start getting the images to the browser before it even parses the HTML and starts any (prioritized) requests.
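
On the push point: CDN-edge push (including CloudFlare's) was typically driven by `Link: rel=preload` headers on the HTML response; a sketch of building that header value (the paths are illustrative):

```python
def preload_link_header(paths):
    """Build a Link header value that a push-capable edge can act on."""
    return ", ".join(f"<{p}>; rel=preload; as=image" for p in paths)

# Hypothetical above-the-fold images to push alongside the HTML:
print(preload_link_header(["/img/hero-1.jpg", "/img/hero-2.jpg"]))
# </img/hero-1.jpg>; rel=preload; as=image, </img/hero-2.jpg>; rel=preload; as=image
```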

[+] dalore|9 years ago|reply
Wouldn't the standard solution of lazy loading images (and prioritizing critical css) help? Since they are now trying to load everything on a big page, they should only be trying to load everything above the fold.
[+] pedalpete|9 years ago|reply
We've recently moved to Google Cloud Storage from AWS because of http/2. We had a bottleneck of the browser waiting when serving multiple large files (8+ files, 10MB+ each).

I'm wondering if 99designs looked at any sort of domain sharding to get around the timing issues. If I understand correctly, wouldn't this get around the priority queue issue? Your js,fonts, etc. coming from a different address than your larger images, would create completely separate connections.

I'm not completely sure this would get around the issues mentioned, but I'm curious if it was looked at as a solution.

[+] xzyfer|9 years ago|reply
The priority queue isn't the issue. In fact the priority queue is what kept our first paint times from tanking, because browsers prioritised render blocking resources instead of images.

The issue was due to the variance in image size. An image that is significantly larger than the page average will load more slowly, since all images get an equal share of bandwidth (priority). Adding sharding wouldn't help, since the client only has a fixed amount of bandwidth to share and all images would still get the same share of it. Sharding could help if the bandwidth bottleneck was at the CDN, but that's rarely going to be the case.
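
The equal-share arithmetic can be sketched as a toy model (sizes and bandwidth units are illustrative): the oversized image crawls while the small ones are in flight, then finally gets the whole pipe.

```python
def finish_times(sizes, bandwidth=1.0):
    """Fair-share multiplexing: every unfinished image gets an equal slice
    of the client's bandwidth, as equal-priority h2 streams do."""
    remaining = dict(enumerate(sizes))
    done, t = {}, 0.0
    while remaining:
        share = bandwidth / len(remaining)
        step = min(remaining.values()) / share  # until the next image finishes
        t += step
        for i in list(remaining):
            remaining[i] -= share * step
            if remaining[i] <= 1e-9:
                done[i] = t
                del remaining[i]
    return done

fair = finish_times([10, 10, 10, 100])  # one image 10x the page average
# Alone (or fully prioritised) the big image would take 100 time units;
# under equal sharing it finishes at 130, and sharding changes nothing
# because the client's total bandwidth is still 1.0.
print(fair)  # {0: 40.0, 1: 40.0, 2: 40.0, 3: 130.0}
```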

[+] dalore|9 years ago|reply
Domain sharding is an anti-pattern for http2. The reason being that for another domain the browser needs to make an expensive TLS handshake, while with http2 on the same domain it doesn't. We've done tests and even moved away from having a static domain.
[+] diegorbaquero|9 years ago|reply
Excellent and in-depth article. Thank you for sharing!

Hopefully we'll see a follow up with future changes and tweaks both from webservers and browsers.

[+] xzyfer|9 years ago|reply
Thanks mate, glad you enjoyed it.
[+] muteor|9 years ago|reply
I thought that HTTP/2 didn't fix head-of-line blocking and this was why QUIC (https://www.chromium.org/quic) existed.

From the project page:

Key features of QUIC over existing TCP+TLS+HTTP2 include

* Dramatically reduced connection establishment time

* Improved congestion control

* Multiplexing without head of line blocking

* Forward error correction

* Connection migration

[+] Matthias247|9 years ago|reply
IMHO HTTP/2 solves HOL blocking only partly. It will allow other streams to proceed if one stream is blocked due to flow control (the receiver doesn't read from the stream). E.g. if you have multiple parallel downloads over a single HTTP/2 connection, one blocked/paused stream won't block the others.

However, it has no way to let individual streams proceed when packets are lost that hold data for only a single stream.
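
That flow-control behaviour can be put in a toy model (stream ids, sizes and window credits are illustrative): the stalled stream's frames park, while the other stream keeps flowing on the same connection.

```python
def deliver(frames, windows):
    """frames: list of (stream_id, size); windows: stream_id -> flow credit.
    A frame is delivered only if its stream still has window credit;
    otherwise it parks without holding up other streams."""
    delivered, parked = [], []
    for sid, size in frames:
        if windows.get(sid, 0) >= size:
            windows[sid] -= size
            delivered.append((sid, size))
        else:
            parked.append((sid, size))  # this stream is stalled, not the connection
    return delivered, parked

# Stream 1's receiver stopped reading (tiny window); stream 2 is healthy.
delivered, parked = deliver([(1, 10), (2, 10), (1, 10), (2, 10)], {1: 5, 2: 100})
print(delivered, parked)  # [(2, 10), (2, 10)] [(1, 10), (1, 10)]
```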

[+] noahcollins|9 years ago|reply
Thanks for posting your findings - very useful data. It would be interesting to see the Webpagetest waterfalls in greater detail if you're able to share that.

You planning to use your resource hints to enable server push at CDN edge?

[+] xzyfer|9 years ago|reply
Server push at the edge is a problem atm. Current push semantics require that the HTML document say which resources to push. That's an issue if you're serving assets off a CDN domain.

Asset domains make less sense with h2 from a performance perspective, but there are still security concerns that need to be addressed.

[+] schallertd|9 years ago|reply
And still no OpenSSL 1.0.2 nor ALPN on most distros such as Debian Jessie... kinda sucks
[+] secure|9 years ago|reply
It’s available in jessie-backports since 2016-07-02, see https://packages.debian.org/jessie-backports/libssl-dev

Given that jessie is stable, OpenSSL will not be updated to a newer version, only security updates will be made available.

What I’m trying to say: you’re never going to get it on jessie, unless you enable backports, at which point you’ll have it readily available.

Hope that helps

[+] mmel|9 years ago|reply
This is petty and inconsequential, but I really wish they used a section header called "Conclusion" instead of "Take Aways".