
The state and rate of HTTP/2 adoption

87 points | kryptiskt | 11 years ago | daniel.haxx.se

52 comments

[+] bhouston|11 years ago|reply
I think the floodgates will open once there is an HTTP/2 module for nginx. We are still on SPDY/3 because of this.

Details: http://nginx.com/blog/how-nginx-plans-to-support-http2/

I think it would be amazing for the CDNs, especially Amazon, to support HTTP2. But I've heard mostly silence from Amazon -- Cloudfront doesn't even support SPDY, which I thought would have been useful already at this point.

[+] jgrahamc|11 years ago|reply
From CloudFlare's perspective, we currently support SPDY for all customers and will support HTTP/2 once NGINX with HTTP/2 support becomes available.

We are committed to supporting new protocols for all customers. We've rolled out SPDY, IPv6, HSTS, and HTTPS (free certs) for all, are close to adding DNSSEC, and will add HTTP/2. We're not waiting for these things to gain traction.

To give you an idea of what we are doing and the impact, take a look at this chart of SPDY deployment.

http://w3techs.com/technologies/details/ce-spdy/all/all

There's a sudden increase in sites (a doubling) using SPDY. That's when CloudFlare gave every single customer free HTTPS and SPDY.

[+] billyhoffman|11 years ago|reply
> But I've heard mostly silence from Amazon

Interestingly, Amazon was one of the first non-Google companies to use SPDY. The Silk browser, first shipped in fall of 2011, includes an "acceleration feature." With it, Amazon proxies your traffic and does a bunch of on-the-fly front-end optimizations (gzipping everything, lossless image optimization, a giant shared caching proxy; basically a subset of what mod_pagespeed can do). SPDY (and now presumably HTTP/2) is used between the tablet and Amazon's data centers.

[+] jimktrains2|11 years ago|reply
How would a CDN take advantage of the features of HTTP/2? Will they be able to use server push? Or will using HTTP/2 just create more overhead because clients will connect, upgrade, request, get the file, and close? Rinse, wash, repeat.
[+] Alex3917|11 years ago|reply
> I think it would be amazing for the CDNs, especially Amazon, to support HTTP2.

The problem is that HTTP2 is basically a competitor, since one of the big value propositions of Cloudfront is that they fix a lot of the broken parts of HTTP. E.g. they achieve big performance improvements by keeping open persistent connections between the CDN and the webserver, which is an issue that HTTP2 ameliorates with multiplexing (since you no longer need a new connection for each static asset).

[+] MichaelGG|11 years ago|reply
It'd also be nice if Microsoft shipped an easy-to-install plugin for IIS. I know, I know, they have this ... design ... where that handling is done by HTTP.SYS in the kernel. But their technical decisions like that shouldn't impact how they deliver updates.
[+] latro12|11 years ago|reply
Also, a larger base of users whose browsers support HTTP2 will make it a higher priority for devs to modify their servers to utilize HTTP2.
[+] strommen|11 years ago|reply
I'm reasonably up-to-date on HTTP/2, but there are a couple issues with request multiplexing and server push that I'm confused about:

- When the browser requests a page, and the server wants to push the CSS, JS, etc. along with it, what happens if the CSS/JS is cached already on the client?

- Additionally, how does a server like nginx know what to "push"? Are servers expected to parse the script and link tags from the outgoing html? Or is this controlled by the application (and if so, how)?

[+] pornel|11 years ago|reply
- The client can stop an incoming push (send reset on the pushed stream). Due to latency it might waste some bandwidth, but it's deemed to be an insignificant problem, since otherwise nothing else would be using the connection anyway.

- It's independent of the protocol. Each server can invent its own method. Personally I'm hoping servers/proxies will "upgrade" some HTTP/1.1 Link header to PUSH, e.g.:

    Link: </style.css>; rel=preload
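
A proxy following that convention might be sketched roughly like this (a Python sketch with a deliberately simplified Link-header grammar; the function name and header model are illustrative, and a real proxy would need a full parser for the header syntax):

```python
import re

def push_candidates(headers):
    """Collect URLs from Link headers whose rel includes 'preload'.

    Simplified sketch: assumes one <url>; params value per Link header
    and no quoted commas inside parameter values.
    """
    candidates = []
    for name, value in headers:
        if name.lower() != "link":
            continue
        m = re.match(r'\s*<([^>]+)>\s*;(.*)', value)
        if not m:
            continue
        url, params = m.group(1), m.group(2)
        # rel may be bare (rel=preload) or quoted (rel="preload nofollow")
        if re.search(r'rel\s*=\s*"?[^";]*\bpreload\b', params):
            candidates.append(url)
    return candidates
```

A server that upgrades these to PUSH_PROMISE frames would run this over the outgoing response headers and open one promised stream per candidate URL.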
[+] maggit|11 years ago|reply
As I've heard it explained, the server will push what it thinks is relevant [1] and if the resource is already cached in the client, the client will ask the server to stop the push.

The relevant mechanism seems to be described here: https://http2.github.io/http2-spec/#PushResponses

[1]: By some criteria. I don't have details here.
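
The client-side decision described above can be sketched as a small function (illustrative only; the cache model and frame representation are invented for this sketch, not how any real browser is built, though CANCEL as the RST_STREAM error code for declining a push does come from the HTTP/2 spec):

```python
CANCEL = 0x8  # HTTP/2 RST_STREAM error code a client uses to decline a push

def on_push_promise(promised_path, promised_stream_id, cache, frames_out):
    """On PUSH_PROMISE: reset the promised stream if the resource is
    already cached, otherwise let the pushed response stream in."""
    if promised_path in cache:
        frames_out.append(("RST_STREAM", promised_stream_id, CANCEL))
        return False  # push declined
    return True  # push accepted
```

As pornel notes above, the reset arrives a round trip late, so some pushed bytes may already be in flight and get wasted.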

[+] FooBarWidget|11 years ago|reply
Maybe the browser can send a header "Cache-Control: allowpush" that tells the server that the site is not in cache. The server can then push everything. I can't see how server push is useful without client cooperation, unless the resource changes dynamically.
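
The server side of that (entirely hypothetical) opt-in header could be as simple as this sketch; note that `allowpush` is the commenter's invented directive, not a real Cache-Control value:

```python
def resources_to_push(request_headers, page_assets):
    """Push the page's assets only when the hypothetical 'allowpush'
    Cache-Control directive signals an empty client cache."""
    cc = request_headers.get("cache-control", "")
    directives = {d.strip().lower() for d in cc.split(",")}
    if "allowpush" in directives:
        return list(page_assets)
    return []  # client has a cache; let it request what it's missing
```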
[+] userbinator|11 years ago|reply
I bet HTTP/1.0 is going to remain around for a long time too - HTTP is being used for far more purposes than to serve webpages, and especially with things like industrial equipment control/status there is little need to upgrade (and risk introducing new bugs) an existing working implementation.
[+] timme|11 years ago|reply
Good post, but that pie chart, man.

In future posts of similar nature I suggest clarifying visualizations with labels, or moving the legend much closer to the chart.

[+] needusername|11 years ago|reply
Java support for ALPN [1] requires replacing JDK classes [2]. That may be fixed in Java SE 9, which means EE 9 for the servlet API.

OTOH I don't expect enterprise snake oil to support HTTP/2 anytime soon.

[1] http://lists.jboss.org/pipermail/wildfly-dev/2015-January/00...

[2] http://eclipse.org/jetty/documentation/current/alpn-chapter....

[+] teamhappy|11 years ago|reply
Kinda sad to hear vendors say they'll support it once it gets traction. How is it supposed to get traction if it's not available to users?
[+] jimjag|11 years ago|reply
Well, most of these projects are real open source projects, in which resources are somewhat scarce, so new features and additions need to be prioritized. There is still some question about just how "good" http/2 really is. The complexity of http/2, as well as the fact that it really doesn't "fix" a lot of the problems in http/1.1, means that its universal applicability is still very much in question.
[+] tibbon|11 years ago|reply
I really wish Heroku would support http2/spdy easily without the use of clunky buildpacks that I never really trust fully in production.
[+] bgentry|11 years ago|reply
Even with buildpacks, there's essentially no way to use HTTP/2 or SPDY on Heroku until their HTTP router and load balancer stack supports it. Buildpacks can't really change that.
[+] vkjv|11 years ago|reply
I am looking forward to wide-scale HTTP/2 adoption. It means you can take something simple like gRPC (http://www.grpc.io/) and suddenly scale it with commodity things like an HTTP load balancer.
[+] guilt|11 years ago|reply
We've decided that HTTP/2 may not be a great fit in its present state, and here's why we thought so:

http://blog.geog.co/post/111535045146/our-thoughts-on-http2

Also, is that article by the author of cURL? We <3 cURL!

[+] AlyssaRowan|11 years ago|reply
No, you don't need an HTTP request to upgrade: it's done as part of the TLS negotiation with ALPN. No extra round trips, and TLS 1.3 will probably go even faster.

If you're not using TLS, despite things like the China QUANTUM attack on Baidu traffic against GitHub, I don't know what to say to you, except that most browsers have already chosen to refuse to speak HTTP/2 over cleartext, because using cleartext in 2015 is a bad idea in almost any scenario.
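
The ALPN negotiation the parent describes is visible in Python's standard `ssl` module: the client lists the protocols it speaks, the server picks one inside the TLS handshake itself, and no extra round trip is spent (a minimal client-side sketch; the connection step is only indicated in a comment):

```python
import ssl

# Offer HTTP/2 ("h2") with HTTP/1.1 as fallback. The server's ALPN
# choice is made during the TLS handshake, not via an HTTP Upgrade.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])

# After e.g. conn = ctx.wrap_socket(sock, server_hostname=host),
# conn.selected_alpn_protocol() returns "h2" if the server agreed,
# "http/1.1" on fallback, or None if the server ignores ALPN.
```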

[+] acdha|11 years ago|reply
The bit about HTTP 1 is only true without TLS:

https://http2.github.io/faq/#can-i-implement-http2-without-i...

This bit is also mostly wrong:

“However, the problems with HTTP2 are not because of HTTP2 itself, but because of the heavier costs of provisioning and maintaining healthy infrastructure that can afford to keep stateful long TCP/IP sessions by themselves.”

That's already been the case with HTTP 1 keep-alives, and even more so with web sockets; unlike in the late 90s, it's just not an issue for the vast majority of services.

That said, HTTP/2 also has no requirement that you keep a connection open after you're done with the request; the specification clearly states that either end can cleanly close the connection at any point. A browser might choose to implement something similar to the traditional keep-alive timer, but there's no reason why you can't make a single request or close the connection immediately after fetching a single round of resources. The only difference is that this process is both faster and more reliable than it was with HTTP 1 keep-alives and pipelining.

[+] bagder|11 years ago|reply
Yes it is indeed by the author of curl! (me ;-)