I think it would be amazing for the CDNs, especially Amazon, to support HTTP2. But I've heard mostly silence from Amazon -- CloudFront doesn't even support SPDY, which I thought would have been useful already at this point.
From CloudFlare's perspective, we currently support SPDY for all customers and will support HTTP/2 once NGINX support for it becomes available.
We are committed to supporting new protocols for all customers. We've rolled out SPDY, IPv6, HSTS, and HTTPS (free certs) for all, are close to adding DNSSEC, and will add HTTP/2. We're not waiting for these things to gain traction.
To give you an idea of what we are doing and the impact take a look at this chart of SPDY deployment.
Interestingly, Amazon was one of the first non-Google companies to use SPDY. The Silk browser, first shipped in fall of 2011, includes an "acceleration feature": Amazon proxies your traffic and does a bunch of on-the-fly front-end optimizations (gzipping everything, lossless image optimization, a giant shared caching proxy; basically a subset of what mod_pagespeed can do). SPDY (and now presumably HTTP/2) is used between the tablet and Amazon's data centers.
How would a CDN take advantage of the features of HTTP/2? Will they be able to use server push? Or will using HTTP/2 just create more overhead because clients will connect, upgrade, request, get the file, close. Rinse, wash, repeat?
> I think it would be amazing for the CDNs, especially Amazon, to support HTTP2.
The problem is that HTTP2 is basically a competitor, since one of the big value propositions of Cloudfront is that they fix a lot of the broken parts of HTTP. E.g. they achieve big performance improvements by keeping open persistent connections between the CDN and the webserver, which is an issue that HTTP2 ameliorates with multiplexing (since you no longer need a new connection for each static asset).
It'd also be nice if Microsoft shipped an easy-to-install plugin for IIS. I know, I know, they have this ... design ... where that handling is done by HTTP.SYS in the kernel. But their technical decisions like that shouldn't impact how they deliver updates.
Also, a larger base of users with browsers that support HTTP2 will make it a higher priority for devs to modify their servers to take advantage of it.
I'm reasonably up-to-date on HTTP/2, but there are a couple issues with request multiplexing and server push that I'm confused about:
- When the browser requests a page, and the server wants to push the CSS, JS, etc. along with it, what happens if the CSS/JS is cached already on the client?
- Additionally, how does a server like nginx know what to "push"? Are servers expected to parse the script and link tags from the outgoing html? Or is this controlled by the application (and if so, how)?
- The client can stop an incoming push (send reset on the pushed stream). Due to latency it might waste some bandwidth, but it's deemed to be an insignificant problem, since otherwise nothing else would be using the connection anyway.
- It's independent of the protocol. Each server can invent its own method. Personally I'm hoping servers/proxies will "upgrade" some HTTP/1.1 Link header to PUSH, e.g.:
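A minimal sketch of that idea, purely hypothetical: a backend emits an HTTP/1.1 `Link` header, and an HTTP/2-aware proxy turns each `rel=preload` target into a push. The parsing below is my own illustration, not any server's actual implementation.

```python
import re

def push_targets(link_header):
    """Extract target URLs from an HTTP/1.1 Link header value that a
    hypothetical HTTP/2 proxy could upgrade into server pushes.

    Only rel=preload entries are treated as push candidates. Splitting
    on commas is a simplification: it breaks on URLs containing commas.
    """
    targets = []
    for entry in link_header.split(","):
        match = re.match(r'\s*<([^>]+)>(.*)', entry)
        if match and "rel=preload" in match.group(2).replace('"', ""):
            targets.append(match.group(1))
    return targets

# e.g. a backend response carrying:
#   Link: </app.css>; rel=preload; as=style, </app.js>; rel=preload; as=script
print(push_targets("</app.css>; rel=preload; as=style, "
                   "</app.js>; rel=preload; as=script"))
# → ['/app.css', '/app.js']
```

The proxy would then open a PUSH_PROMISE stream for each target, and a client with those assets already cached could cancel the stream with RST_STREAM, as described above.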
As I've heard it explained, the server will push what it thinks is relevant [1] and if the resource is already cached in the client, the client will ask the server to stop the push.
Maybe the browser can send a header "Cache-Control: allowpush" that tells the server that the site is not in cache. The server can then push everything. I can't see how server push is useful without client cooperation, unless the resource changes dynamically.
I bet HTTP/1.0 is going to remain around for a long time too - HTTP is being used for far more purposes than to serve webpages, and especially with things like industrial equipment control/status there is little need to upgrade (and risk introducing new bugs) an existing working implementation.
Similar story on the .NET side: there's a UserVoice item for "Add support for ALPN to System.Net.Security.SslStream". It's on Page 12, just behind "Improve UI for 2015 Microsoft Test Manager Client User Experience".
Well, most of these projects are real open source projects, in which resources are somewhat scarce, so new features and additions need to be prioritized. There is still some question regarding just how "good" HTTP/2 really is. The complexity of HTTP/2, as well as the fact that it really doesn't "fix" a lot of the problems in HTTP/1.1, means that its universal applicability is still very much in question.
Even with buildpacks, there's essentially no way to use HTTP/2 or SPDY on Heroku until their HTTP router and load balancer stack supports it. Buildpacks can't really change that.
I am looking forward to wide-scale HTTP/2 adoption. It means you can take something simple like gRPC (http://www.grpc.io/) and suddenly scale it with commodity things like an HTTP load balancer.
No, you don't need an HTTP request to upgrade: it's done as part of the TLS negotiation with ALPN. No extra round trips, and TLS 1.3 will probably go even faster.
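As a rough illustration of the client side, this is how a client can advertise protocols via ALPN with Python's standard ssl module (a sketch; the hostname and protocol list are placeholders):

```python
import ssl

# ALPN needs a reasonably modern OpenSSL; Python exposes support via ssl.HAS_ALPN.
assert ssl.HAS_ALPN

ctx = ssl.create_default_context()
# Offer h2 first, falling back to HTTP/1.1. The server's choice comes
# back inside the same TLS handshake, so there is no extra round trip.
ctx.set_alpn_protocols(["h2", "http/1.1"])

# After connecting, the negotiated protocol is available on the socket:
#   with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#       tls.selected_alpn_protocol()  # "h2" if the server agreed
```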
If you're not using TLS, despite things like China's QUANTUM-style attack on Baidu traffic used against GitHub, I don't know what to say to you. Most browsers have already chosen to refuse to speak HTTP/2 over cleartext, because using cleartext in 2015 is a bad idea in almost any scenario.
bhouston (11 years ago):
Details: http://nginx.com/blog/how-nginx-plans-to-support-http2/
jgrahamc (11 years ago):
http://w3techs.com/technologies/details/ce-spdy/all/all
There's a sudden increase in sites using SPDY (a doubling). That's when CloudFlare gave every single customer free HTTPS and SPDY.
maggit (11 years ago):
The relevant mechanism seems to be described here: https://http2.github.io/http2-spec/#PushResponses
[1]: By some criteria. I don't have details here.
timme (11 years ago):
In future posts of a similar nature, I suggest clarifying visualizations with labels or moving the legend much closer to the chart.
needusername (11 years ago):
OTOH I don't expect enterprise snake oil to support HTTP/2 anytime soon.
[1] http://lists.jboss.org/pipermail/wildfly-dev/2015-January/00... [2] http://eclipse.org/jetty/documentation/current/alpn-chapter....
strommen (11 years ago):
http://visualstudio.uservoice.com/forums/121579-visual-studi...
guilt (11 years ago):
http://blog.geog.co/post/111535045146/our-thoughts-on-http2
Also, is that article from the author of cURL? We <3 cURL!
acdha (11 years ago):
https://http2.github.io/faq/#can-i-implement-http2-without-i...
This bit is also mostly wrong:
“However, the problems with HTTP2 are not because of HTTP2 itself, but because of the heavier costs of provisioning and maintaining healthy infrastructure that can afford to keep stateful long TCP/IP sessions by themselves.”
That's already been the case with HTTP/1 keep-alives, and even more so with WebSockets; unlike in the late 90s, it's just not an issue for the vast majority of services.
That said, HTTP/2 also has no requirement that you keep a connection open after you're done with the request: the specification clearly states that either end can cleanly close the connection at any point. A browser might choose to implement something similar to the traditional keep-alive timer, but there's no reason why you can't make a single request or close the connection immediately after fetching a single round of resources. The only difference is that this process is both faster and more reliable than it was with HTTP/1 keep-alives and pipelining.
[+] [-] bagder|11 years ago|reply