No, http2 is not better. We actually did the tests. We're not quite Google-scale, but having to handle tens of thousands of requests per second put us in the 'high load' camp.
The hidden danger, mentioned in the article, is if the client sends a second request while the server closes an idle connection. Until HTTP/2, the client can't tell if the server closed the connection before or after it received the request. Many servers send a hint about the idle timeout, but few client libraries process it (that I've seen). The larger the latency between server and client, the bigger a deal this is.
This is always an issue: you send an HTTP POST request (even on http/0.9) and the connection closes before you see a response. Did the server receive it? You don't know.
Pipelining might amplify it, but it is always there, especially with unreliable mobile connections.
This is only partially true. http/1.1 has well-defined semantics for persistent connections. The server can send the header "Connection: close" to indicate to the client that it is closing the idle connection. All http/1.1 clients should respect that, since it's in the RFC.
The problem is many servers don't send this header when closing idle connections. nginx is a notorious example. But well behaving servers should be sending that header if they intend to close the connection after a request.
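A client that wants to reuse connections safely has to drain each response and watch for that header. Here is a minimal sketch using Python's stdlib (the throwaway localhost server is just scaffolding for the demo, not part of the technique):

```python
import http.client
import http.server
import threading

# Tiny HTTP/1.1 server that keeps connections alive by default.
class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables persistent connections

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# One client connection, reused across requests. A well-behaved client
# drains the body, then opens a fresh connection if the server signalled
# "Connection: close" instead of blindly reusing the dead socket.
conn = http.client.HTTPConnection("127.0.0.1", port)
for _ in range(2):
    conn.request("GET", "/")
    resp = conn.getresponse()
    data = resp.read()  # must drain before the connection can be reused
    if resp.headers.get("Connection", "").lower() == "close":
        conn.close()  # server is done with this socket
        conn = http.client.HTTPConnection("127.0.0.1", port)

conn.close()
server.shutdown()
```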
So I've been writing a web server recently (mostly for learning HTTP and web stuff again, as I've been out of that field for over a decade), and I've discovered that Firefox seems to be the only browser I've tried which still really utilises keep-alive fully and respects the headers: Safari sort of does, but only ever once, regardless of the header params, and Chrome never does, again regardless of what the Connection header says in the first response from the server.
These tickets talk about pipelining, not keep-alive. The Chrome link specifically points out why they decided to disable it. Chrome and Firefox still support regular keep-alive and do reuse connections on http/1.1.
According to https://tools.ietf.org/html/rfc2068#section-19.7.1.1 HTTP/1.1 defines the "Keep-Alive" header for use with keep-alive parameters but does not actually define any keep-alive parameters. I don't see a definition of this header in any of the RFCs that obsolete this one (just some mentions of issues with persistent connections and HTTP/1.0 servers, and the fact that Keep-Alive is a hop-by-hop header).
A common mistake I see when people use libcurl is to create a new handle for each request, which pretty much guarantees no connection reuse. Reuse your handles for profit.
One more reason to do so: if you use client side TLS auth with a cert (or moreover a slow smartcard,) reauthenticating every connection will grind your performance to pieces.
What's the advice these days for when things are behind a reverse proxy or a load balancer, or both? I ran into issues where load ends up balanced unevenly when several of these are combined.
Assuming the client and server HTTP implementations don't have any bugs, and there aren't any network devices (proxies, etc.) in between, sure. Are modern clients able to start at HTTP/2, and gracefully degrade through HTTP/1.1 with keep-alive down to HTTP/1.0 if necessary? That would be really cool if so.
But the fact that the underlying HTTP connection is kept alive by default doesn't necessarily mean that the client is going to actually re-use that connection for multiple HTTP requests. And, in fact, in Node.js the connection is not reused by default.
Or better yet, use gzip and inline all images base64-encoded. The file size is very similar to the raw data, and the number of requests, with their associated HTTP headers, is reduced.
That has some major downsides. No caching, so any shared images need to be re-downloaded on every page. And the size of base64 is about 40% larger in my experience.
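For what it's worth, the nominal expansion of base64 is exactly 4/3 (~33%) before compression; gzip claws some of that back since the base64 alphabet only has 64 symbols. A quick measurement, using random bytes as a stand-in for image data:

```python
import base64
import os

raw = os.urandom(30_000)            # stand-in for image bytes
encoded = base64.b64encode(raw)     # what a data: URI would carry

overhead = len(encoded) / len(raw)  # base64 maps 3 bytes to 4 chars
print(len(raw), len(encoded), f"{overhead:.0%}")  # 30000 40000 133%
```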
Request count is really not a big deal with HTTP/2 multiplexing.
Don't do this. Browsers are very optimized for subrequests and especially parsing image data.
By forcing base64, you're eliminating all the caching and using much more CPU power to parse that back into a binary image. You're also making the page load slower, as the initial payload is bigger and image data has to be handled inline rather than asynchronously.
Might be a good idea for statically generated pages, or the statically generated parts of pages, but I’d be wary of adding any more load to dynamic pages.
Do any static site generators rewrite image links as data URIs?
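I don't know of one off-hand, but the transformation itself is small. A toy sketch of what such a plugin might do (the `inline_images` helper and the regex are illustrative, not any real generator's API):

```python
import base64
import mimetypes
import pathlib
import re
import tempfile

def inline_images(html: str, root: pathlib.Path) -> str:
    # Rewrite local src="..." references as data: URIs. The pattern
    # excludes ':' so absolute URLs and existing data: URIs are skipped.
    def repl(match: re.Match) -> str:
        src = match.group(1)
        mime = mimetypes.guess_type(src)[0] or "application/octet-stream"
        data = base64.b64encode((root / src).read_bytes()).decode("ascii")
        return f'src="data:{mime};base64,{data}"'
    return re.sub(r'src="([^":]+)"', repl, html)

# Demo against a throwaway file on disk.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "dot.gif").write_bytes(b"GIF89a")
out = inline_images('<img src="dot.gif">', tmp)
```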
Unfortunately most people's primary computing devices are smartphones, and smartphone radios cannot keep a TCP connection open, or won't, because of power usage.
While this might be true for users connecting to services, there are also a lot of intra-service connections that can keep TCP connections open and greatly benefit from doing so. For example, a microservice architecture where you have APIs communicating with each other over HTTP.
Keeping a connection open does not require any action. Also, radios have nothing to do with connections. Connections are abstractions from a different layer. Radios are shut down much more frequently than you'd think, to save power.
I thought the primary use case for this is that you could load all of your website's resources on a single connection, so you can make 100 file requests at the start and it only makes one connection.
You send heartbeats! There might be a max connection time, but I haven't run into it; my connections being dropped through Amazon infrastructure were solved by sending a few bytes (': <3' or '<!-- <3 -->') every 5 seconds or so.
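For SSE-style streams, the heartbeat can be exactly that ': <3' comment frame, which clients ignore. A sketch, assuming a queue-fed generator (the `sse_frames` shape is made up for illustration, not a real framework API):

```python
import queue

HEARTBEAT = b": <3\n\n"  # an SSE comment line; clients ignore it

def sse_frames(events: queue.Queue, idle: float = 5.0):
    # Yield SSE frames; whenever no event arrives within `idle` seconds,
    # emit a heartbeat so proxies/load balancers don't kill the idle
    # connection. A None item ends the stream.
    while True:
        try:
            item = events.get(timeout=idle)
        except queue.Empty:
            yield HEARTBEAT
        else:
            if item is None:
                return
            yield f"data: {item}\n\n".encode()

# Demo: one queued event, then the end-of-stream sentinel.
q = queue.Queue()
q.put("hello")
q.put(None)
frames = list(sse_frames(q, idle=0.01))
```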
But stateful connection management comes with a cost, especially on TCP.
https://www.chromium.org/developers/design-documents/network...
shows they removed it in Chrome due to issues, and weirdly this Firefox ticket:
https://bugzilla.mozilla.org/show_bug.cgi?id=264354
(last updated 5 years ago) seems to show they're not going to enable keep-alive for HTTP 1.1, but Firefox is most definitely utilising it.
The "Keep-Alive" header was something tacked onto http/1.0 and doesn't really mean anything these days.
"Sending a 'Connection: keep-alive' will notify Node.js that the connection to the server should be persisted until the next request."
The article seems to confirm this behavior.
So clients have to account for non-RFC-compliant servers.
https://stackoverflow.com/questions/25372318/error-domain-ns...
MDN's documentation on this header references https://tools.ietf.org/id/draft-thomson-hybi-http-timeout-01... for the parameters, but this is an experimental draft that expired in 2012.
Which is to say, I can't really fault Safari for not respecting keep-alive parameters that never made it out of the experimental draft phase.
I've had Apache refuse new requests because old connections were holding slots.
The client would be in the middle of sending a new request, but the server would have already decided to close the connection, and the request would fail.
I believe this is a common problem, and yet the spec has nothing to address this obvious race condition.
Right?
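The usual client-side mitigation is to retry idempotent requests on a fresh connection. A sketch in Python (`send` is a hypothetical callable standing in for one request over a pooled connection, not a real library API):

```python
def retry_idempotent(send, retries: int = 1):
    # Retry a request whose connection died under us. Only safe for
    # idempotent methods (GET, HEAD, PUT, DELETE); never blindly retry
    # a POST, since the first attempt may already have been processed.
    for attempt in range(retries + 1):
        try:
            return send()
        except (ConnectionResetError, BrokenPipeError):
            if attempt == retries:
                raise  # still failing on a fresh connection: give up

# Simulate a pooled connection the server closed while we were writing:
# the first attempt dies, the retry on a fresh connection succeeds.
attempts = []
def fake_send():
    attempts.append(1)
    if len(attempts) == 1:
        raise ConnectionResetError("server closed idle connection")
    return "200 OK"

result = retry_idempotent(fake_send)
```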
https://fastmail.blog/2011/06/28/http-keep-alive-connection-...
The SSL connection init costs are real, although SSL session re-use can help there even without keep-alive.