vasilvv | 1 year ago
> In addition to the low roundtrip time, the connections between your load balancer and application server likely have a very long lifetime, hence don’t suffer from TCP slow start as much, and that’s assuming your operating system hasn’t been tuned to disable slow start entirely, which is very common on servers.
A single HTTP/1.1 connection can process only one request at a time (unless you attempt HTTP pipelining), so with N persistent TCP connections to the backend you can handle at most N concurrent requests. Since all of those connections are long-lived and sending at the same time, making N very large eventually runs into TCP congestion-control convergence issues.
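The one-request-at-a-time constraint is visible even in a standard HTTP client: Python's `http.client` won't let you send a second request on a connection until the previous response has been fully read. A minimal sketch against a throwaway local server (the handler and port are hypothetical, just for illustration):

```python
import http.client
import http.server
import threading

# Hypothetical local server, only so the example is self-contained.
class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep the connection alive between requests

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One persistent HTTP/1.1 connection: requests must be issued serially.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
statuses = []
for _ in range(3):
    conn.request("GET", "/")   # the next request cannot start until...
    resp = conn.getresponse()  # ...this response is read to completion
    statuses.append(resp.status)
    resp.read()
conn.close()
server.shutdown()
print(statuses)  # three serial round trips on a single connection
```

Calling `conn.request()` again before reading the previous response raises `CannotSendRequest`, which is exactly why N-way concurrency over HTTP/1.1 requires N connections.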
Also, I don't understand why the author believes HTTP/2 is less debuggable than HTTP/1; curl and Wireshark work equally well with both.
dgoldstein0 | 1 year ago
That said, I still disagree with the article's conclusion: more connections mean more memory, so even within the same DC there should be benefits to HTTP/2. And if the app server supports async processing, there's value in hitting it with concurrent requests to make the most of its hardware; HTTP/1.1 head-of-line blocking destroys a lot of possible perf gains when response time is variable.
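The head-of-line-blocking cost with variable response times is easy to see in a back-of-envelope model (the service times below are made-up numbers, not measurements):

```python
# Illustrative backend service times in milliseconds, one slow outlier.
service_ms = [5, 100, 5, 5]

# HTTP/1.1, single persistent connection: responses are delivered in
# order, so each request also waits for everything queued ahead of it.
serial_completion = []
t = 0
for s in service_ms:
    t += s
    serial_completion.append(t)

# Idealized HTTP/2 multiplexing: streams are independent, so each
# request finishes after just its own service time.
mux_completion = list(service_ms)

print(serial_completion)  # [5, 105, 110, 115] -- fast requests stuck behind the slow one
print(mux_completion)     # [5, 100, 5, 5]
```

In the serial case the two 5 ms requests behind the 100 ms one complete at 110 ms and 115 ms, a ~20x latency inflation caused purely by ordering, which is the effect being described.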
I suppose I haven't done a true bake-off here, though, so it's possible the effect of HTTP/2 in the data center is more marginal than I'm imagining.
hansvm | 1 year ago
littlecranky67 | 1 year ago