top | item 43199739

vasilvv | 1 year ago

The article seems to make an assumption that the application backend is in the same datacenter as the load balancer, which is not necessarily true: people often put their load balancers at the network edge (which helps reduce latency when the response is cached), or just outsource those to a CDN vendor.

> In addition to the low roundtrip time, the connections between your load balancer and application server likely have a very long lifetime, hence don’t suffer from TCP slow start as much, and that’s assuming your operating system hasn’t been tuned to disable slow start entirely, which is very common on servers.

A single HTTP/1.1 connection can only process one request at a time (unless you attempt HTTP pipelining), so if you have N persistent TCP connections to the backend, you can only handle N concurrent requests. Since all of those connections are long-lived and are sending at the same time, if you make N very large, you will eventually run into TCP congestion control convergence issues.
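A toy sketch of that constraint (all numbers made up; Python threads stand in for pooled HTTP/1.1 connections, and the semaphore stands in for the pool): since each connection carries one in-flight request, observed concurrency can never exceed the pool size N.

```python
import threading
import time

N = 4   # hypothetical pool of persistent HTTP/1.1 connections
pool = threading.Semaphore(N)

lock = threading.Lock()
active = 0
peak = 0

def handle(_request_id: int) -> None:
    """One backend request; it owns a whole connection for its duration."""
    global active, peak
    with pool:                  # wait for a free connection
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)        # stand-in for backend processing time
        with lock:
            active -= 1

threads = [threading.Thread(target=handle, args=(i,)) for i in range(16)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # observed concurrency; never exceeds N
```

With 16 requests and a pool of 4, the other 12 requests queue, which is the head-of-line effect HTTP/2 multiplexing is meant to avoid.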

Also, I don't understand why the author believes HTTP/2 is less debuggable than HTTP/1; curl and Wireshark work equally well with both.

dgoldstein0 | 1 year ago

I think the more common architecture is for the edge network to terminate SSL, and then forward to a load balancer that actually sits in the final data center? In that case you can use HTTP/2 or 3 on both of those hops without requiring it on the application server.

That said, I still disagree with the article's conclusion: more connections mean more memory, so even within the same DC there should be benefits to HTTP/2. And if the app server supports async processing, there's value in hitting it with concurrent requests to make the most of its hardware, and HTTP/1.1 head-of-line blocking really destroys a lot of possible perf gains when response time is variable.
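A toy model of that last point (made-up millisecond numbers; the multiplexed case is idealized and ignores bandwidth sharing and TCP-level head-of-line blocking in h2):

```python
# One slow response queued ahead of three fast ones.
service_ms = [50, 5, 5, 5]

# HTTP/1.1: responses on one connection are serialized, so each request
# also waits for every response ahead of it (head-of-line blocking).
completion_h1 = []
elapsed = 0
for t in service_ms:
    elapsed += t
    completion_h1.append(elapsed)

# Idealized multiplexing: each response finishes after its own service time.
completion_mux = list(service_ms)

print(completion_h1)   # [50, 55, 60, 65]
print(completion_mux)  # [50, 5, 5, 5]
```

The fast requests go from ~55-65ms to ~5ms; the more variable the service times, the bigger that gap gets.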

I suppose I haven't done a true bake-off here, though - so it's possible the effect of HTTP/2 in the data center is a bit more marginal than I'm imagining.

hansvm | 1 year ago

HTTP/2 isn't free, though. You don't have as many connections, but you do have to track each stream of data, making RAM a wash if TLS is nonexistent or terminated outside your application. Moreover, on top of the branching the kernel does to route traffic to the right connection, you need an extra layer of branching in your application code, and you have to apply it per frame since request fragments can be interleaved.
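A minimal sketch of that per-frame branching (not real HTTP/2 framing, just the shape of the problem): frames from different streams arrive interleaved on one connection, and the receiver must route every frame by its stream id.

```python
# Interleaved (stream_id, payload) frames on a single connection.
frames = [
    (1, b"GET /a"),
    (3, b"GET /b"),
    (1, b" part2"),
    (3, b" part2"),
]

# The extra demultiplexing step HTTP/1.1 doesn't need: one branch per
# frame to reassemble each stream's bytes.
streams = {}
for stream_id, payload in frames:
    streams[stream_id] = streams.get(stream_id, b"") + payload

print(streams)  # {1: b'GET /a part2', 3: b'GET /b part2'}
```

In a real implementation this routing also interacts with per-stream flow control and priorities, which is where the extra state and branching costs come from.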

littlecranky67 | 1 year ago

TCP slow start is not an issue for load balancers, as operating systems cache the congestion window (cwnd) on a per-host basis, even after all connections to that host have been terminated. That is, the next time a connection to the same backend host is created, the OS uses a higher initial congestion window (initcwnd) during slow start, based on the previously cached value. It does not matter whether the target backend host is in the same datacenter or not.
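A minimal way to check whether that caching is active, assuming a Linux host: the kernel's per-destination tcp_metrics cache is what stores these values, and the `net.ipv4.tcp_no_metrics_save` sysctl disables it when set to 1.

```python
from pathlib import Path

def metrics_cache_enabled(raw: str) -> bool:
    # "0" -> the kernel saves per-destination metrics at connection close;
    # "1" -> metric caching is disabled
    return raw.strip() != "1"

sysctl = Path("/proc/sys/net/ipv4/tcp_no_metrics_save")
if sysctl.exists():
    state = "enabled" if metrics_cache_enabled(sysctl.read_text()) else "disabled"
    print(f"TCP metrics cache: {state}")
else:
    print("no tcp_no_metrics_save sysctl found (not Linux, or /proc unmounted)")
```

The cached entries themselves can be inspected with `ip tcp_metrics show` on Linux.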