TCP is also concerned with fairness (after Van Jacobson's famous paper[1]). We've long known that you can batter data through the network at speed if you don't care about other users. How does QUIC preserve fairness?
Congestion control is largely orthogonal to transport protocol design. You can basically slap the general shape of any congestion control algorithm onto whatever transport protocol you want.
Their interaction is largely at the level of: “How easy does the protocol make it to estimate the data channel parameters?”, “What happens in the event of congestion-related failures (packet loss, delay, reorder, etc)?”, and “How do you efficiently recover and adapt to the new channel parameters?”
In all of these regards QUIC is quite a bit better than TCP.
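Concretely, "estimating the data channel parameters" mostly means tracking RTT and its variance. Both TCP (RFC 6298) and QUIC (RFC 9002) keep estimators of roughly this shape; the following is an illustrative sketch with made-up samples, not either stack's actual code:

```python
# Smoothed RTT / RTO estimation in the style of RFC 6298. Both TCP
# and QUIC (RFC 9002) use this general shape; QUIC's explicit ACK
# delay field is one reason its samples are cleaner.

class RttEstimator:
    ALPHA = 1 / 8   # gain for the smoothed RTT
    BETA = 1 / 4    # gain for the RTT variance

    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def on_sample(self, rtt: float) -> float:
        """Feed one RTT sample (seconds); return the new RTO."""
        if self.srtt is None:
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        # RFC 6298: RTO = SRTT + max(G, 4 * RTTVAR); clock granularity G ignored here
        return self.srtt + 4 * self.rttvar

est = RttEstimator()
for sample in (0.100, 0.110, 0.090, 0.300):  # last sample is a congestion spike
    rto = est.on_sample(sample)
print(round(est.srtt, 4), round(rto, 4))
```

Note how a single delay spike inflates the variance term far more than the smoothed mean, which is exactly the "adapt to the new channel parameters" behavior being discussed.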
It seems like a bug in the network design if it requires all endpoints to cooperate voluntarily and doesn't throttle anyone itself.
I thought the point of congestion control was to slow down pointless sending when an overloaded middle box is dropping our packets and saying "buddy take the hint", not to actually yield to other traffic? The network tells us how much capacity we have, trying to push more would just increase our own stream's packet loss. ... Right?
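Both readings are true, and they are the same mechanism: backing off on loss protects your own goodput, and the additive-increase/multiplicative-decrease shape of that backoff is what makes competing flows converge toward equal shares. A toy simulation with made-up numbers, purely illustrative:

```python
# Toy AIMD simulation: two flows share a link; when the link is
# oversubscribed both flows see loss and halve their windows
# (multiplicative decrease), otherwise each grows by one packet per
# RTT (additive increase). This is the mechanism that drives
# competing flows toward a fair share; it is not any real stack's code.

def simulate(w1, w2, capacity=100, rounds=1000):
    for _ in range(rounds):
        if w1 + w2 > capacity:        # congestion: both flows see loss
            w1, w2 = w1 / 2, w2 / 2   # multiplicative decrease
        else:
            w1, w2 = w1 + 1, w2 + 1   # additive increase
    return w1, w2

# Start wildly unfair: one flow has 90 packets in flight, the other 5.
w1, w2 = simulate(90.0, 5.0)
print(round(w1), round(w2))  # the two windows end up close to each other
```

The gap between the flows shrinks by half on every congestion event while additive growth keeps it constant, so fairness falls out of the backoff rule itself.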
Fairness? It seems no one gives a shit about it. Most consumers want their data now. I have ETTH 100Mbit (which is very fast by my standards) and I can tell the difference between day, evening and weekend just by pinging my nodes. RTT is stable off-hours, but during a normal day jitter kicks in. It's sad to see RTT jump from 9 to 90ms over a major IX. Everything seems to be horribly overbooked.
I know that the Internet is a best-effort network (ATM, anyone remember it?) but I'm sure I would prefer a slower but more stable internet.
I don't think this article was very good. It seems to be written by someone who doesn't really know what they're talking about, or by someone who does but "dumbed down" the material to the point where its conclusions are inaccurate.
I think HTTP3 is probably doomed to an IPv6-like existence for a long while. While everyone claims that TCP is apparently "too slow", the vast majority of corporate/enterprise settings will just block it.
It seems like a technology built by the big players who want to shave a few cycles off each connection and save $millions rather than a practical standard.
Do I want to use HTTP3 at home? Yeah, sounds cool.
Will I be able to use it at work? Probably not for 5+ years.
If you use Chromium or Firefox, you may have used HTTP/3 without realizing it. It happened to me!
https://ifconfig.net/ uses HTTP/3 by virtue of Cloudflare's proxy using it. I'm on Firefox 115, half a year old, and the network inspector says I connect to ifconfig over HTTP/3.
The rollout's been really smooth and quiet. Browsers do the same "happy eyeballs" optimization as they did for HTTP/2 and SPDY, racing QUIC against TCP and using whichever one connects first, or maybe dropping TCP if QUIC connects a couple of milliseconds later.
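The race itself is easy to sketch. In this toy asyncio version the two dial coroutines and their delays are invented stand-ins for real QUIC and TCP connection attempts:

```python
# A happy-eyeballs-style race, sketched with asyncio. The "dial"
# coroutines are hypothetical stand-ins for real QUIC and TCP
# connection attempts; the delays are made up for illustration.

import asyncio

async def dial(name: str, delay: float) -> str:
    await asyncio.sleep(delay)   # pretend this is the handshake
    return name

async def race() -> str:
    attempts = {
        asyncio.create_task(dial("quic", 0.03)),
        asyncio.create_task(dial("tcp", 0.05)),
    }
    done, pending = await asyncio.wait(attempts, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:          # abandon the slower attempt
        task.cancel()
    return done.pop().result()

winner = asyncio.run(race())
print(winner)
```

Real browsers are a bit cleverer (they may still prefer QUIC if it lands shortly after TCP, and they cache which origins speak HTTP/3), but the first-completed-wins core is this simple.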
HTTP3 works fine. It's used by Cloudflare, which (unfortunately) means that half the websites you visit probably use it. It's available in nginx, caddy, and a bunch of other servers.
Sure, most servers will just support the protocol at the reverse proxy level and lack the optimisations that make HTTP3 faster for the big cloud providers that stand to gain most, but you can just set up HTTP3 on your own server if you want.
I don't know what you're using at work that prevents you from using HTTP3. I'm guessing you're referring to one of those awful middleboxes or some kind of firewall that blocks outgoing UDP traffic. Luckily, that stuff isn't relevant for most connections on the internet.
HTTP3 doesn't have the same chicken-or-egg problem as IPv6: it runs over UDP. IPv6 is a full protocol rev, which takes a lot longer to accomplish. Once a large multi-vendor network is deployed, revving the base protocol is really hard.
ipv6 is very, very widely deployed in mobile networks. Download some net tools and check out your phone's interface table.
Also v6 did a bunch of things right. NDP and link local addresses for v6 actually work. The fragmentation changes were 100% the right thing to do. v6 extensions like SR and uSID are both solid and consolidate a lot of stupid stuff into something clean.
v6 had a bad, rocky start. A lot of that was completely inadequate attention to the transition, and the vendors fucked up almost everything for years, on their own gear and interop, but at this point v6 is fine.
Would I launch a v6-only service? No, because at this point the SPs are the problem. Verizon as of a few years ago (and maybe still now) didn't do v6 for any of their home users. So many things like that.
HTTP/3 may have good speed for webpages that trigger many HTTP requests to source ads and support tracking/telemetry from multiple, disparate servers controlled by different entities. (Makes sense, since HTTP/3 was designed by an advertising company.) But HTTP/3 speed is not any better than HTTP/1.1 speed for downloading a file or a series of files from the same server in a single TCP connection. Some have said it's worse. (HTTP/1.1 was not designed by an advertising company, so if it sucks for ads/tracking/telemetry then that makes perfect sense.)
Yes, it saves server resources at extreme scales. The client-side savings are purely theoretical: if their JS bloatware takes 2s to respond to a click, the 3x 30ms network round trip to fetch a resource over a new HTTPS connection hardly matters.
I don't think this article is very well-written. It gets confused about whether QUIC has a handshake or not (it does). And it conflates zero round-trip time with combining the TCP/TLS handshakes together.
I came to mention the same. Diagrams have redundant information, examples are badly picked, there are sentences with little to no value... I don't know if it's lack of care, the author not writing in their native language, or an excess of GPT.
It's a bit perplexing that an article that makes claims about QUIC's speed over TCP has exactly zero benchmarks, numbers, or anything to back that up besides theory.
I could be inclined to believe it but I'd like to know by which factor, in which circumstances, with real examples and numbers.
I feel the main limitation here is hardware optimisation support.
With TCP you have the congestion control algo baked in hardware, plus TCP segmentation and checksum offload. You can pass things directly to the NIC for big latency and bandwidth wins while offloading processing away from the CPU.
A properly tuned system with a user-space networking stack, CTPIO, hardware offload and proper tuning beats the pants off QUIC for latency.
It is possible to get some of the same benefits, I guess, with GSO. In any case, I suspect slow hardware support is a bottleneck here. You may not get much benefit given that more of the layers are binary/encrypted and not visible to hardware.
I imagine the above is not relevant for hyperscalers like Google, who can make their own hardware, and where the sheer number of customers makes the bandwidth benefits worthwhile.
The internet itself is not properly tuned, and distance is an issue. On a local LAN, sure, a tuned TCP system will do amazing. On the public internet, where who knows where your packet is going, you have an optimal but untuned system with latencies TCP wasn't designed for. Also, a modern CPU with SIMD/AVX can handle a lot of traffic in user space. And UDP also has a checksum in its packet header.
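For reference, the checksum in question is the 16-bit ones'-complement Internet checksum (RFC 1071). NICs commonly offload it, but it is also cheap to compute on a modern CPU, which is part of why user-space transports are viable at all. A sketch; the sample payload is an arbitrary header fragment chosen for illustration:

```python
# The 16-bit ones'-complement Internet checksum (RFC 1071) used in
# UDP and TCP headers. NICs commonly offload this, but it is cheap
# in software too.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:               # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:              # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Sanity check: data with its own checksum appended checksums to zero,
# which is how a receiver verifies a packet.
payload = b"\x45\x00\x00\x73\x00\x00\x40\x00\x40\x11"
csum = internet_checksum(payload)
check = internet_checksum(payload + csum.to_bytes(2, "big"))
print(hex(csum), hex(check))
```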
> With TCP you have the congestion control algo baked in hardware
This part isn't correct, and you wouldn't want e.g. NewReno baked in to your NIC and preventing you from using CUBIC or BBR. It's true that TCP benefits more from NIC offloads than QUIC but most places (besides Netflix) aren't driving enough WAN traffic per server to matter.
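For what it's worth, on Linux the congestion controller is a per-socket, runtime-swappable choice via the TCP_CONGESTION socket option, which is exactly the flexibility a hardware-baked algorithm would remove. A Linux-only sketch that degrades to None elsewhere:

```python
# On Linux the congestion control algorithm is a per-socket, runtime
# choice (the TCP_CONGESTION socket option), not something fixed in
# the NIC. Linux-only sketch; on other platforms these helpers
# simply report failure instead of raising.

import socket

def current_cc(sock: socket.socket):
    """Return the socket's congestion control algorithm name, or None."""
    try:
        raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    except (AttributeError, OSError):   # not Linux, or option unsupported
        return None
    return raw.split(b"\x00", 1)[0].decode()

def set_cc(sock: socket.socket, name: str) -> bool:
    """Try to switch the socket to the named algorithm (e.g. 'reno')."""
    try:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, name.encode())
        return True
    except (AttributeError, OSError):   # algorithm not available / not Linux
        return False

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(current_cc(s))   # e.g. "cubic" on a stock Linux kernel
set_cc(s, "reno")      # reno is usually built in; failure is ignored here
print(current_cc(s))
s.close()
```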
In my experience, congestion control in hardware would be the very last thing I would want. Everything needs to be pushed as far toward the edges of the system as possible. This is what quic offers.
> With TCP you have the congestion control algo baked in hardware, tcp segment and checksum offload. You can pass things directly to the NIC for massive latency, bandwidth and offloading processing away from the cpu.
A lot of NICs are essentially just software modems, with the CPU handling this anyhow. Even server hardware sometimes has these lame NIC chips on it.
Be careful to read the specs of the hardware you buy, otherwise it will indeed be your CPU doing all that work.
The chart at the bottom: so Google switched it on, and then, let us guess, Facebook and Netflix. Afterwards, no growth for a year. Displacing looks different to me. HTTP/2 is growing at the cost of HTTP/1... so the only conclusion I can draw here is that HTTP/3 adoption has stalled. Do not read that negatively: the users who really needed it (like Google, Facebook, Netflix, ...) are using it, and the rest have it very low on the priority list, if at all.
I have my doubts that everyone needs HTTP/3. UDP traffic also has its disadvantages for network devices, and HTTP/1 and /2 should remain simpler and better supported by libraries for the foreseeable future.
I'm a fan of QUIC, but these articles are always heavy on explanations and light on data. I understand in theory how head-of-line blocking can cause serious issues on a lossy network, but by this point I would expect to see a ton of data backing that up in real-world usage from Google and Cloudflare.
One specific question I've had is on a lossy network are there really that many situations where you would have packet loss on one QUIC stream but not most/all the others? I don't doubt that's true but I would love to see a breakdown.
Also, what's the crossover point between QUIC and opening multiple TCP streams and doing round-robin across them? Maybe you only need 3-4 TCP connections to approximate the HOL advantages of QUIC.
I will admit fewer RTT handshakes is a more obvious win.
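On the HOL question, the usual answer is that the win doesn't depend on loss sparing the other streams: a lost packet typically carries data for only one stream, and QUIC lets the others keep delivering, while TCP's single in-order byte stream stalls everything queued behind the hole. A toy model with made-up numbers:

```python
# Toy head-of-line blocking model: N streams multiplexed on one
# connection, packets sent round-robin, one packet lost and
# retransmitted after one RTO. Over TCP the loss stalls every later
# packet (one in-order byte stream); over QUIC only the stream that
# owned the lost packet waits. All numbers are illustrative.

def completion_times(n_streams, pkts_per_stream, lost_pkt, rto, multiplexed):
    total = n_streams * pkts_per_stream
    finish = {}
    for seq in range(total):
        stream = seq % n_streams           # round-robin interleaving
        t = seq                            # one time unit per packet
        if seq == lost_pkt:
            t += rto                       # delivered after retransmission
        elif multiplexed and seq > lost_pkt:
            t = max(t, lost_pkt + rto)     # TCP: in-order delivery stalls
        finish[stream] = max(finish.get(stream, 0), t)
    return finish

tcp = completion_times(4, 5, lost_pkt=2, rto=50, multiplexed=True)
quic = completion_times(4, 5, lost_pkt=2, rto=50, multiplexed=False)
print(sorted(tcp.values()))   # every stream pays the RTO
print(sorted(quic.values()))  # only the stream that lost a packet does
```

Opening 3-4 parallel TCP connections approximates the same effect, which is roughly what browsers did in the HTTP/1.1 era, at the cost of extra handshakes and separate congestion windows.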
QUIC is very exciting. After seeing what it did for latency on Cloudflare's network and in Cloudflare Workers, I can't wait to finally see it in Deno 1.41[0].
If congestion collapse happens on the wider internet, I predict that it would take mere hours for sysadmins to come up with a temporary fix, and within months it'll become common for all core routers to have bloom filters of common src/dst IP pairs and de-prioritize anyone sending more than their fair share of traffic on a congested link.
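The "bloom filter of common src/dst IP pairs" idea is essentially heavy-hitter detection, and the standard constant-memory structure for that is a count-min sketch. A hypothetical illustration of what such a filter could look like; the flow keys and counts are invented:

```python
# A count-min sketch: the usual constant-memory structure for spotting
# heavy-hitter flows (src/dst pairs) on a congested link, roughly the
# mechanism the comment imagines routers deploying. Illustrative only.

import hashlib

class CountMinSketch:
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _cells(self, key: str):
        # One independent hash per row, derived from a salted SHA-256.
        for row in range(self.depth):
            digest = hashlib.sha256(f"{row}:{key}".encode()).digest()
            yield row, int.from_bytes(digest[:8], "big") % self.width

    def add(self, key: str, count: int = 1):
        for row, col in self._cells(key):
            self.rows[row][col] += count

    def estimate(self, key: str) -> int:
        # Collisions only inflate counts, so the minimum is an upper bound.
        return min(self.rows[row][col] for row, col in self._cells(key))

cms = CountMinSketch()
for _ in range(9000):
    cms.add("10.0.0.1->192.0.2.7")        # one flow hogging the link
for i in range(100):
    cms.add(f"10.0.0.{i}->198.51.100.1")  # background flows
print(cms.estimate("10.0.0.1->192.0.2.7") >= 9000)  # True: never undercounts
```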
TCP also supports letting people you've never met connect to your IP without you first (and regularly) getting the permission of a third party. QUIC requires continued permission from some third-party corporation. Baking the requirement for CA TLS into the protocol is fine for corporate uses, but the internet is far more than just corporate uses.
Sure QUIC may be faster, but only as long as you keep getting permission from your TLS CA. Once that stops it's very, very, slow. QUIC is fragile and like other CA TLS things: anything built with it will have a lifetime of only a few years without being updated.
Do we really want to base our web protocol (and other things) on a system that will only allow machines updated in the last $fewyears to participate?
Additionally, the claimed usage rates for the various HTTP versions are flawed in that they aren't counted per webserver or domain, but by traffic volume, so of course the megacorp flows they know about dominate (they're not checking every webserver). Looking-where-the-light-is fallacy.
> Sure QUIC may be faster, but only as long as you keep getting permission from your TLS CA. Once that stops it's very, very, slow. QUIC is fragile and like other CA TLS things: anything built with it will have a lifetime of only a few years without being updated.
For me, letsencrypt has been set-and-forget for at least 5 years. I use it with nginx, which is in front of several servers including a Jetty 5 server. Jetty 6 was released in 2006. You can configure which CAs you trust as well, so TLS isn't really any more of a centralization problem than DNS is.
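Restricting which CAs you trust is a few lines in any TLS stack, independent of QUIC versus TCP. A Python sketch; the ca_bundle path is a placeholder for whatever root(s) you choose to trust:

```python
# Trust is configurable at the client: a few lines restrict which CAs
# you accept, for TCP+TLS and QUIC alike. The bundle path is a
# placeholder, not a real file on your system.

import ssl

def pinned_context(ca_bundle=None) -> ssl.SSLContext:
    if ca_bundle:
        # Trust only the CA(s) in this file, e.g. just the roots you chose.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.load_verify_locations(cafile=ca_bundle)
    else:
        # Fall back to the system default trust store.
        ctx = ssl.create_default_context()
    return ctx

ctx = pinned_context()
# Certificate verification and hostname checking are on by default.
print(ctx.verify_mode == ssl.CERT_REQUIRED, ctx.check_hostname)
```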
Is it just me, or is the graph at the end of the article not really showing rising adoption in general? It basically shows a couple of spikes, each corresponding to some big company switching its edge servers to HTTP/3, like Alphabet, Meta, Cloudflare. There is no gradual increase.
Furthermore, I don't see easy solutions yet for some problems with QUIC. For example, browsers still try to establish a TCP connection first unless they know for sure the server supports QUIC. Proxy support for HTTP/3 is still in its infancy, but for many corporate environments it is a hard requirement.
So outside of the biggest websites, which I admit also take up a large chunk of the network traffic, is QUIC really replacing TCP in the general Internet?
Not really. HTTP3 has little benefit (if any) when talking to a backend server. Plus, browsers only support QUIC with TLS, and you probably don't want to waste CPU cycles terminating TLS on an application server; leave it to a load balancer (one that isn't written in Node.js and supports HTTP3).
Node.js shouldn't be used in the first place, just like any server-side JS. But more importantly, put your hacky language-specific servers behind a webserver like nginx.
I got bit hard doing network troubleshooting with Wireshark. I was trying to debug connectivity issues and was filtering for HTTP packets, but that was excluding QUIC! Turns out my packets were QUICked through Cloudflare. Figuring that out while everything is on fire isn't my idea of a good time.
[1] https://inst.eecs.berkeley.edu/~cs162/fa23/static/readings/j...
Hell, CF actually rolled out HTTP/3 before the pandemic, in 2019 https://blog.cloudflare.com/http3-the-past-present-and-futur...
I don't trust it.
> QUIC works on top of UDP. It overcomes the limitations of TCP by using UDP. It’s just a layer or wrapper on top of UDP.
Makes me want to stop reading.
Is it because TLS is hardware accelerated with AES instructions or something already?
[0] https://github.com/denoland/deno/pull/21942#issuecomment-192...
It’s been possible to cheat with TCP but most people don’t bother because it would require a kernel recompilation.
Or did this ship sail a while ago with various other UDP-based reliable transports, streaming media, etc?
This paper from 2017 showed QUIC performance was poor: https://arxiv.org/pdf/2310.09423.pdf
With no evidence to the opposite, the article's claim that QUIC is displacing TCP for speed is dubious at best.
That's what I get from the article.
What are the advantages of that thing if you're not at Google/Facebook scale?
edit: "tcpdump -t quic", and I also see that it's supported (as expected) by Wireshark.
https://github.com/icing/blog/blob/main/curl-h3-performance....
how can a userspace protocol possibly hope to be faster than that?