Note well: the claims about TCP come with some evidence, in the form of a graph. The claims for QUIC do not.
Many of the claims are dubious. TCP has "no notion of multiple streams"? What are two sockets, then? What is poll(2)? The onus is on QUIC to explain why it’s better for the application to multiplex the socket than for the kernel to multiplex the device. AFAICT that question is assumed away in a deluge of words.
If the author thinks it’s the "end of TCP sockets", show us the research, the published papers and meticulous detail. Then tell me again why I should eschew the services of TCP and absorb its complexity into my application.
Even the TCP graph is dubious. Cubic sitting systematically above the link capacity makes me chuckle. Yes, bufferbloat can make Cubic "hug" a somewhat higher limit, but it still needs to start under the link capacity.
The obnoxious thing is that overly aggressive firewalls have killed any IP protocols that are not TCP or UDP. Even ICMP is often blocked or partially blocked.
Why would you need another IP protocol besides UDP? Anything you can do directly under an IP header, you can do under a UDP header as well, and the UDP header itself is tiny.
Going back to David Reed, this is specifically why UDP exists: as the extension interface to build more non-TCP transport protocols.
^^^^ This. I work for a big company (15k engineers). Trying to use anything that is not TCP or UDP simply doesn't work here. For years, even UDP was blocked, and the answer we got was always "why are you using UDP, use TCP instead". Yep, you read that right. Most of these folks are very short-sighted or narrow-minded. We tried to use SCTP for one project; major blunder. Zero support from network teams. SCTP is blocked everywhere. All their custom software and scripts for network deployments only work with TCP and UDP, and they will not change that. And that comes from the higher-ups, the people in charge. They are set in their ways and will not budge. As for QUIC support? Never gonna happen here.
One thing to note: holding up HTTP/2.0 as anything other than an example of how not to design high-throughput protocols is unfair.
At the time, HTTP/2.0's multiplexing was known to be bad for anything other than perfect, low-latency networks. I hope this was because people had faith in better connectivity, rather than ignorance of how mobile and non-LAN traffic worked.
You should probably at least try QUIC now, but you can get past HOL blocking by having multiple TCP streams. It's super cheap, cheaper than QUIC.
> you can get past HOL blocking by having multiple TCP streams. It's super cheap, cheaper than QUIC
And also super inefficient, since it duplicates the TLS handshake across streams and uses more resources in the OS and middleboxes (like them or hate them, they're a thing that might throttle you if you go too crazy with connection-level parallelism).
That's on top of very poor fairness at bottlenecks (which is per TCP stream unless there's separate traffic policing).
Even on perfect networks, HoL blocking is an issue. If the receiving end of a set of streams blocks on one stream, it ends up blocking the whole connection. One stream stops due to backpressure, all streams stop.
I used QUIC extensively to implement https://github.com/connet-dev/connet, and while I'm super happy with how it turned out, I think QUIC currently suffers from some immaturity: most implementations are still in progress or not production-ready (for example, in Java), and in many cases it is viewed only as a way to power HTTP/3, instead of as a self-standing protocol/API that people can use (for example, try using QUIC on Android).
In any case, I'm optimistic that QUIC has a bright future. I don't expect it to replace TCP, but to give us another tool we can use when it's called for.
> QUIC’s design intentionally separates the wire protocol from the congestion control algorithm
Is that not the case for TCP as well? Most congestion control algorithms just assign new meanings to existing wire-level flags (e.g. duplicate ACKs), or even only change sender-side behavior.
> QUIC gives control back to application developers to tailor congestion control to their use case
That's what it actually does: It moves the congestion control implementation from the OS to user space.
In that sense, it's the same tradeoff as containers vs. regular binaries linking to shared libraries: Great if your applications are updated more often than your OS; not so great if it's the other way around.
> QUIC gives control back to application developers to tailor congestion control to their use case
If I understood modern application development correctly, this translates to "the developers will import another library which they don't understand and will wreak havoc on other applications' data streams by optimizing only for themselves".
Again, if I remember correctly, an OS is "the layer which manages the sharing of limited resources among the many processes that request/need them", and the OS can do system-wide, per-socket congestion control without any effort because of the vantage point it has over the networking layer.
Assuming that every application will do congestion control correctly, while not choking everyone else even unintentionally given user space's limited visibility, is absurd at worst and wishful thinking at best.
The whole ordeal is a direct violation of the application separation that comes with protected mode.
> Running in user space offers more flexibility for resource management and experimentation.
I stopped reading here. This isn’t really an essential property of QUIC; there are a lot of good reasons to eventually try to implement this in the kernel: https://lwn.net/Articles/1029851/
Maybe not an essential property of QUIC, but definitely one of not using TCP.
Most OSes don't let you send raw TCP segments without superuser privileges, so you can't just bring your own TCP congestion control algorithm in the userspace, unless you also wrap your custom TCP segments in UDP.
Even if you have nothing to hide and don't care about accidental or intentional data modification, the benefit of largely cutting out "clever" middleboxes alone is almost always worth it.
QUIC would be the end of the free internet if it ever "took over", but luckily it won't. It's not built to do so; it's only built for corporate use cases.
QUIC implementations do not allow for anyone to connect to anyone else. Instead, because it was built entirely with corporate for-profit use cases in mind and open-washed through the IETF, the idea of a third-party corporation having to authenticate the identity of all connections is baked in. And 99.999% of QUIC libs, and the way they're shipped in clients, cannot even connect to a server without a third-party corp first saying they know the endpoint and allow it. Fine for corporate/profit use cases where security of the monetary transactions is all that matters. Very much less fine for human use cases, where it forces centralization and easy control by our rapidly enshittifying authoritarian governments. QUIC is the antithesis of the concept of the internet and its robustness and routing around damage.
I guess you are referring to the TLS requirement? I could see how, on a more restrictive platform like a phone, you could conceivably be prevented from accepting alternate CAs or self-signed certificates.
Huh, I never knew, I've been using QUIC on my Raspberry Pi's web server for years... Did I unknowingly go corporate!?
Even if you don't want to get a Letsencrypt certificate, you can always use a self-signed one and configure your clients to trust it on first use or entirely ignore it.
SSH also uses "mandatory host keys", if you think about it. It's really not a question of the protocols but rather of common client libraries and tooling.
There's a fairly far-along draft for replacing WebRTC's SCTP with QUIC for doing p2p work. It doesn't seem to have any of these challenges; it seems perfectly viable for connecting peers. https://github.com/w3c/p2p-webtransport
Alas alas, basically stalled out, afaik no implementation. I wish Microsoft (the spec author) or someone would pick this back up.
IgorPartola|4 months ago
In the meantime we could have had nice things: https://en.wikipedia.org/wiki/Stream_Control_Transmission_Pr...
SCTP would be a fantastic protocol for HTTP/HTTPS. Pipelining, multi-homing, multi-streaming, oh my.
CuriouslyC|4 months ago
But even UDP is heavily restricted in most cases.
lxgr|4 months ago
WebRTC has been doing just that for peer to peer data connections (which need TCP-like semantics while traversing NATs via UDP hole punching).
yencabulator|4 months ago
> Why QUIC’s user-space transport lets us ‘kill’ the old app-level event loop
But then doesn't seem to mention that topic ever again. I don't see how QUIC changes that much.
mannyv|4 months ago
Is that really a QUIC thing?