_wmd | 8 years ago
Am I understanding things correctly? Because QUIC ramps up its bandwidth estimate more aggressively, it fundamentally competes unfairly with TCP? Is that an inherent property, or something that can be fixed? It's surely too late to fix on the TCP side.
Nice to know 7% of Internet traffic isn't playing fair with the rest just so one company's content loads a few ms faster! Really don't know how I feel about TCP in general losing out to QUIC if it saw much wider deployment outside Google
And finally, it's incredibly disappointing to read these results from a third party, rather than having a balanced perspective as part of the original marketing
trevyn | 8 years ago
QUIC fundamentally competes on a different level because TCP is broken for modern networks.
From a network usage standpoint, there isn't anything "unfair" about it -- QUIC runs atop UDP, which is also a perfectly acceptable and publicly available Internet protocol. And Chrome will happily use QUIC with non-Google hosts; the real problem is that production-quality QUIC server software is currently rare outside of Google.
jws | 8 years ago
I've often wondered how much cheating goes on in TCP and who would notice.
Many years ago my company made a VPN, tunneled through TCP†, specifically designed for heavily loaded networks. I changed the Linux kernel to cap our backoff time at something like 2 seconds, because reaching a 2-minute backoff just sucks badly when you are encapsulating a bunch of streams. Happy customers. Happy us; no more "the VPN hangs" complaints.
† Yes, "don't do that", but when you have a mission-critical, unchangeable protocol sending enormous multi-fragment UDP datagrams over a network path with a 10% packet loss rate, something has to give.
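A toy model of that kernel tweak (illustrative numbers only, not the actual patch): stock Linux doubles the retransmission timeout on every loss, up to TCP_RTO_MAX (120 s); capping the maximum at ~2 s keeps a tunneled connection responsive after a loss burst.

```python
def rto_schedule(initial_s=1.0, retries=10, rto_max_s=120.0):
    """Successive retransmission timeouts under exponential backoff."""
    rto, schedule = initial_s, []
    for _ in range(retries):
        schedule.append(rto)
        rto = min(rto * 2, rto_max_s)  # double, but never past the cap
    return schedule

stock   = rto_schedule(rto_max_s=120.0)  # 1, 2, 4, ... climbing to 120 s
patched = rto_schedule(rto_max_s=2.0)    # never waits more than 2 s
```

With a 10% loss rate the stock schedule can leave the tunnel stalled for minutes; the capped one retries within seconds at the cost of a little extra retransmission traffic.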
charleslmunger | 8 years ago
I'd like to know what the efficiency gap is -- you can see in the graph that early on, QUIC is better at saturating the connection while TCP is still scaling up. The other problem is related to how TCP connections work with HTTP/1.1 or similar protocols: one TCP connection per concurrent HTTP request, whereas with QUIC any number of concurrent requests are multiplexed as streams over a single QUIC connection. It doesn't seem unfair that adding more TCP connections doesn't make QUIC reduce its share of bandwidth, any more than adding 1000 concurrent HTTP requests to the QUIC connection should reduce TCP's share of bandwidth.
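Back-of-the-envelope version of that point, under the idealized assumption that loss-based congestion control converges to an equal share per *connection*: one QUIC connection carrying 100 requests counts as one flow, while six HTTP/1.1 TCP connections count as six.

```python
def share_per_flow(tcp_connections, quic_connections=1):
    """Idealized bottleneck share of each flow when all flows split equally."""
    flows = tcp_connections + quic_connections
    return 1.0 / flows

# A browser opening 6 TCP connections next to 1 QUIC connection takes
# 6/7 of the link; the QUIC side gets 1/7 regardless of how many
# requests it multiplexes onto its single connection.
```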
jacksmith21006 | 8 years ago
Google makes QUIC available for others to use if desired. QUIC sits on top of UDP, so for any other service that uses UDP you could make a similar argument that it is optimizing for itself -- which everyone should be doing. Your post sounds like it is driven more by some issue you have with Google than by QUIC.
apenwarr | 8 years ago
This is a pretty disappointing article. It's really comparing a particular implementation (or maybe multiple implementations) of TCP congestion control, implemented in kernel space, with a particular implementation of QUIC congestion control, implemented in user space.
The article points out that both the QUIC and TCP implementations they tested use CUBIC congestion control, but that's not enough information, because the article also points out that QUIC is using "more aggressive parameters." It's tough to say which parameters are better, but what's unsaid is that a TCP implementation could change its parameters and get the same congestion control results as QUIC.
The supposed poor performance of QUIC on resource-limited mobile devices is, as they point out, because it's a user space implementation that is thus more expensive. If QUIC becomes popular, I assume there will be kernel implementations that are as resource efficient as TCP. Meanwhile, it's a lot easier to do experiments (such as tuning CUBIC parameters!) when you don't have to reboot to install a new version.
It's also quite common for various TCP congestion controllers to completely fail to saturate busy links, because of various limitations. If that happens, it might be that QUIC is able to fill the empty space, thereby taking "more than its fair share" only because TCP wasn't going to use that share anyway. Since the article doesn't even say which TCP implementation it's comparing against, and doesn't say what happens when the TCP sessions are competing only amongst themselves with no QUIC present, it's hard to say what's going on.
The funny thing about all this is that, if they use the same congestion control (which seems to be the intention, given that they are both using CUBIC and Google is separately trying to fix congestion control via BBR[1]), they should both be about equally fair. The performance benefits of QUIC are not even congestion control related!
[1] https://queue.acm.org/detail.cfm?id=3022184
[Disclaimer: I've worked with some of the people who wrote BBR and QUIC, so I'm biased.]
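The "more aggressive parameters" point is easy to make concrete with the standard CUBIC window-growth curve from RFC 8312 (a sketch of the published formula, not anyone's actual implementation): the aggressiveness is governed by constants like C and beta, and a sender with a larger C regrows its window faster after a loss -- whether it speaks TCP or QUIC.

```python
def cubic_window(t, w_max, c=0.4, beta=0.7):
    """Congestion window t seconds after a loss event (RFC 8312 CUBIC curve).

    w_max: window size when the last loss occurred
    c:     scaling constant controlling how aggressively the window regrows
    beta:  multiplicative decrease factor applied at the loss
    """
    k = ((w_max * (1 - beta)) / c) ** (1 / 3)  # time until w_max is regained
    return c * (t - k) ** 3 + w_max

# Right after the loss the window is w_max * beta; it regains w_max at t = k.
# Cranking c up makes recovery faster, i.e. "more aggressive parameters".
```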
chemag | 8 years ago
+1 to this. QUIC allows using either CUBIC or BBR [1], so a comparison like this is really just comparing the exact congestion controller and parameters used on each side, not the protocols themselves.
The performance effects of QUIC implementing congestion control in userland are more interesting. OTOH, QUIC allows deploying new features to users (through cronet) in an efficient way. TCP does not.
[1] https://chromium.googlesource.com/chromium/src/net/+/master/...
> [Disclaimer: I've worked with some of the people who wrote BBR and QUIC, so I'm biased.]
Ditto.
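The "TCP can swap controllers too" point is concrete on Linux, where congestion control is selectable per socket without a reboot (new algorithms do need a kernel module). A Linux-only sketch; on other platforms it just returns None:

```python
import socket

def tcp_congestion_control(name=None):
    """Read (and optionally set) the congestion controller of a TCP socket.

    Linux-only: the TCP_CONGESTION socket option doesn't exist elsewhere.
    Setting an unavailable algorithm (e.g. b"bbr" without the module
    loaded) raises OSError.
    """
    if not hasattr(socket, "TCP_CONGESTION"):
        return None
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        if name is not None:
            # e.g. b"cubic", b"reno", or b"bbr" if the module is loaded
            s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, name)
        raw = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
        return raw.split(b"\x00", 1)[0].decode()
```

What the kernel cannot do is ship a new controller to a user without a kernel update -- which is exactly the deployment advantage of doing it in userland.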
mholt | 8 years ago
If you want to try QUIC yourself, you can use Caddy, which has experimental QUIC support (https://caddyserver.com/docs/cli#quic) powered by quic-go.
(However, QUIC versions are extremely transient, and the version currently supported has very limited support in Chrome; the next release of Caddy will bring it back up to date.)
fulafel | 8 years ago
What's the current positioning of QUIC by Google? It predates HTTP/2, and Google hasn't been making noises about it lately. Is it being slowly phased out, or just in a "let it be" status in Chrome for now?
trevyn | 8 years ago
HTTP/2 replaces HTTP/1.1 (and is a direct descendant of SPDY, which has origins ~2009).
HTTP/2 on TCP gets you some of the benefits that QUIC provides (multiplexing, reduced roundtrips), but HTTP/2 on QUIC is the best of both worlds.
IMO, there's not much noise about QUIC mainly because most people don't care enough about latency, and the current lack of production-quality QUIC server software makes caring more complicated than "just enable it" -- so low demand all around. This should pick up when it gets closer to a ratified standard.
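The "reduced roundtrips" benefit comes down to simple arithmetic, under idealized assumptions (a TLS 1.2 handshake as was typical at the time, no packet loss, no TCP Fast Open):

```python
# Round trips needed before the first HTTP response byte arrives.
ROUND_TRIPS = {
    "http2-over-tcp+tls": 1 + 2 + 1,  # TCP handshake + TLS 1.2 + request
    "quic-first-contact": 1 + 1,      # combined crypto/transport handshake + request
    "quic-repeat-visit":  0 + 1,      # 0-RTT resumption: request in first flight
}

def time_to_first_byte_ms(rtt_ms, transport):
    """Idealized time-to-first-byte for a given round-trip time."""
    return ROUND_TRIPS[transport] * rtt_ms

# On a 60 ms mobile path: 240 ms vs 120 ms vs 60 ms.
```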
halfteatree | 8 years ago
Um, pretty sure QUIC was in the pipeline a few years longer than AMP. Besides, it's literally a protocol in the network stack -- would you say IPv6 is an enemy of the web and the underlying internet?
Please stop the needless fear mongering.
trevyn | 8 years ago
This has nothing to do with Google's politics. TCP is broken for modern networks, and the faster we ditch it for a protocol capable of reliable pipes with low latency and high throughput, the better off we'll all be.
matthewmacleod | 8 years ago
I'm not sure it's all that bad. AMP is awful, particularly because of the way in which Google abuses it, but QUIC is less of a problem -- it's on an IETF standards track with a couple of implementations.