
A QUIC update on Google’s experimental transport

231 points | jplevine | 11 years ago | blog.chromium.org

123 comments

[+] josho|11 years ago|reply
Google really gets how standardization works. They innovate and once the innovation has proven its value they offer the technology for standardization.

I previously saw companies, like Sun, completely fail at this. E.g., the many Java specifications that were created by the standards bodies. Sun tried to do it right by having a reference implementation with the spec, but the reference implementations were rarely used for real work, so they proved only that the spec could be built, not that the spec elegantly solved real problems.

[+] username223|11 years ago|reply
> Google really gets how standardization works. They innovate and once the innovation has proven its value they offer the technology for standardization.

I wouldn't necessarily say "innovate" or "offer," but they do understand the process. You can make pretty much anything a "standard" with a bit of money and time (isn't Microsoft Office's XML format a "standard"?), but adoption is always an issue. However, Google controls a popular web server (Google search) and client (Google Chrome), so for web-things, they can create "widespread adoption" for whatever they standardize.

[+] scrollaway|11 years ago|reply
Honest question: Do you think Hangouts will eventually be open sourced?

It's really messed up that all our non-xmpp (a massive majority) messaging goes over nonstandard protocols. All our daily communications are behind walled gardens; one of the sorriest states in tech today.

Hangouts going open source is one of the few ways this could change in a reasonable time frame.

[+] jws|11 years ago|reply
> Today, roughly half of all requests from Chrome to Google servers are served over QUIC and we’re continuing to ramp up QUIC traffic

Google says they will eventually submit this to IETF and produce a reference implementation, but it is interesting how a company was able to quietly move a large user base from open protocols to a proprietary protocol.

[+] bla2|11 years ago|reply
You can't design protocols without implementation experience. Looks like they're going the same route they went with SPDY, and that has worked really well.
[+] _stephan|11 years ago|reply
The Chromium implementation of QUIC is released as Open Source, so I'm not sure how "proprietary" the protocol actually is.
[+] zaroth|11 years ago|reply
This is, after all, half the reason for making Chrome in the first place, right? All better protocols will start as proprietary protocols. To make the web better, faster, larger, yes, Google adds features to Chrome, and of course some of those are at the protocol level.

If the feature is actually an improvement, it should be on for everyone that's able to run the code as soon as possible. Ship fast and break nothing.

To address a different aspect of your comment, I do think it's very interesting how little attention we pay to the packets of data sent between software running on our personal devices and remote servers. Slap some TLS on it, and nobody even notices.

I think there's a fundamental OS-level feature, and a highly visible UI component, that is outright missing: letting users understand not just which programs are connecting where, but what they are actually sending and receiving. If it didn't have such horrendous implications and failure modes, I would love to have a highly functional deep-packet MitM proxy keeping tabs on exactly what my computer is doing over the network. You know, or the NSA could publish a JSON API to access their copy?

[+] jeremie|11 years ago|reply
As part of telehash v3, we've separated out most of the crypto/handshake packeting into E3X, which has a lot of similarities to QUIC: https://github.com/telehash/telehash.org/blob/master/v3/e3x/...

Personally I have a much broader use case in mind for E3X than QUIC is designed for, incorporating IoT and meta-transport / end-to-end private communication channels. So, I expect they'll diverge more as they both evolve...

[+] FullyFunctional|11 years ago|reply
MinimaLT [1], developed independently at about the same time as QUIC, also features minimal latency, but with more (and better, IMO) emphasis on security and privacy (though both are based on Curve25519). QUIC has an edge with header compression and an available implementation. EDIT: and of course, forward error correction!

[1] cr.yp.to/tcpip/minimalt-20130522.pdf

[+] djcapelis|11 years ago|reply
I hate to be harsh because I like a lot about MinimaLT, but until MinimaLT ships code it doesn't feature anything.

I wish we were having a conversation where djb had written an amazing and performant MinimaLT implementation that we could compare against QUIC. But we're not. We're having a conversation where shipping, performant code runs one protocol and you're presenting an alternative that exists pretty much only as a PDF document.

Believe me, I looked to figure out if there was a good solution for incorporating MinimaLT into code right now and there's not. I have a project where this is relevant. I'm looking at QUIC now and I may incorporate it as an alternative transport layer. (It duplicates some of my own work though, so I'm not sure whether to strip that stuff out or just make mine flexible enough to work on top.)

(To say nothing that QUIC can be implemented without a kernel module, which is a handy side-effect of doing things over UDP. A shame that's a factor, but of course it is in realistic systems deployment.)

[+] jzawodn|11 years ago|reply
I wonder if this is why I've been having weird stalls and intermittent failures using GMail the last few weeks. Every time, I try it in Firefox or Safari and it works perfectly.
[+] svijaykr1|11 years ago|reply
I work on the QUIC team. If you file a bug with a network log, we'll take a look to see what is going on.
[+] portmanteaufu|11 years ago|reply
Possibly silly question: I was under the impression that only TCP allowed for NAT traversal; if I send a UDP packet to Google, how can Google respond without me configuring my router?
[+] gliptic|11 years ago|reply
NAT traversal isn't necessary when you send packets out of your network, be they TCP or UDP. That's standard operation for NATs.
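To illustrate the point: nothing special is needed on the client side. The client sends from an ephemeral port, the NAT records the mapping, and the server's reply is routed back to that same socket. This sketch runs both ends on loopback (a real NAT obviously can't be demonstrated locally), but the socket mechanics on the client are identical:

```python
import socket

# "Server" end: bind to any free UDP port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # OS picks a free port
server_addr = server.getsockname()

# "Client" end: no bind, no router configuration -- just send.
# The outbound packet is what creates the NAT mapping in real networks.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", server_addr)

# Server replies to whatever address the packet arrived from.
data, client_addr = server.recvfrom(1024)
server.sendto(b"pong", client_addr)

# Reply arrives on the client's ephemeral port.
reply, _ = client.recvfrom(1024)
print(reply.decode())                  # pong
```

Unsolicited *inbound* UDP is what NATs drop; replies to traffic you initiated come back fine, which is why Chrome talking QUIC to Google works without any port forwarding.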
[+] FreakyT|11 years ago|reply
Interesting. I wonder if this will end up gaining enough momentum to become a standard, similarly to how SPDY ended up essentially becoming HTTP/2.
[+] rpcope1|11 years ago|reply
It will be interesting to see how this works out, given how difficult NAT can make working with UDP.

It's a shame that SCTP is not more widely adopted, as I suspect it may be just as good a transport layer (if not better) for building a new web protocol on.

[+] lucian1900|11 years ago|reply
It's unlikely that DTLS over SCTP would be faster than QUIC, which has been specifically designed to have TLS with a minimal number of round trips.
[+] Fando|11 years ago|reply
I wonder how they managed the zero RTT connections? How would that ever work?
[+] api|11 years ago|reply
Crypto? You can know who your peer is with a single packet if you've already exchanged keys, and other cleverness is also possible.
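The idea can be sketched in a few lines. This is *not* QUIC's actual handshake (the real thing caches a server's Diffie-Hellman config from a prior visit rather than a raw pre-shared key), but it shows the latency argument: with key material already in hand, the very first packet can carry authenticated application data, so zero round trips happen before useful work:

```python
import hmac, hashlib, os

# Hypothetical pre-shared key, established on a previous visit.
shared_key = os.urandom(32)

def seal(key: bytes, payload: bytes) -> bytes:
    """Build a first-flight packet: payload plus a 32-byte auth tag."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return tag + payload

def open_packet(key: bytes, packet: bytes) -> bytes:
    """Receiver authenticates the sender with zero prior round trips."""
    tag, payload = packet[:32], packet[32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return payload

packet = seal(shared_key, b"GET / HTTP/1.1")
print(open_packet(shared_key, packet))
```

A first-time visitor still needs one round trip to fetch the server's config; it's repeat connections that get the 0-RTT fast path.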
[+] rdsubhas|11 years ago|reply
I'm not sure if this is related, but sometimes I have slow home internet (60 kbps after I cross a threshold). At those times, I see websites loading really slowly, especially HTTPS connections crawling - but YouTube streaming, Google search, and Google webcache work really fast! In fact I've been waiting a few minutes for a normal website to load on my PC while the whole time YouTube was streaming on a mobile device without any interruption.

Does UDP mess up other traffic?

[+] Splendor|11 years ago|reply
The first image really confused me with the 'Receiver' looking like a server and the 'Sender' looking like a laptop.
[+] antirez|11 years ago|reply
50% and nobody noticed. Can't wait for another marginal latency win that makes the software stack more complex.
[+] billyhoffman|11 years ago|reply
> Software Stack more complex

Does QUIC make things more complex? You are replacing TCP + TLS with roughly TLS over UDP with some reliability features built in. TLS and TCP are already crazy complex (behold the state diagram for closing a TCP connection! [CS undergrad heads explode]). Plus, people have already built a number of pseudo-TCP protocols that run over UDP.

QUIC + its kind-of-TLS-lite protocol is certainly newer and less well known. That may make things a little harder. But ARP is complex. IP is complex. TCP is complex. Wireshark and others largely abstract this away. I'm excited by the speed, and by the hopefully reduced attack surface of these potentially simpler protocols.

[+] tptacek|11 years ago|reply
Isn't this an argument that nothing in TCP/IP should change, and that we should still be pretending that there is a point to the URG pointer?
[+] panopticon|11 years ago|reply
I think the win here is for content providers more so than end users. Might not be a large overhead to the users, but I'm sure it saves Google tons of bandwidth over their hundreds of millions of users.
[+] easytiger|11 years ago|reply
I assume they aren't counting the transit time of the first SYN equivalent? Are they saying it traverses the network infinitely fast? Because it doesn't.
[+] polskibus|11 years ago|reply
Google should investigate (or perhaps just buy outright) a low level communications technology stack from one of the HFT firms - they've already mastered low-latency networking, they just have no incentive to share this knowledge with the outside world.
[+] chollida1|11 years ago|reply
I think you'd be a bit disappointed with what HFT firms do.

They are limited in what they can do because they have to talk to the exchange, so it's still TCP/IP for order sending, with either FIX or a binary protocol like ITCH/OUCH on top.

As for their networking stack, if they are ultra-low-latency HFT they'll use FPGAs and Arista-brand switches or InfiniBand hardware.

The only big customization that most HFT firms do is move the networking stack into userland, but that's a well-known area. I'm not aware of any HFT firms that write their own networking stack from the ground up, though I'm sure there are a few :)

Not much that they do is transferable to everyday computing because most, I'd say 90%, of the performance comes from custom hardware and not the software.

Or put another way, Google already has more than enough talent to optimize their QUIC protocol; buying an HFT firm wouldn't do much for them, as HFT speed comes from areas that most people setting up servers won't want to touch.

[+] _delirium|11 years ago|reply
I think Google's solving a pretty different problem, low-latency communications over a quite heterogeneous network, where they don't control the lower-level infrastructure. HFT firms typically have a narrow range of configurations they have to work over and control the setup of their pipe; they aren't trying to ship technology that will work over every random person's DSL line and funky NAT setup.
[+] JoshTriplett|11 years ago|reply
Those communication stacks are not suitable for general-purpose use; they sacrifice everything, including usability, robustness, portability, and a hundred other factors in favor of latency.

For example, such stacks often put the entire communication stack in userspace, with hardcoded knowledge of how to talk to a specific hardware networking stack, and no ability to cooperate with other clients on the same hardware.

[+] polskibus|11 years ago|reply
There are plenty of vendors that provide good UDP-based solutions, for example TIBCO. In my opinion multicast is not used widely enough, partially because everybody thinks that TCP/IP pub/sub is good enough.

Financial incentives made HFTs and the like go farther than the average software company - just look at the microwave networks.

[+] polskibus|11 years ago|reply
First 3 upvotes, then lots of downvotes - what's wrong with my comment? Why is it so bad to advise buying companies that might have an edge over Google? I actually value the replies, because they pointed out that most HFTs probably do not have it.
[+] higherpurpose|11 years ago|reply
Wasn't the point of QUIC that it's basically encrypted UDP? I'm not seeing that great a performance improvement here - 1 second shaved off the load time of the 1% slowest sites. Are those sites that load in 1 minute? Then 1 second isn't that great.

However, if the promise is to be an always-encrypted Transport layer (kind of like how CurveCP [1] wanted to be - over TCP though) with small performance gains - or in other words no performance drawbacks - then I'm all for it.

I'm just getting the feeling Google is promoting it the wrong way. Shouldn't they be saying "hey, we're going to encrypt the Transport layer by default now!" ? Or am I misunderstanding the purpose of QUIC?

[1] - http://curvecp.org/

[+] comex|11 years ago|reply
The first diagram, if I'm interpreting it correctly, shows two whole round trip times shaved off compared to TCP + TLS, and one compared to plain TCP (which is basically no longer acceptable). For a newly visited site, that becomes one and zero.

The 100ms ping time in the diagram may be pretty high for connections to Google, with its large number of geographically distributed servers, but for J. Random Site with only one server... it's about right for US coast-to-coast pings, and international pings are of course significantly higher. [1] states that users will subconsciously prefer a website if it loads a mere 250ms faster than its competitors. If two websites are on the other coast, have been visited before, and are using TLS, one of them can get most of the way to that number (200ms) simply by adopting QUIC! Now, I'm a Japanophile and sometimes visit Japanese websites, and my ping time to Japan is about 200ms[2]; double that is 400ms, which is the delay that the same article says causes people to search less; not sure this is a terribly important use case, but I know I'll be happier if my connections load faster.

Latency is more important than people think.
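The round-trip arithmetic above can be sanity-checked in a few lines. This follows the parent comment's reading of the diagram (RTT counts before the first request can be sent, for a previously visited site); the exact counts depend on TLS version and session resumption:

```python
# Handshake round trips before the first request, per the diagram
# as read above (previously visited site, cached QUIC server config).
rtts_before_request = {
    "TCP":           1,   # SYN / SYN-ACK
    "TCP + TLS":     2,   # plus an abbreviated TLS handshake
    "QUIC (repeat)": 0,   # 0-RTT: data rides in the first packet
    "QUIC (first)":  1,   # one RTT to fetch the server config
}

for rtt_ms in (100, 200):   # US coast-to-coast vs. roughly US-to-Japan
    tls = rtts_before_request["TCP + TLS"] * rtt_ms
    quic = rtts_before_request["QUIC (repeat)"] * rtt_ms
    print(f"RTT {rtt_ms}ms: TCP+TLS setup {tls}ms, "
          f"repeat QUIC {quic}ms, saved {tls - quic}ms")
```

At a 100ms RTT that's the 200ms saving mentioned above - most of the way to the 250ms preference threshold from [1] - and at 200ms it's the full 400ms.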

[1] http://www.nytimes.com/2012/03/01/technology/impatient-web-u...

[2] http://www.cloudping.info

[+] asuffield|11 years ago|reply
(Tedious disclaimer: my opinion, not my employer's. Not representing anybody else. I work at Google and have some involvement in this project.)

Others have discussed the technical aspects of what QUIC is achieving, but you can understand its purpose fairly easily by saying "QUIC" out loud ;)

If that's not clear enough, it stands for "Quick UDP Internet Connections", which I think makes it fairly clear what it achieves. You can read more about it in the FAQ: https://docs.google.com/a/chromium.org/document/d/1lmL9EF6qK...

Note that the blog post doesn't say "1% slowest sites", it says "1% slowest connections" - that's the mobile and satellite users. Think about how many seconds it takes to load google.com on your phone when your signal isn't great. How does taking a second off that sound to you?

[+] wmf|11 years ago|reply
QUIC was never intended to be encrypted UDP, although plenty of people had that misinterpretation. (DTLS is already encrypted UDP.) QUIC is a replacement for TCP and TLS.
[+] billyhoffman|11 years ago|reply
You got the stats wrong. It's always the same site, specifically some unnamed Google property (they used Search and YouTube as examples elsewhere, so it could be one of them).

Google is saying that, for clients connecting to the same site, the slowest 1% of those clients saw a 1-second improvement in page load time by using QUIC instead of TCP (presumably it's SPDY + QUIC against SPDY + TCP, as they say at the end of the article). That's pretty good.

It was 1 second shaved