Personally I am _very_ excited by HTTP/3 (and QUIC). It feels like the building block for Internet 2.0, with connection migration across different IPs, mandatory encryption, bidirectional streams, and the fact that it's a user-space library – sure, more bloat, but from now on we won't have to wait for your kernel to support feature X, or even worse, for your ISP-provided router or some decade-old middlebox on the Internet.
I haven't had the chance to read the actual spec yet, but it's obvious that while the current tech (HTTP/2) is an improvement over what we had before, HTTP/3 is a good base to make the web even faster and more secure.
HTTP/3 won't be IPv6: it only requires support from the two parties that benefit from it the most: browser vendors and web server vendors. We won't have to wait on the whole internet to upgrade their hardware.
I'm worried, not because of the standard itself, which seems well thought out, even if rushed.
I'm worried because you have a protocol implemented in userland for a few mainstream languages. It seems everyone now has to pay the price of a protocol implementation on top of a protocol implementation on top of a protocol implementation. Big players---either because they have thousands of open source developers, or because they are backed by a corporation---have it easy. Smaller players? Not so much.
Also, note that the exact problem HTTP/3 tries to solve was known during the design of HTTP/2, and some people even noted that having multiple flow-control schemes at multiple layers would become a problem. We are letting the same people design the next layer, and probably too fast, in the name of time to market.
This should definitely live in a way people can make use of it easily, with an API highly amenable to binding. If it gains traction, we need a new UDP interface to the kernel as well, for batching packets back and forth. This kills operating system diversity as well, or at least runs the risk of doing so.
OTOH, I see the lure: SCTP never caught on for a reason, and much of this is the opposite of my above worries.
Irrespective of the protocol, my optimism for the future of the web has been curtailed by developments like extensions having less and less power over time (recent example is Google's plans to intentionally cripple ad blockers), plugins going away, hobbyist websites becoming more burdensome to set up and maintain if insecure http is deprecated, browsers planning to disable autoplay, etc. It feels like the golden age of the creative and vibrant web peaked during the brief window where all the new HTML5 stuff was around, Firefox used the old extension system, and Flash and Java applets were still common.
After that point it's been becoming more and more sterilized. My web apps that automatically played some sound aren't going to work anymore without some obnoxious "click here to begin" screen that doesn't fit in with the content. No more plugins letting us extend our browsers in new ways (what a convenient "coincidence" for Google that this gives them more control over what the user gets to do and makes tracking what goes on easier). I have to give Reddit Enhancement Suite permission every single time it tries to show a preview from a domain it hasn't shown one from before. It's all suffocating. HTML5 makes up for some of the lost capability, but it's not enough, and which parts of HTML5 are actually going to work is basically at the whim of Google now.
But at least HTTP/3 will let us load buzzfeed listicles a few milliseconds faster, so there's that.
It's a misconception that you have to wait with IPv6.
If you're a large organisation you can move to IPv6 "today". What you do is, internally you cease buying IPv4-only gear and using IPv4 addressing, et cetera. Everything inside is purely IPv6. A lot of your networking gets simpler when you do this, and debugging is a LOT smoother because there's no more "Huh, 10.0.0.1, could be _anything_"; everything has globally unique addresses because it's not crammed into this tiny 32-bit space.
At the edge, you have protocol translators to get from IPv6 (which all your internal stuff uses) to IPv4 (which some things on the Internet use), but you probably already had a bunch of gear at the edge anyway, to implement corporate policies like "No surfing for porn at work" and "Nobody from outside should be connecting to port 22 on our machines!".
This isn't really practical for "One man and an AWS account" type businesses where your "Internet access" is a Comcast account and an iPhone, but if you're big enough to actually have an IT department, suggest they look into it. It may be cheaper and simpler than they'd realised.
It may be my embedded developer bias, but I don't actually consider moving things outside of the kernel to be necessarily a good thing. Standard kernel interfaces are (usually) a guarantee of stability and good isolation, are generally easier to probe using standard tools, easier to accelerate in hardware, etc. Not everything should be in the kernel of course, but low-level network protocols should be IMO because they're good targets for potential hardware acceleration (I'm convinced that it would make sense to handle SSL in-kernel, for instance, with a userland daemon handling certificate validation, but that's a story for another day).
I mean, if you can easily update whatever userland library you're using, why can't you upgrade your OS? If the library is easy to upgrade it means that it uses a well defined and backward-compatible interface. What do you get by shifting everything one layer up? In the end it's just software, there's not really any reason why upgrading a kernel driver should be any harder than upgrading a .so/.dll.
So the logic is "kernels are too slow to update and integrate the last new standards, so let's just move everything one step up because browsers auto-update"? Except that there's no technical reason for that, on my Linux box my browser and my kernel are updated at the same time when I run "apt-get upgrade" or "pacman -Syu" or whatever applies. The kernel I'm using at the moment has been built less than a week ago.
So if the problem is that Windows sucks balls and as a result people end up effectively re-creating an operating system on top of it to work around that, then yeah, from a practical standpoint I get it but I'm definitely not "_very_ excited" about it. It's a rather ugly hack.
If, in general, the question is "who do you trust more to select and implement new internet standards, kernel developers or web developers?" then I take a side-glance at the few GBs used by my web browser to display a handful of static pages at the moment and I know the answer as far as I'm concerned...
So yeah, it might make sense, but I still think it just goes to show what a shitshow modern software development has become. Instead of fixing things we just add a new layer on top and we rationalize that it's better that way.
Honestly, I'd feel much better if people were standardizing QUIC and we simply ran HTTP/1.1 over it.
Instead we now have transport layers that are application specific and 3 completely different web protocols with none of them being considered legacy, 2 of them being complex enough that people aren't very willing to move.
That does not look like a good foundation for anything.
Googlenet is not internet 2.0 and barely anyone in the world beyond a couple of megacorps can benefit from HTTP/2, HTTP/3, HTTP/4, etc. It feels more like the web is dead, completely captured by megacorps.
About the mandatory encryption and the performance: this will prevent ISP caching of static content. That would be bad news for, say, Steam and Debian, who use HTTP (not HTTPS [0][1]) to distribute content. (They verify integrity with secure hashing, of course.) I presume they'll decline to adopt HTTP/3.
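The integrity-without-confidentiality scheme described above can be sketched in a few lines. A minimal illustration (the function name and structure are mine, not apt's or Steam's actual code):

```python
import hashlib

def verify(data: bytes, expected_sha256: str) -> bool:
    """apt-style integrity check: the payload may travel over plain HTTP,
    but it is rejected unless its hash matches one obtained out-of-band
    (e.g. from a GPG-signed release index)."""
    return hashlib.sha256(data).hexdigest() == expected_sha256
```

The trade-off is exactly this: the hash proves the bytes weren't tampered with, while leaving them cacheable by any intermediary.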
Bit late to the party. I misunderstood HTTP/3, but I am very excited for QUIC, and _not_ because of Google's illuminati spec but because I hope people will be interested in secure UDP.
I like datagrams so much more than an accept-listen-keepalive-blocking-foreverloop-callback-async-threaded-future hell that is TCP.
I could be wrong of course I need to read the spec too. Anything UDP makes me giddy.
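For what it's worth, the appeal of datagrams fits in a few lines of Python. A loopback round-trip, purely illustrative, showing what is absent compared to TCP:

```python
import socket

def datagram_roundtrip(payload):
    """Send one datagram to ourselves over loopback. Note what's missing:
    no listen(), no accept(), no connection state, no teardown handshake."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        rx.bind(("127.0.0.1", 0))        # OS picks a free port
        tx.sendto(payload, rx.getsockname())
        data, _addr = rx.recvfrom(2048)
        return data
    finally:
        tx.close()
        rx.close()
```

Of course, that simplicity is exactly why QUIC has to rebuild reliability and congestion control on top of it.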
> or even worse, your ISP-provided router or decade old middleware router on the Internet.
I can guarantee you that middleboxes will continue to exist. If they need to, they'll force QUIC connections to terminate and fall back to TCP with TLS 1.3. There's no way that companies will allow encrypted communications to leave their networks en masse without being able to decrypt the content. Even more so for any totalitarian state governments that need to spy on their citizens.
> As the packet loss rate increases, HTTP/2 performs less and less good. At 2% packet loss (which is a terrible network quality, mind you), tests have proven that HTTP/1 users are usually better off - because they typically have six TCP connections up to distribute the lost packet over so for each lost packet the other connections without loss can still continue.
> Fixing this issue is not easy, if at all possible, to do with TCP.
Are there any resources to better understand _why_ this can't be resolved? If HTTP 1.1 performs better under poor network conditions, why can't we start using more concurrent TCP connections with HTTP 2 when it makes sense?
I'm a bit wary of this use of UDP when we've essentially re-implemented some of TCP on top, though I understand it's common in game networking.
>Are there any resources to better understand _why_ this can't be resolved?
The issue is TCP's design assumption of a single stream. You never get any out-of-order packets, but that also means you can't get any out-of-order packets even if you want them. When you have multiple conceptual streams within a single TCP connection, you actually just want the order maintained within each conceptual stream, not across the whole TCP connection, but TCP doesn't know that. If you can ignore this issue, HTTP/2 is really nice because you're saving a lot of the overhead of spinning up and tearing down connections.
>If HTTP 1.1 performs better under poor network conditions, why can't we start using more concurrent TCP connections with HTTP 2 when it makes sense?
Because it performs worse under good conditions. TCP has no support for handing off what is effectively part of the connection into a new TCP connection.

And QUIC essentially _is_ your suggestion.
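The difference can be shown with a toy model (pure Python, not any real protocol): packets are numbered in send order and spread round-robin over streams; with a single in-order pipeline (TCP-like), one lost packet stalls everything, while per-stream ordering (QUIC-like) stalls only the affected stream:

```python
def deliverable(packets, lost, per_stream):
    """Toy head-of-line-blocking model. packets: (seq, stream_id) tuples in
    send order; seqs in 'lost' never arrive. Returns the set of seqs an
    in-order receiver can hand to the application.
    per_stream=False models one TCP bytestream; True models QUIC streams."""
    delivered = set()
    if per_stream:
        by_stream = {}
        for seq, stream in packets:
            by_stream.setdefault(stream, []).append(seq)
        for seqs in by_stream.values():
            for seq in seqs:
                if seq in lost:
                    break               # only this stream stalls at the gap
                delivered.add(seq)
    else:
        for seq, _stream in packets:
            if seq in lost:
                break                   # the whole connection stalls
            delivered.add(seq)
    return delivered
```

With packets [(0, 'a'), (1, 'b'), (2, 'a'), (3, 'b')] and packet 1 lost, the TCP-like mode can only deliver {0}, while per-stream ordering still delivers {0, 2}.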
TCP fix requires a lot of coordination. First you need microsecond time stamps. Then you need an RFC to reduce RTOmin below 200ms. Then you need ATO discovery and negotiation. A lot of moving parts and you end up with a protocol that’s still worse than QUIC. Also note that Linux maintainers have refused to accept patches for all of these things and QUIC is to some extent a social workaround for their intransigence.

An example of this is SCPS-TP.

https://en.wikipedia.org/wiki/Space_Communications_Protocol_...
> why can't we start using more concurrent TCP connections with HTTP 2 when it makes sense
Because using 6 TCP connections per site is a hack to have larger initial congestion windows, i.e. faster page loading, ending up using more bandwidth in retransmission instead of in goodput. Instead we could have more intelligent congestion control algorithms in one TCP connection to properly fill up the available bandwidth. See https://web.archive.org/web/20131113155029/https://insoucian... for a more detailed account (esp. the figure of "Etsy’s sharding causes so much congestion").
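The "larger initial congestion window" point is just arithmetic. A back-of-the-envelope sketch, assuming RFC 6928's initial window of 10 segments and a 1460-byte MSS:

```python
def first_flight_bytes(connections, initcwnd_segments=10, mss=1460):
    """Bytes a server can push in the first round trip: each TCP connection
    gets its own initial congestion window, so six parallel connections
    get six times the initial burst of a single connection."""
    return connections * initcwnd_segments * mss
```

So six connections can burst about 87 kB in the first RTT versus about 14 kB for one, which looks faster right up until those competing windows cause the congestion and retransmission described above.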
Several things that excite me about this protocol:
— UDP-based, with stream multiplexing such that packet loss on one stream doesn't hold up all the other streams.
— Fast handshakes, to start sending data faster.
— TLS 1.3 required, no more clear-text option.
Overall this has the potential to help with overall latency on the web, and that is something I am really looking forward to.
(Yes I'm aware that there are many steps that can be done today to reduce latency, but having this level of attention at the protocol level is also an improvement.)
The documentation says how in theory it could happen, but all actual client software just does ALPN, which is a TLS feature that lets you pick a sub-protocol during the handshake. Since it's a TLS feature, you are obliged to use encryption.
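ALPN is visible from ordinary client code. A sketch using Python's `ssl` module (the host passed to `negotiated_protocol` would be whatever server you're testing against):

```python
import socket
import ssl

def make_alpn_context(protocols=("h2", "http/1.1")):
    """Client TLS context that advertises the given application protocols
    in the ALPN extension of the handshake's ClientHello."""
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(list(protocols))
    return ctx

def negotiated_protocol(host, port=443):
    """Connect and report which sub-protocol the server selected
    (None if the server ignored ALPN)."""
    ctx = make_alpn_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()
```

Because the selection rides inside the TLS handshake, there is no cleartext variant of it to fall back to.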
If we aren't using TCP anymore, does that mean all the network congestion tooling developed in the last 30 years is suddenly worthless and quality of service will degrade everywhere?
> [..] around 70% of all HTTPS requests Firefox issues get HTTP/2 [...]
Frequent use of Google probably puts this number on the higher end without revealing much information about general adoption.
Personally, I am waiting for HTTP/5, since the speed for new protocol versions seems to be set on "suddenly very fast".
That said, I think HTTP/2 was a good add-on for the protocol.
On the other hand a lot of over-engineered protocols fail or are a giant pain to use. I think we will only see adoption if there is a real, tangible benefit to upgrading infrastructure.
QUIC doesn't really convince me yet. It is certainly advantageous for some cases, but it isn't obvious to me. Yes, non-blocking parallel streaming connections are certainly great... 0-RTT? Hm, I don't think the speed advantages are worth the reduced security if used with a payload. Maybe for Google and similar services, but otherwise? QUIC needs to re-implement TCP's error checking and puts these mechanisms outside of kernel space. Let's hope we don't see other shitty proprietary protocols that are "similar" to HTTP.
Really neat resource. Coming into this thread with next-to-no knowledge of HTTP/3, this was a great high-level overview of the motivation and resulting protocol.
I'm wondering if anyone with a little more knowledge could go deeper into what the difference is between "TLS messages" and "TLS records" as talked about in this[1] snippet:
> the working group also decided that [...] [QUIC] should only use "TLS messages" and not "TLS records" for the protocol
From my understanding quickly reading through the spec, it looks like HTTP/3 starts with a standard TLS handshake for key exchange, but then QUIC "crypto" frames are used to carry application-level data instead of TLS frames[2]. Is this accurate? If so, why define a new frame format? Just to be able to lump multiple frames into one packet[3]?
> From my understanding quickly reading through the spec, it looks like HTTP/3 starts with a standard TLS handshake for key exchange, but then QUIC "crypto" frames are used to carry application-level data instead of TLS frames[2]. Is this accurate?
Sort of, kinda, no? It's a "standard TLS handshake" from a cryptographic point of view, but the TLS standard specifies that all this data travels over TCP. QUIC doesn't use TCP, so for QUIC the same data is cut up differently and moved over QUIC's UDP channel. So, everything uses QUIC's frames, not just application data.
QUIC needs to solve a bunch of problems TCP already solved, plus the new problems, and chooses to do so in one place rather than split them and have an extra protocol layer. For example, "What do I do if some device duplicates a packet?" is solved in TCP, so TLS doesn't need to fix it. But QUIC needs to fix it. On the other hand, "What do I do if some middleman tries to close my connection to www.example.com?" is something TCP doesn't solve and neither does TLS but QUIC wants to, so again QUIC needs to fix it.
One reason to do all this in one place is that "it's encrypted" is often a very effective solution even when your problem isn't hostiles, just idiots. For example, maybe idiots drop all packets with the bytes that spell "CUNT" in them in some forlorn attempt to protect "the children". Ugh. Now nobody can mention the town of Scunthorpe! But wait, if we encrypt everything, now the idiot filter will just drop an apparently random and vanishingly small proportion of packets, which we can live with. "I just randomly drop one entire packet for every 4 gigabytes transmitted" is still stupid, but now everything basically works again.
>The work on sending other protocols than HTTP over QUIC has been postponed to be worked on after QUIC version 1 has shipped.
I'm very interested in this bit. I'm working on a sensor network using M2M SIM cards which are billed per 100 kB. Being able to maintain an encrypted connection without having to handshake every time could have nice applications.
At first glance I don't think it's fair to say "ENet did this a decade ago". ENet simply provides multi-channel communication over UDP. It does not provide 0/1-RTT handshakes, encryption of the protocol beyond the initial handshake, or HTTP bindings. Based on some GitHub issues it doesn't even look like there was a protocol extension/version negotiation.
QUIC is also decently old itself, the last 7 years have been spent proving it is well suited for the real world and able to be iterated upon. This is the kind of difference that matters for standards track vs ignored.
Your comment made me wonder: is the PR to add IPv6 support to ENet still open? Last time I checked was maybe 3 years ago. Seems it's still open: https://github.com/lsalzman/enet/pull/21
One thing I don't understand: if it's encrypted, will we never see hardware-accelerated QUIC?
I've read it's 2 to 3 times more CPU-intensive. Aren't we implicitly giving an artificial competitive advantage to the "Cloud"? By the "Cloud" I mean big providers like (obviously) Google, Cloudflare, Akamai...
That is raising the barrier of entry for newcomers, is it not ?
> A lot of enterprises, operators and organizations block or rate-limit UDP traffic
That was my first thought, and the following seems to assume that companies will decide to change their policy.
But many public WiFi networks block UDP traffic; are they going to change their policy? Are the people in charge of them even aware of it? (Think coffee shops, restaurants, hotels, ...)
Are we going to have websites supporting legacy protocols ("virtually forever") in order to build a highly available internet?
Also, ISPs in some countries have not been UDP-friendly. I'm thinking about China mainly, where UDP traffic is throttled and often blocked (connection shutdown) if the volume of traffic is substantial - I assume they apply this policy to block fast VPNs.
Are they going to change their policy? The worst scenario here would be a new HTTP-like protocol coming out of China, resulting in an even larger segmentation of the internet.
Working in a school I block QUIC traffic so my web filter can (attempt to) keep kids off porn. Such filtering is required by law for schools. I haven't found a passive filter that handles QUIC. I don't want to install invasive client software or MITM connections.
Disappointingly, out of all of the changes in HTTP/3, cookies are still present. It'd be nice if HTTP/4 weren't also a continuation of Google entrenching its tracking practices into the Web's structure and protocols.
TLS 1.3 _being required_ makes me sigh loudly. What about local development, where tools like tcpdump and Wireshark are really handy? What about air-gapped systems? What about power-constrained devices?
It's not that I think an encrypted web is bad, it's a very good thing. I am just spooked by tying a text transfer protocol to a TLS system.
Any data on HTTP/3 performance? I don't see it in the book. There's the general claim that it's faster/lower latency, but there are no numbers behind that claim -- last time I checked QUIC's performance benefits were incredibly slight.
The book says 7% of all internet traffic already uses QUIC (HTTP/3) and Chrome has long implemented Google’s version of it.
But this book isn’t about concerning yourself with using it or implementing it, it’s about understanding what the future holds, how it works, and what roadblocks lie ahead.
The lack of API support in OpenSSL for its TLS requirements, and poor optimization for heavy UDP traffic loads on Linux et al. (they say it doubles CPU vs HTTP/2 for the same traffic), sound like major hurdles for widespread adoption any time soon.
Well, Google is in control of Chrome as well as the two most visited websites in the world. They can dream up any protocol they want, implement it in Chrome and use it for its sites, at any pace they desire. This is basically what happened with SPDY (now HTTP 2.0) and the upcoming HTTP 3.0 as well.
IMO it's positive. We are getting free new stuff, and I actually prefer to have two incremental steps, where HTTP 2.0 still uses TCP, giving stuff like multiplexing and pipelining, and HTTP 3.0 uses a novel UDP based transport layer protocol, improving stuff further.
There is objectionable stuff like the recent manifest 3.0 version to make ad blockers crappier but this is not one of the objectionable things imo.
Happy for everybody, but since it only really delivers benefits in less than 2% of use cases (those with crappy connections), I personally can't wait to have it be as quickly implemented and supported as IPv6 was.
It's sad that the site doesn't work without javascript. We had this exact navigation working with iframes 20 years ago. And I could resize the TOC on the left back then.
Hey, javascript is fundamental to the web today, unlike 20 years ago. Even if a site like this definitely wouldn't need javascript since it's so simple, there really isn't much of a trade-off, since less than one percent of all visitors are likely to have javascript disabled.
Is there a way to make an iframe in the body of the page change by clicking the navigation bar in the main page, without reloading the main page/navigation bar?
I'm still reading through the article, but I have to say that I'm pleasantly surprised by QUIC and HTTP/3. I first learned socket programming around the fall of 1998 (give or take a year) in order to write a game networking layer:
Here are just a few of the immediately obvious flaws I found:
* The UDP checksum is only 16 bits, when it should have been 32 or arbitrary
* The per-packet header overhead is far too large: something like 28 bytes on the wire (8 for the UDP header plus 20 for the IPv4 header), when only about 12 bytes would be needed to represent source ip, source port, destination ip, destination port
* TCP is a separate protocol from UDP, when it should have been a layer over it (this was probably done in the name of efficiency, before computers were fast enough to compress packet headers)
* Secure protocols like TLS and SSL needed several handshakes to begin sending data, when they should have started sending encrypted data immediately while working on keys
* Nagle's algorithm imposed rather arbitrary delays (WAN has different load balancing requirements than LAN)
* NAT has numerous flaws and optional implementation requirements so some routers don't even handle it properly (and Microsoft's UPnP is an incomplete technique for NAT-busting because it can't handle nested networks, Apple's Bonjour has similar problems, making this an open problem)
* TCP is connection oriented so your stream dropped by doing something as simple as changing networks (WIFI broke a lot of things by the early 2000s)
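For reference on the header-size point: the UDP header itself (RFC 768) is only 8 bytes: source port, destination port, length, and a 16-bit checksum; the rest of the per-packet overhead is the 20-byte IPv4 header. Parsing it is trivial:

```python
import struct

def parse_udp_header(hdr):
    """Unpack the 8-byte UDP header (RFC 768): source port, destination
    port, datagram length, and 16-bit checksum, all big-endian."""
    sport, dport, length, cksum = struct.unpack("!HHHH", hdr[:8])
    return {"sport": sport, "dport": dport, "length": length, "cksum": cksum}
```

The addresses live in the IP header, which is why UDP alone can't identify endpoints across network changes, one of the gaps the wish list below gets at.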
There's probably more I'm forgetting. But I want to stress that these were immediately obvious for me, even then. What I really needed was something like:
* State transfer (TCP would probably have been more useful as a message-oriented stream; this is also an issue with UNIX sockets. Such a stream could be used, for example, to implement a software transactional memory, or STM)
* One-shot delivery (UDP is a stand in for this, I can't remember the name of it, but basically unreliable packets have a wrapping sequence number so newer packets flush older packets in the queue so that latency-sensitive things like shooting in games can be implemented)
* Token addressing (the peers should have their own UUID or similar that remains "connected" even after network changes)
* Separately-negotiated encryption (we should be able to skip the negotiation part on any stream if we already have the keys)
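The "wrapping sequence number" idea in the one-shot-delivery bullet is the classic serial-number comparison; a sketch in the style of RFC 1982 arithmetic:

```python
def seq_newer(a, b, bits=16):
    """True if sequence number 'a' is newer than 'b' under wraparound:
    'a' counts as newer when it is less than half the sequence space
    ahead of 'b' (RFC 1982-style serial arithmetic)."""
    mask = (1 << bits) - 1
    half = 1 << (bits - 1)
    return a != b and ((a - b) & mask) < half
```

A latency-sensitive receiver keeps only the newest datagram: seq_newer(1, 65535) is True, so packet 1 (just after wraparound) replaces a stale packet 65535 in the queue instead of waiting behind it.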
Right now the only protocol I'm aware of that comes close to fixing even a handful of these is WebRTC. I find it really sad that more of an effort wasn't made in the beginning to do the above bullet points properly. But in fairness, TCP/IP was mostly used for business, which had different requirements like firewalls. I also find it sad that insecurities in Microsoft's (and early Linux) network stacks led to the "deny all by default" firewalling, which led to NAT, relegating all of us to second-class netizens. So I applaud Google's (and others') efforts here, but it demonstrates how deeply rooted some of these flaws were that only billion dollar corporations have the R&D budgets to repair such damage.
Yeah, it really sucks that the developers of TCP didn't foresee these issues in 1981 when they first designed it. I can't believe they were so short-sighted.
Okay, enough with the sarcasm. Is it too much to ask for historical perspective in protocol design?
> The QUIC working group that was established to standardize the protocol within the IETF quickly decided that the QUIC protocol should be able to transfer other protocols than "just" HTTP.
> ...
> The working group did however soon decide that in order to get the proper focus and ability to deliver QUIC version 1 on time, it would focus on delivering HTTP, leaving non-HTTP transports to later work.
[0] https://developer.valvesoftware.com/wiki/SteamPipe#LAN_Cachi...
[1] https://whydoesaptnotusehttps.com/
[+] [-] jjtheblunt|7 years ago|reply
[+] [-] smittywerben|7 years ago|reply
I like datagrams so much more than an accept-listen-keepalive-blocking-foreverloop-callback-async-threaded-future hell that is TCP.
I could be wrong of course I need to read the spec too. Anything UDP makes me giddy.
[+] [-] mlindner|7 years ago|reply
I can guarantee you that middleware will continue to exist. If they need to they'll force QUIC connections to terminate and switch to TLS 1.3. There's no way that companies will allow encrypted communications leaving their companies en-masse without being able to decrypt the content. Even more so for any totalitarian state governments that need to spy on their citizens..
[+] [-] lol768|7 years ago|reply
> Fixing this issue is not easy, if at all possible, to do with TCP.
Are there any resources to better understand _why_ this can't be resolved? If HTTP 1.1 performs better under poor network conditions, why can't we start using more concurrent TCP connections with HTTP 2 when it makes sense?
I'm a bit wary of this use of UDP when we've essentially re-implemented some of TCP on top, though I understand it's common in game networking.
[+] [-] jayd16|7 years ago|reply
The issue is TCP's design assumption of a single ordered stream. You never get out-of-order packets, but that also means you never get out-of-order packets even when you want them. When you have multiple conceptual streams within a single TCP connection, you really only want ordering maintained within each conceptual stream, not across the whole TCP connection, but TCP doesn't know that. If you can ignore this issue, http/2 is really nice because you're saving a lot of the overhead of spinning up and tearing down connections.
>If HTTP 1.1 performs better under poor network conditions, why can't we start using more concurrent TCP connections with HTTP 2 when it makes sense?
Because it performs worse under good conditions. TCP has no support for handing off what is effectively part of the connection into a new TCP connection.
And QUIC essentially _is_ your suggestion.
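The head-of-line blocking difference described above can be sketched in a few lines of Python. The packet tuples and stream names here are made up for illustration; this is not real TCP or QUIC code, just the two delivery disciplines:

```python
def deliver_single_ordered(packets, lost):
    """TCP-like: one ordered stream. Nothing behind a lost packet is
    delivered until retransmission, regardless of logical stream."""
    delivered = []
    for seq, stream, data in packets:
        if seq in lost:
            break                      # head-of-line: everything behind waits
        delivered.append((stream, data))
    return delivered

def deliver_per_stream(packets, lost):
    """QUIC-like: ordering is enforced per logical stream, so loss on one
    stream doesn't block the others."""
    blocked = set()
    delivered = []
    for seq, stream, data in packets:
        if seq in lost:
            blocked.add(stream)        # only this stream stalls
        elif stream not in blocked:
            delivered.append((stream, data))
    return delivered

# Packet 2 (a "js" packet) is lost; packet 3 belongs to a different stream.
packets = [(1, "css", b"a"), (2, "js", b"b"), (3, "css", b"c")]
print(deliver_single_ordered(packets, lost={2}))  # [('css', b'a')]
print(deliver_per_stream(packets, lost={2}))      # [('css', b'a'), ('css', b'c')]
```

In the single-ordered case the unrelated "css" data sits in the kernel buffer behind the hole; in the per-stream case only the "js" stream waits for the retransmit.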
[+] [-] shereadsthenews|7 years ago|reply
[+] [-] jandrese|7 years ago|reply
An example of this is SCPS-TP.
https://en.wikipedia.org/wiki/Space_Communications_Protocol_...
[+] [-] xfs|7 years ago|reply
Because using 6 TCP connections per site is a hack to get larger initial congestion windows, i.e. faster page loading, and it ends up spending more bandwidth on retransmissions instead of goodput. Instead we could have more intelligent congestion control algorithms in one TCP connection to properly fill up the available bandwidth. See https://web.archive.org/web/20131113155029/https://insoucian... for a more detailed account (esp. the figure of "Etsy’s sharding causes so much congestion").
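The congestion-window arithmetic behind the 6-connection hack is simple enough to sketch. The numbers below are typical defaults (RFC 6928's initial window of 10 segments, a 1460-byte MSS), not universal values:

```python
# Each fresh TCP connection gets its own initial congestion window, so N
# connections can push roughly N times as many bytes in the first round trip.
INITCWND = 10      # segments; common default since RFC 6928 (~2011)
MSS = 1460         # bytes per segment, typical for Ethernet-sized MTUs

def first_rtt_budget(connections):
    """Rough bytes sendable in the first RTT, ignoring handshakes and loss."""
    return connections * INITCWND * MSS

print(first_rtt_budget(1))  # 14600 bytes
print(first_rtt_budget(6))  # 87600 bytes
```

That 6x first-RTT budget is exactly the "larger initial congestion window" being gamed, at the cost of six competing congestion controllers on the same path.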
[+] [-] unknown|7 years ago|reply
[deleted]
[+] [-] athenot|7 years ago|reply
— UDP-based, with per-stream multiplexing such that packet loss on one stream doesn't hold up all the other streams.
— Fast handshakes, to start sending data faster.
— TLS 1.3 required, no more clear-text option.
Overall this has the potential to help with overall latency on the web, and that is something I am really looking forward to.
(Yes I'm aware that there are many steps that can be done today to reduce latency, but having this level of attention at the protocol level is also an improvement.)
[+] [-] tialaramex|7 years ago|reply
The documentation says how in theory it could happen, but all actual client software just does ALPN, which is a TLS feature to let you pick a different sub-protocol after connecting. Since it's a TLS feature you are obliged to use encryption.
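As a concrete sketch of what "just does ALPN" looks like from the client side, Python's `ssl` module exposes it directly. Note the caveats: this module only speaks TLS over TCP (it cannot do QUIC), and the `"h3"` token is included purely to illustrate preference ordering:

```python
import ssl

# A client offers its sub-protocols via ALPN (RFC 7301); the server picks one
# during the handshake, and ssl.SSLSocket.selected_alpn_protocol() reports the
# result after connecting. Listed in preference order.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h3", "h2", "http/1.1"])
```

Because the offer and selection ride inside the TLS handshake, there is no way to negotiate a protocol this way without also getting encryption.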
[+] [-] fiatjaf|7 years ago|reply
[+] [-] raxxorrax|7 years ago|reply
Frequent use of Google probably puts this number on the higher end without revealing much information about general adoption.
Personally, I am waiting for HTTP/5, since the speed for new protocol versions seems to be set on "suddenly very fast".
That said, I think HTTP/2 was a good add-on for the protocol.
On the other hand, a lot of over-engineered protocols fail or are a giant pain to use. I think we will only see adoption if there is a real tangible benefit to upgrading infrastructure.
QUIC doesn't really convince me yet. It is certainly advantageous for some cases, but it isn't obvious to me. Yes, non-blocking parallel streaming connections are certainly great... 0-RTT? Hm, I don't think the speed advantages are worth the reduced security if used with a payload. Maybe for Google and similar services, but otherwise? QUIC needs to re-implement TCP's error checking and puts these mechanisms outside of kernel space. Let's hope we don't see other shitty proprietary protocols that are "similar" to HTTP.
(I am no web- or network-developer)
[+] [-] lazulicurio|7 years ago|reply
I'm wondering if anyone with a little more knowledge could go deeper into what the difference is between "TLS messages" and "TLS records" as talked about in this[1] snippet:
> the working group also decided that [...] [QUIC] should only use "TLS messages" and not "TLS records" for the protocol
From my understanding quickly reading through the spec, it looks like HTTP/3 starts with a standard TLS handshake for key exchange, but then QUIC "crypto" frames are used to carry application-level data instead of TLS frames[2]. Is this accurate? If so, why define a new frame format? Just to be able to lump multiple frames into one packet[3]?
[1] https://http3-explained.haxx.se/en/proc-status.html
[2] https://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_r...
[3] https://tools.ietf.org/html/draft-ietf-quic-tls-18#page-8
[+] [-] tialaramex|7 years ago|reply
Sort of, kinda, no? It's a "standard TLS handshake" from a cryptographic point of view, but the TLS standard specifies that all this data travels over TCP. QUIC doesn't use TCP, so for QUIC the same data is cut up differently and moved over QUIC's UDP channel. So, everything uses QUIC's frames, not just application data.
QUIC needs to solve a bunch of problems TCP already solved, plus the new problems, and chooses to do so in one place rather than split them and have an extra protocol layer. For example, "What do I do if some device duplicates a packet?" is solved in TCP, so TLS doesn't need to fix it. But QUIC needs to fix it. On the other hand, "What do I do if some middleman tries to close my connection to www.example.com?" is something TCP doesn't solve and neither does TLS but QUIC wants to, so again QUIC needs to fix it.
One reason to do all this in one place is that "it's encrypted" is often a very effective solution even when your problem isn't hostiles, just idiots. For example maybe idiots drop all packets with the bytes that spell "CUNT" in them in some forlorn attempt to protect "the children". Ugh. Now nobody can mention the town of Scunthorpe! But wait, if we encrypt everything, now the idiot filter will just drop an apparently random and vanishingly small proportion of packets, which we can live with. "I just randomly drop one entire packet for every 4 gigabytes transmitted" is still stupid, but now everything basically works again.
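A toy demonstration of that last point. The hash-based stream cipher below is for illustration only (it is NOT a real cipher and should never be used for actual security); the point is that the filter's substring match sees only pseudorandom bytes, yet the recipient still recovers the text:

```python
import hashlib
from itertools import count

def keystream(key, n):
    """Toy keystream: SHA-256 of key || block counter. Illustration only."""
    out = b""
    for block in count():
        if len(out) >= n:
            break
        out += hashlib.sha256(key + block.to_bytes(4, "big")).digest()
    return out[:n]

def xor(data, key):
    """Encrypts and decrypts (XOR with the keystream is its own inverse)."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"Greetings from Scunthorpe!"
ct = xor(msg, b"secret")          # what the middlebox sees: pseudorandom bytes
assert xor(ct, b"secret") == msg  # what the recipient recovers
```

The naive pattern filter can no longer decide which packets to drop based on content; it can only drop blindly or give up.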
[+] [-] jorrizza|7 years ago|reply
https://fosdem.org/2019/schedule/event/http3/
[+] [-] ttsda|7 years ago|reply
>The work on sending other protocols than HTTP over QUIC has been postponed to be worked on after QUIC version 1 has shipped.
I'm very interested in this bit. I'm working on a sensor network using M2M SIM cards which are billed for each 100kb. Being able to maintain an encrypted connection without having to handshake every time could have nice applications.
[+] [-] nitrix|7 years ago|reply
[+] [-] zamadatix|7 years ago|reply
QUIC is also decently old itself, the last 7 years have been spent proving it is well suited for the real world and able to be iterated upon. This is the kind of difference that matters for standards track vs ignored.
[+] [-] est31|7 years ago|reply
[+] [-] ldng|7 years ago|reply
I've read it's 2 to 3 times more CPU intensive; aren't we implicitly giving an artificial competitive advantage to the "Cloud"? By the "Cloud" I mean big providers like (obviously) Google, Cloudflare, Akamai...
That is raising the barrier of entry for newcomers, is it not ?
Isn't TCP already versioned ?
[+] [-] IshKebab|7 years ago|reply
Pretty sure it stands for Quick UDP Internet Connections.
https://lwn.net/Articles/558826/
[+] [-] tbronchain|7 years ago|reply
That was my first thought, and the following comments seem to assume that companies will decide to change their policy. But many public WiFi networks block UDP traffic; are they going to change their policy? Are the people in charge of it even aware of it? (Think coffee shops, restaurants, hotels, ...) Are we going to have websites supporting legacy protocols ("virtually forever") in order to build a highly available internet?
Also, ISPs in some countries have not been UDP-friendly. I'm thinking about China mainly, where UDP traffic is throttled and often blocked (connection shutdown) if the volume of traffic is substantial; I assume they apply this policy to block fast VPNs. Are they going to change their policy? The worst scenario here would be a new http-like protocol coming out of China, resulting in an even larger segmentation of the internet.
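In practice, clients handle UDP-hostile networks by falling back to TCP, in the spirit of HTTP's Alt-Svc mechanism. Here is a minimal sketch of that logic; `connect` is a hypothetical stand-in for real QUIC and TCP dial functions, not an actual API:

```python
def connect(transport, works):
    """Hypothetical dialer: pretend to reach the server over a transport."""
    if not works:
        raise OSError(f"{transport} blocked")
    return f"{transport} connection"

def dial_with_fallback(quic_ok):
    """Try QUIC first; if UDP is filtered, fall back to TCP.
    Assumes TCP is always reachable, which is the common case."""
    try:
        return connect("quic", quic_ok)
    except OSError:
        return connect("tcp", True)

print(dial_with_fallback(quic_ok=False))  # 'tcp connection'
print(dial_with_fallback(quic_ok=True))   # 'quic connection'
```

Real browsers do something more elaborate (racing attempts with timeouts rather than waiting for a hard failure), but the consequence is the same: UDP-blocking networks silently demote everyone to the TCP path.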
[+] [-] discreditable|7 years ago|reply
[+] [-] DaiHafVaho|7 years ago|reply
Disappointingly, out of all of the changes in HTTP/3, cookies are still present. It'd be nice if HTTP/4 weren't also a continuation of Google entrenching its tracking practices into the Web's structure and protocols.
[+] [-] exabrial|7 years ago|reply
It's not that I think an encrypted web is bad, it's a very good thing. I am just spooked by tying a text transfer protocol to a TCP system.
[+] [-] Solar19|7 years ago|reply
[+] [-] mscasts|7 years ago|reply
[+] [-] dmix|7 years ago|reply
But this book isn’t about concerning yourself with using it or implementing it, it’s about understanding what the future holds, how it works, and what roadblocks lie ahead.
The lack of API support in OpenSSL for its TLS requirements, and poor optimization for heavy UDP traffic loads on Linux et al (they say it doubles CPU vs HTTP/2 for the same traffic), sound like major hurdles for widespread adoption any time soon.
[+] [-] est31|7 years ago|reply
IMO it's positive. We are getting free new stuff, and I actually prefer to have two incremental steps, where HTTP 2.0 still uses TCP, giving stuff like multiplexing and pipelining, and HTTP 3.0 uses a novel UDP based transport layer protocol, improving stuff further.
There is objectionable stuff like the recent Manifest V3 changes to make ad blockers crappier, but this is not one of the objectionable things imo.
[+] [-] andy_ppp|7 years ago|reply
[+] [-] all_blue_chucks|7 years ago|reply
[+] [-] BorRagnarok|7 years ago|reply
[+] [-] Asooka|7 years ago|reply
[+] [-] cupofjoakim|7 years ago|reply
[+] [-] 3xblah|7 years ago|reply
[+] [-] zamadatix|7 years ago|reply
[+] [-] unknown|7 years ago|reply
[deleted]
[+] [-] zackmorris|7 years ago|reply
https://beej.us/guide/bgnet/
Here are just a few of the immediately obvious flaws I found:
* The UDP checksum is only 16 bits, when it should have been 32 bits or arbitrary length
* The per-datagram header overhead is far too large: the UDP header plus the IPv4 header come to about 28 bytes on the wire, when only about 12 bytes were needed to represent source ip, source port, destination ip, destination port
* TCP is a separate protocol from UDP, when it should have been a layer over it (this was probably done in the name of efficiency, before computers were fast enough to compress packet headers)
* Secure protocols like TLS and SSL needed several handshakes to begin sending data, when they should have started sending encrypted data immediately while working on keys
* Nagle's algorithm imposed rather arbitrary delays (WAN has different load balancing requirements than LAN)
* NAT has numerous flaws and optional implementation requirements so some routers don't even handle it properly (and Microsoft's UPnP is an incomplete technique for NAT-busting because it can't handle nested networks, Apple's Bonjour has similar problems, making this an open problem)
* TCP is connection oriented, so your stream drops when you do something as simple as changing networks (Wi-Fi broke a lot of things by the early 2000s)
There's probably more I'm forgetting. But I want to stress that these were immediately obvious for me, even then. What I really needed was something like:
* State transfer (TCP would probably have been more useful as a message-oriented stream; this is also an issue with UNIX sockets. Such a stream could be used, for example, to implement a software transactional memory, or STM)
* One-shot delivery (UDP is a stand-in for this; I can't remember the name of it, but basically unreliable packets carry a wrapping sequence number so newer packets flush older packets in the queue, so that latency-sensitive things like shooting in games can be implemented)
* Token address (the peers should have their own UUID or similar that remain "connected" even after network changes)
* Separately-negotiated encryption (we should be able to skip the negotiation part on any stream if we already have the keys)
Right now the only protocol I'm aware of that comes close to fixing even a handful of these is WebRTC. I find it really sad that more of an effort wasn't made in the beginning to do the above bullet points properly. But in fairness, TCP/IP was mostly used for business, which had different requirements like firewalls. I also find it sad that insecurities in Microsoft's (and early Linux) network stacks led to the "deny all by default" firewalling which led to NAT, relegating all of us to second class netizens. So I applaud Google's (and others') efforts here, but it demonstrates how deeply rooted some of these flaws were that only billion dollar corporations have the R&D budgets to repair such damage.
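The "wrapping sequence number" idea from the one-shot delivery bullet above can be sketched with RFC 1982-style serial-number arithmetic, which is how "newest datagram wins" delivery is commonly implemented in games and VoIP. The 16-bit width is an arbitrary choice for the example:

```python
BITS = 16
HALF = 1 << (BITS - 1)

def is_newer(a, b):
    """True if sequence number a is more recent than b, modulo wraparound
    (RFC 1982 serial arithmetic): even after wrapping, 1 beats 65535."""
    return a != b and (a - b) % (1 << BITS) < HALF

# "Newest wins": a stale datagram arriving late is simply discarded.
latest = None
for seq in [65533, 65535, 1, 65534]:   # 65534 arrives late, out of order
    if latest is None or is_newer(seq, latest):
        latest = seq                   # flush older state, keep newest
print(latest)  # 1
```

Receivers that only care about the latest state (player positions, sensor readings) can drop everything older without any retransmission machinery at all.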
[+] [-] spc476|7 years ago|reply
Okay, enough with the sarcasm. Is it too much to ask for historical perspective in protocol design?
[+] [-] the_other_guy|7 years ago|reply
1. Is QUIC only for HTTP/3, or can it be generalized to any TCP-based L7 protocol, but over TLS/UDP?
2. How are WebSockets dealt with in HTTP/3?
[+] [-] pacificmint|7 years ago|reply
> The QUIC working group that was established to standardize the protocol within the IETF quickly decided that the QUIC protocol should be able to transfer other protocols than "just" HTTP.
> ...
> The working group did however soon decide that in order to get the proper focus and ability to deliver QUIC version 1 on time, it would focus on delivering HTTP, leaving non-HTTP transports to later work.