> When a protocol can’t evolve because deployments ‘freeze’ its extensibility points, we say it has ossified. TCP itself is a severe example of ossification; so many middleboxes do so many things to TCP — whether it’s blocking packets with TCP options that aren’t recognized, or ‘optimizing’ congestion control.
> It’s necessary to prevent ossification, to ensure that protocols can evolve to meet the needs of the Internet in the future; otherwise, it would be a ‘tragedy of the commons’ where the actions of some individual networks — although well-intended — would affect the health of the Internet overall.
On the other hand, I've done a fair bit of work getting TCP-based applications to behave properly over high-latency, high-congestion links (usually satellite or radio), and QUIC makes me nervous. In the old days you could put a TCP proxy like SCPS in there and most apps would get an acceptable level of performance, but now I'm not so sure. It seems like everybody assumes you're on a big fat broadband pipe now and nobody else matters.
I significantly benefit from QUIC. My home network is exceptionally lossy... and exceptionally latent. ICMP pings range from 500ms (at best) to 10 seconds (at worst) with an average somewhere in the 1-2 second range. Additionally, I am QoS'd by some intermediary routers which appear to have a really bad (or busy) packet scheduler.
Often, Google sites serving via QUIC are the only sites I can load. I can load HTML5 YouTube videos despite not being able to open the linkedin.com homepage. Stability for loading HTTP over QUIC in my experience is very comparable to loading HTTP over OpenVPN (using UDP) with a larger buffer.
> It seems like everybody assumes you're on a big fat broadband pipe now and nobody else matters.
This is intentional. The powers that be have an interest in moving everyone to faster networks, and they effectively control all new web standards, and so build their protocols to force the apps to require faster, bigger pipes. This way they are never to blame for the new requirements, yet they get the intended benefits of fatter pipes: the ability to shove more crap down your throat.
It's possible to cache static binary assets using encrypted connections, but I am not aware of a single RFC that seriously suggests its adoption. It is also to the advantage of the powers that be (who have the financial means to do this) to simply move content distribution closer to the users. As the powers that be don't provide internet services over satellite or microwave, they do not consider them when defining the standards.
You see the same thing in simple data consumption patterns. It's become normal for your average app to suck down tens or even hundreds of megabytes a month, even if it barely does anything.
It's so normalized I figured there wasn't a whole lot I could do about it, until I noticed Nine sync'ing my Exchange inbox from scratch for something like 3MB. Then I noticed Telegram had used 3MB for a month's regular use, while Hangouts had used 10MB for five minutes of use.
Despite living in the first world, I'm kind of excited for Android's Building for Billions. There's so much flagrant waste of traffic today, assuming as you say that you have a big fat broadband pipe, with no thought to tightly metered, high latency, or degraded connections.
(I switched to an inexpensive mobile plan with 500MB, you see)
There’s little real impetus to change widely used protocols. Job security, product and support sales, and developers' appetite for novelty aren't valid reasons, yet they often get pushed with spurious justifications to fulfill agendas, despite their cost.
The only demonstrable needs are bugfixes and significant advances because inventing and imposing complexity in all implementations or damaging backwards compatibility are insanely costly in terms of retooling, redeployment, customer interruptions, advertising, learning curve, interop bugs and attack surfaces.
I work in the app / WAN optimisation space, and periodically do some work with SCPS -- I'm guessing you're referring to the tweaks around buffers / BDP, aggressiveness around error recovery and congestion response?
I think there'll be a few answers to the problem: first, it'll be slow to impact many of the kinds of users currently on satellite; second, for web apps it's either internal (so they can define the transport) or external (in which case they are, or can be, using a proxy and continuing TCP/HTTP over the sat links).
Later on I expect we'll get gateways (in the traditional sense) to sit near the dish ... though I also would expect that on that timeframe you'll be seeing improvements in application delivery.
Ultimately I hope - hubris, perhaps - that the underlying problem (most of our current application performance issues are because apps have been written by people that either don't understand networks, or have a very specific, narrow set of expectations) will be resolved. (Wrangling CIFS over satellite, for example.)
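The buffer/BDP tweaks mentioned above can be made concrete: on a long-delay link, a sender must keep a full bandwidth-delay product of data in flight, or throughput collapses no matter how fat the pipe is. A quick illustrative calculation (the link numbers are hypothetical, chosen to resemble a geostationary satellite hop):

```python
# Bandwidth-delay product: the amount of data that must be "in flight"
# to keep a link busy. A fixed send/receive window caps throughput at
# window / RTT, regardless of the link's raw bandwidth.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes in flight needed to fill the pipe: bandwidth x RTT."""
    return bandwidth_bps / 8 * rtt_s

def max_throughput_bps(window_bytes: float, rtt_s: float) -> float:
    """Throughput ceiling imposed by a fixed window: window / RTT."""
    return window_bytes * 8 / rtt_s

# A 10 Mbit/s link with a 600 ms geostationary round trip needs
# 750,000 bytes (~750 KB) in flight to stay full:
bdp = bdp_bytes(10e6, 0.6)

# The classic 64 KB TCP window (no window scaling) caps throughput
# at roughly 874 kbit/s on that same 10 Mbit/s link:
capped = max_throughput_bps(64 * 1024, 0.6)
```

This is exactly the gap a satellite-side TCP proxy papers over: it terminates the connection locally and runs an appropriately tuned window over the long hop.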
To what extent are the TCP problems you can solve by tweaking manually with a proxy the same problems that QUIC solves automatically? If there's a big overlap, it may not be a real problem.
For example, you mention latency, but QUIC is supposed to remove unnecessary ordering constraints, which could eliminate round trips and help with latency.
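The ordering point can be illustrated with a toy model (this is not real QUIC, just the delivery rule): under TCP, one lost packet holds back everything behind it in the single byte stream, even data belonging to unrelated resources; under QUIC's independent streams, a loss only stalls the stream it hit.

```python
# Toy head-of-line blocking model. Packets arrive for two logical
# streams; packet 2 (stream A) is lost and only arrives, retransmitted,
# at t=10. Times are arbitrary units.

packets = [  # (sequence, stream, arrival_time)
    (1, "A", 0), (3, "B", 1), (4, "B", 2), (2, "A", 10),
]

def tcp_delivery_times(pkts):
    """One shared order: a packet is delivered to the app only once all
    lower sequence numbers have arrived."""
    arrival = {seq: t for seq, _, t in pkts}
    return {seq: max(arrival[s] for s in arrival if s <= seq) for seq in arrival}

def quic_delivery_times(pkts):
    """Per-stream order: a packet waits only for earlier packets of its
    own stream."""
    return {
        seq: max(u for s, st, u in pkts if st == stream and s <= seq)
        for seq, stream, _ in pkts
    }

# TCP: stream B's packets 3 and 4 sit undelivered until t=10, stuck
# behind stream A's lost packet. QUIC: they're delivered at t=1 and t=2.
```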
Another interesting protocol, perhaps underused, is SCTP. It fixes many issues with TCP; in particular, it has reliable datagrams with multiplexing, avoiding head-of-line blocking. I believe QUIC is supposed to be faster at connection (re)establishment.
SCTP is a superior protocol, but it isn't implemented in many routers or firewalls. As long as Comcast / Verizon routers don't support it, no one will use it.
It may be built on top of IP, but the TCP/UDP layers are important for NAT and such. Too few people use DMZ and other features of routers/firewalls. It's way easier to just put up with TCP/UDP issues to stay compatible with most home setups.
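The NAT point is the crux of why a new transport directly atop IP struggles: a home router rewrites (address, port) pairs, and it only knows how to find the port fields in TCP and UDP headers. A minimal sketch of the translation table a NAT maintains (addresses are documentation examples, not real hosts):

```python
# Minimal NAT sketch: the router maps (private_ip, private_port) to a
# public port on the way out, and reverses the mapping for inbound
# replies. If it can't parse the transport header to find ports (an
# unknown protocol number, e.g. SCTP's 132 on many home routers), it
# can't build this table at all, and the traffic dies.

class Nat:
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = 40000
        self.out = {}   # (priv_ip, priv_port) -> public_port
        self.back = {}  # public_port -> (priv_ip, priv_port)

    def translate_outbound(self, priv_ip: str, priv_port: int):
        key = (priv_ip, priv_port)
        if key not in self.out:           # allocate a port on first use
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.out[key])

    def translate_inbound(self, public_port: int):
        return self.back.get(public_port)  # None -> drop the packet

nat = Nat("203.0.113.5")
mapped = nat.translate_outbound("192.168.1.10", 55000)  # ("203.0.113.5", 40000)
```

Running QUIC over UDP sidesteps this entirely: to the NAT it's just another UDP flow.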
It seems that widely deploying TLS 1.3 and DOH can provide an effective technical end-around the dismantling of net neutrality. So we should be promoting and trying to deploy them as widely as possible.
Of course, they can still block or throttle by IP, so the next step is to increase deployment of content-centric networking systems.
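For context on what the DOH mentioned above actually does on the wire: RFC 8484 carries an ordinary binary DNS query inside an HTTPS request, base64url-encoded (without padding) for the GET form. A sketch of building such a request (no network I/O; the server URL is a placeholder, and the message ID is 0 as the RFC suggests for cacheability):

```python
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Serialize a DNS query in RFC 1035 wire format: a 12-byte header,
    the name as length-prefixed labels, then QTYPE and QCLASS."""
    # ID=0, flags=0x0100 (recursion desired), QDCOUNT=1, other counts 0.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii") for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QCLASS=1 (IN)
    return header + question

def doh_get_url(server: str, name: str) -> str:
    """RFC 8484 GET form: base64url without padding in the dns= parameter."""
    wire = build_dns_query(name)
    b64 = base64.urlsafe_b64encode(wire).rstrip(b"=").decode("ascii")
    return f"{server}?dns={b64}"

url = doh_get_url("https://dns.example/dns-query", "example.com")
```

Because this is just HTTPS to an ordinary-looking endpoint, a network can't distinguish it from regular web traffic without blocking the whole host, which is the crux of the blocking argument in the article.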
It seems to me that all of the changes described in this story will contribute to thwarting intermediaries and their agendas. HTTP/2 and its "effective" encryption requirement are proof against things like Comcast's nasty JavaScript injection[1]. QUIC has mandatory encryption all the way down; even ACKs are encrypted, obviating some of the traditional throttling techniques. And as you say TLS 1.3 and DOH further protect traffic from analysis and manipulation by middlemen.
Perhaps our best weapon against Internet rent seekers and spooks is technical innovation.
It is astonishing to me that Google can invent QUIC, deploy it on their network+Chrome and boom! 7% of all Internet traffic is QUIC.
Traditional HTTP over TCP and traditional DNS are becoming a ghetto protocol stack; analysis of such traffic is sold to who knows whom, the content is subject to manipulation by your ISP, throttling is trivial and likely to become commonplace with Ajit Pai et al. Time to pull the plug on these grifters and protect all the traffic.
> It seems that widely deploying TLS 1.3 and DOH can provide an effective technical end-around the dismantling of net neutrality.
If you don't think about it, it may seem that way. But until everyone sends all their data over tor, or some other system that obscures which IP you're trying to get to, it's still easy to filter.
There's (within epsilon of) zero motion I've seen towards obscuring IP addresses, for good reason.
Unfortunately not really; net neutrality fights mostly center on the bigger services, which in most cases have at least one of a dedicated AS number, dedicated IP ranges, or dedicated physical network links whose capacity can be limited. That's traditionally how the game has been played.
Think Netflix/Comcast.. no hiding what that traffic is.
Let's just hope that future innovations (and, more perniciously, "innovations") reinforce the end-to-end principle. A major weakness of the 2017 Internet is its centralization.
The DNS-over-HTTP discussion in this post mentions that in passing, though I wonder if this treatment might not be worse than the disease.
The DOH example, in particular, only conveys its benefits if centralized to something governments are hesitant to block. This is an example of "innovation" specifically designed to centralize. There are maybe a handful of companies influential enough that countries would hesitate to block them just to block DOH.
This is just depressing. Sure, sell us out to big corporations by not implementing proper features in protocols like HTTP/2 so we can get tracked for decades to come. Yet, represent freedom by yet another cool way to "fool" governments. When historians look back at what happened to the Internet, or even society, they are going to find that organizations like the IETF were too busy with romantic dreams of their own greatness to serve the public. It's like people learned nothing from Snowden.
> Finally, we are in the midst of a shift towards more use of encryption on the Internet, first spurred by Edward Snowden’s revelations in 2015.
Personally, I'd say it was first spurred by Firesheep back in 2010, but the idea of encrypting all websites, even content-only websites may have been Snowden related.
I'm really struck by how hostile to enterprise security these proposals are. Yes, I know that the security folks will adapt (they'll have to), but it still feels like there's a lot of baby+bathwater throwing going on.
DNS over HTTP is a prime example: blocking outbound DNS for all but a few resolvers, and monitoring the hell out of the traffic on those resolvers, is a big win for enterprise networks. What the RFC calls hostile "spoofing" of DNS responses, enterprise defenders call "sinkholing" of malicious domains. Rather than trying to add a layer of validation to DNS to provide the end user with assurance that the answer they got really is for the name they asked for (and, in theory, allow the enterprise to add their own key to sign sinkhole answers), DOH just throws the whole thing out... basically telling enterprise defenders "fuck your security controls, we hate Comcast too much to allow anyone to rewrite DNS answers."
"Fuck your security controls, we hate Comcast" is, I think, a bad philosophy for internet-wide protocols. (That's basically what the TLS 1.3 argument boils down to also...and that's a shame.)
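For readers unfamiliar with the practice being defended here: "sinkholing" means the enterprise resolver deliberately answers queries for known-bad domains with a safe address it controls, so infected machines phone home to a logging host instead of the attacker. A toy sketch of just the policy layer (the domains and addresses are made up):

```python
# Toy DNS sinkhole policy: queries for blocklisted domains (and their
# subdomains) get a "safe" answer pointing at an internal logging host;
# everything else is forwarded to the real resolver. Names and
# addresses here are illustrative, not real infrastructure.

BLOCKLIST = {"malware.example", "phish.example"}
SINKHOLE_ADDR = "10.0.0.53"  # internal host that logs the attempted contact

def resolve(name: str, upstream) -> str:
    """Return an A record for `name`, rewriting blocklisted names."""
    parts = name.lower().rstrip(".").split(".")
    # Check the name and every parent domain against the blocklist.
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in BLOCKLIST:
            return SINKHOLE_ADDR
    return upstream(name)

# When clients switch to DOH against an outside resolver, their queries
# bypass this function entirely, which is exactly the objection above.
```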
As implemented, all these "enterprise security" things are mostly indistinguishable from malicious attacks. Of course they break when you start tightening security.
Forging DNS responses is a horrible idea (and already breaks with DNSSEC). I have a hard time to comprehend how this can be considered a reasonable security measure.
I'm pretty excited about DNS over TLS. Ahaha no, that's so tacky, I meant DNS over QUIC of course. Sorry, I meant iQUIC. Ah no, it's not even there, but it will suck compared to DOH, DNS over HTTPS.
> For example, if Google was to deploy its public DNS service over DOH on www.google.com and a user configures their browser to use it, a network that wants (or is required) to stop it would have to effectively block all of Google (thanks to how they host their services).
Which will result in all of Google being blocked by schools, businesses, and entire nations. Which, as Google is relied upon more and more, means less access to things like mail, documents, news, messaging, video content, the Android platform, etc.
Nah, many of them can't -- won't -- block Google over this.
A huge number of them are absolutely reliant on Google, for things like (org-wide) Google Mail, Google Docs, ChromeBook deployments, and so on -- not to mention basic Google search.
[1] https://news.ycombinator.com/item?id=15890551