
HTTP/2.0 — Bad protocol, bad politics

219 points | ibotty | 11 years ago | queue.acm.org | reply

207 comments

[+] akerl_|11 years ago|reply
"HTTP/2.0 is not a technical masterpiece. It has layering violations, inconsistencies, needless complexity, bad compromises, misses a lot of ripe opportunities, etc."

I wish the article had spent more time talking about these things rather than rambling about "politics".

"HTTP/2.0 could have done away with cookies, replacing them instead with a client controlled session identifier."

That would have destroyed any hope of adoption by content providers and probably browsers.

"HTTP/2.0 will require a lot more computing power than HTTP/1.1 and thus cause increased CO2 pollution adding to climate change."

Citation? That said, I'm not particularly shocked that web standards aren't judged by the impact the devices using them will have on power grids, or by those grids' choice of energy sources.

"The proponents of HTTP/2.0 are also trying to use it as a lever for the "SSL anywhere" agenda, despite the fact that many HTTP applications have no need for, no desire for, or may even be legally banned from using encryption."

In the same paragraph, the author complains that HTTP/2.0 has no concern for privacy, and then that its proponents attempted to force encryption on everybody.

"There are even people who are legally barred from having privacy of communication: children, prisoners, financial traders, CIA analysts and so on."

This is so close to "think of the children" that I don't even know how to respond. The listed groups may have restrictions placed on them in certain settings that ensure their communications are monitored. But this doesn't prevent HTTP/2.0 with TLS from existing: there are a variety of other avenues by which their respective higher-ups can monitor the connections of those under their control.

[+] joosters|11 years ago|reply
"HTTP/2.0 could have done away with cookies, replacing them instead with a client controlled session identifier."

In fact there's no need for this to be tied to HTTP/2.0 at all. Alternate systems could be designed without regard to HTTP/1.x or HTTP/2.y; they just have to agree on some headers to use and when to set them.

Making these kinds of changes as part of a new version of HTTP would just add bloat to an already bloated spec; it is actually a good thing that the spec writers left this alone!

[+] imajes|11 years ago|reply
Re the environmental impact: a large share of networked servers are web servers transacting over the HTTP protocol (whether serving spiders or producers or whatever). If providing HTTP/2.0 takes more computing resources, it would indeed require more servers, and thus increase energy consumption.

But given how hard the large datacenter operators are working to build greener/smarter operations to reduce their impact (and costs), it's clearly something to be mindful of.

[+] nly|11 years ago|reply
> That would have destroyed any hope of adoption by content providers and probably browsers.

Bullshit; browser vendors have always led the way on web tech. Remember XMLHttpRequest (Microsoft)? How about JavaScript or SSL, both invented by Netscape? A proper session layer is inevitable for HTTP. Websites are no longer stateless, which makes HTTP an ill-suited protocol. We should just accept it and move on.

[+] ibotty|11 years ago|reply
> That would have destroyed any hope of adoption by content providers and probably browsers.

Why adoption by browsers?

[+] jlebar|11 years ago|reply
> The same browsers, ironically, treat self-signed certificates as if they were mortally dangerous, despite the fact that they offer secrecy at trivial cost. (Secrecy means that only you and the other party can decode what is being communicated. Privacy is secrecy with an identified or authenticated other party.)

I'm frustrated to read this myth being propagated. We should know better.

In the presence of only passive network attackers, sure, self-signed certs buy you something. But we know that the Internet is chock-full of powerful active attackers. It's not just NSA/GCHQ, but any ISP, including Comcast, Gogo, Starbucks, and a random network set up by a wardriver that your phone happened to auto-connect to. A self-signed cert buys you nothing unless you trust every party in the middle not to alter your traffic [1].

If you can't know whom you're talking to, the fact that your communications are private to you and that other party is useless.

I totally agree that the CA system has its flaws -- maybe you'll say that it's no better in practice than using self-signed certs, and you might be right -- but my point is that unauthenticated encryption is not useful as a widespread practice on the web.

Browser vendors got this one right.

[1] Unless you pin the cert, I suppose, and then the only opportunity to MITM you is your first connection to the server. But then either you can never change the cert, which is a non-option, or otherwise users will occasionally have to click through a scary warning like what ssh gives. Users will just click yes, and indeed that's the right thing to do in 99% of cases, but now your encryption scheme is worthless. Also, securing first connections is useful.

[+] djcapelis|11 years ago|reply
Yes, pinning the cert (i.e. TOFU, Trust On First Use) is exactly the right way to treat self-signed certificates, and under that model they offer real security. The idea that you can't do anything with self-signed certs, and that nothing makes them okay, is a much more troublesome untruth, IMO.

Rejecting self-signed certs and only allowing users the broken CA PKI model is the wrong choice. Browsers didn't get it right. The CA model is broken, it is actually being used to decrypt people's traffic, and though your browser might pin a couple of big sites, it won't protect the rest very well by default. It's a bad hack, and we should fix the underlying issue with the PKI. I believe moxie was right: a combination of perspectives + TOFU is the way to do this.

Something else that works like this, that we all rely on, and that generally seems more secure than most other things we use: SSH.
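For what it's worth, the TOFU model is simple enough to sketch in a few lines of Python. Everything here (the function name, the JSON fingerprint store) is illustrative, not any browser's actual mechanism; it just mirrors what ssh does with known_hosts:

```python
import hashlib
import json
import os

def tofu_check(host, der_cert, store_path="known_certs.json"):
    """Trust On First Use: remember a host's certificate fingerprint
    the first time we see it, then flag any later change -- the same
    model ssh uses with known_hosts."""
    fp = hashlib.sha256(der_cert).hexdigest()
    known = {}
    if os.path.exists(store_path):
        with open(store_path) as f:
            known = json.load(f)
    if host not in known:
        # First contact: trust the cert and record its fingerprint.
        known[host] = fp
        with open(store_path, "w") as f:
            json.dump(known, f)
        return "first-use"
    # Later contacts: the fingerprint must match what we recorded.
    return "ok" if known[host] == fp else "MISMATCH"
```

Under this model a self-signed cert protects you against everyone except an attacker who was already in the middle on first contact, which is exactly the SSH trade-off.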

[+] phkamp|11 years ago|reply
Self-signed certs are about making NSA & friends work for it, rather than giving them a free ride.

Today they can grep plaintext at will. With SSCs they would have to pinpoint which communications they really need to see.

That would be a major and totally free improvement in privacy for everybody.

As for why the browsers so consistently treat SSC's as ebola: I'm pretty sure NSA made that happen -- they would be stupid not to do so.

[+] blfr|11 years ago|reply
If there are already so many ways to fingerprint (via cookies, JavaScript, Flash, etc.) that it probably doesn't matter, then why do cookies matter? That the EU parliament decided to pass some weird law is not much of an argument; they're not exactly known for their technological proficiency. And it did nothing for privacy. It only annoys people [1].

Sure, we could start with cookies. However, that would break a lot of the web with no immediate benefit.

On SSL everywhere (not "anywhere"): how many resources does it cost to negotiate SSL/TLS with every single smartphone in their area? Supposedly, not much [2]. I run HTTPS websites on an Atom server.

Frankly, that was rather unconvincing, although it does seem likely that the entire process was driven by the IETF trying to stay politically relevant in the face of SPDY.

[1] https://github.com/r4vi/block-the-eu-cookie-shit-list

[2] https://istlsfastyet.com/#cpu-latency

[+] stingraycharles|11 years ago|reply
For what it's worth, the cookie law in the EU is far more general than cookies, and had a totally different origin. Originally, the law was meant to require explicit consent when installing something on an electronic device, to combat malware. However, some clever politicians later realized that setting a cookie also "installs" something on an electronic device, and thus the law thereafter known as the cookie law was passed.

If a webserver were to set some secure session identifier, the same law would still apply -- just as installing software without explicit consent is covered by it.

[+] debacle|11 years ago|reply
Cookies are a historical band-aid. The implementation is a mess, cross-domain cookies require all sorts of hacks, and there's nothing you can do with a cookie that you can't do with a session identifier.

Also, SSL gets way more complicated when you are using a CDN.

[+] zimbatm|11 years ago|reply
The article is basically a rant. I was hoping the author would go more into layer violation issues.

Most of the interesting stuff in HTTP/2.0 comes from the better multiplexing of requests over a single TCP connection. It feels like we would have been better off removing multiplexing from HTTP altogether and adopting SCTP instead of TCP for the lower transport. Or maybe he had other things in mind.

> There are even people who are legally barred from having privacy of communication: children, prisoners, financial traders, CIA analysts and so on.

This argument is quite weak: SSL can easily be MITMed if you control the host; just generate custom certs and make all the traffic go through your regulated proxy.

[+] aidenn0|11 years ago|reply
In the early days of SPDY there was an article comparing SPDY to HTTP/1.1 over SCTP. The short version was that, other than the lack of header compression, it got nearly all of the wins of SPDY, except:

1) Unless tunneled over UDP (which has its own problems), it failed to work with NATs and stateful firewalls.

2) It would not work in HTTPS-only environments (e.g. some big corporations); SPDY looks enough like HTTPS to fool most of these.

3) The lack of Windows and OS X support for SCTP (without installing a 3rd-party driver) means tunneling over UDP.

[+] ssalazar|11 years ago|reply
> SSL can easily be MITMed if you control the host

How is it MITM if you control the host? If you don't trust the host then you are hosed, period: there is no protocol that will save you.

[+] cromwellian|11 years ago|reply
I'll just leave this here: http://www.w3.org/Protocols/HTTP-NG/http-ng-status.html

In 1995, the process began to replace HTTP with a ground-up redesign. The HTTP-NG project went on for several years and failed. I have zero confidence that a ground-up protocol that completely replaces major features of the existing protocol used by millions of sites, and that would require substantial application-level changes (e.g. switching from cookies to some other mechanism), would a) get through a standards committee in 10 years and b) get implemented and deployed in a reasonable fashion.

We're far into 'worse is better' territory now. Technical masterpieces are the enemy of the good. It's unlikely HTTP is going to be replaced with a radical redesign, any more than TCP/IP is going to be replaced.

Reading PHK's writings, his big problem with HTTP/2 seems to be that it is not friendly to HTTP routers. So a consortium of people just approved a protocol that does not address the needs of his major passion, HTTP routers, and he wants a major design change to support that use case.

I think the only way HTTP is going to be changed in that way is if it is disrupted by some totally new paradigm, that comes from a new application platform/ecosystem, and not as an evolution of the Web. For example, perhaps some kind of Tor/FreeNet style system.

[+] stephen_g|11 years ago|reply
I really don't understand this bit:

> Local governments have no desire to spend resources negotiating SSL/TLS with every single smartphone in their area when things explode, rivers flood, or people are poisoned.

I remember some concerns about performance of TLS five to ten years ago, but these days is anybody really worried about that? I remember seeing some benchmarks (some from Google when they were making HTTPS default, as well as other people) that it hardly adds a percent of extra CPU or memory usage or something like that.

Also, these days HTTPS certificates can be had for similar prices to domains, and hopefully later this year the Let's Encrypt project should mean free high quality certificates are easily available.

With that in mind, forcing HTTPS is pretty much going to be only a good thing.

[+] joosters|11 years ago|reply
"Twenty-six years later, [...] the HTTP protocol is still the same."

Not true at all. Early HTTP (which became known as HTTP/0.9) was very primitive and very different from what is used today. It was five or six years until HTTP/1.0 emerged, with a format similar to what we have today.

[+] TazeTSchnitzel|11 years ago|reply
Thank you, I was going to point it out myself. Early HTTP was literally just this:

  GET /somepath

That's it. Nothing more (well, that and a CRLF), nothing less. The response was equally barren: just pure HTML. Existent page? HTML. Non-existent page? HTML error message. Plaintext file? HTML. (The text is wrapped in a <plaintext> tag.) Anything else? Probably HTML, though you could also deliver binary files this way (good luck reliably distinguishing HTML and binary without a MIME type)!

I actually like HTTP/0.9. If you're stuck in some weird programming language without an HTTP/1.1 client (HTTP/1.0 is useless because it lacks Host:, while HTTP/0.9 actually does support shared hosts, just use a fully-qualified URI) you can just open a TCP port to a web server and send a GET request the old-fashioned way.
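A minimal sketch of such an old-fashioned request in Python (the helper name is mine, and real HTTP/0.9 servers are nearly extinct, so this is mostly useful against your own test server):

```python
import socket

def http09_get(host, path, port=80, timeout=5):
    """Fetch a resource the HTTP/0.9 way: send a bare GET line,
    then read until the server closes the connection. No headers,
    no status line -- whatever comes back is the body."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall("GET {}\r\n".format(path).encode("ascii"))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:  # server closing the socket ends the response
                break
            chunks.append(data)
    return b"".join(chunks)
```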

[+] bayesianhorse|11 years ago|reply
After learning how public key infrastructure really works, I've become quite disillusioned with the security it seems to provide. After all, what use is a certification authority if basically any authoritarian state and intelligence service can get on that list? In some countries the distinction between government officials, spies, and organized crime is already extremely blurry...

[+] skywhopper|11 years ago|reply
Interesting that one of the primary developers of FreeBSD does not understand one of the two major use cases for SSL, namely identity assurance. News sites and local governments don't care about privacy when transmitting news or emergency information, true, but citizens should be concerned about making sure that information is coming from whom they think it's coming from.

I'm no fan of HTTP/2, but this article does not effectively argue against it. Too many bare assertions without any meat to them. And when you fail to mention a major purpose of a protocol (SSL) you dismiss as useless, you lose a lot of credibility.

[+] phkamp|11 years ago|reply
SSL does not provide identity assurance (= authentication); the CA cabal does. SSL just does the necessary math for you.

CAs are trojaned; that's documented over and over by bogus certs in the wild. So in practice you have no authentication when it comes down to it.

Authentication is probably the hardest thing for us, as citizens, to get, because all the intelligence agencies of the world will attempt to trojan it.

Secrecy on the other hand, we can have that trivially with self-signed certs, but for some reason browsers treat those as if they were carriers of Ebola.

[+] mst|11 years ago|reply
I would have found your comment rather more useful had you led with the technical point rather than an attack on the author.

[+] cflat|11 years ago|reply
Nitpick - It's HTTP/2 not HTTP/2.0

We've all learned from the failure of SNI and IPv6 to gain widespread adoption (thank you, Windows XP and Android 2.2). HTTP/2 has been designed with the absolute priority of graceful backward compatibility. This imposes limits and barriers on what you can do, but transparent and graceful backward compatibility will be essential for adoption.

I agree: HTTP/2 is better, not perfect. But better is still better.

[+] dragonwriter|11 years ago|reply
> The proponents of HTTP/2.0 are also trying to use it as a lever for the "SSL anywhere" agenda, despite the fact that many HTTP applications have no need for, no desire for, or may even be legally banned from using encryption.

What is the basis of this claim? ISTR that SPDY and the first drafts of HTTP/2 were TLS-only, and that some later drafts had provisions which either required or recommended TLS on public connections but supported unencrypted TCP for internal networks, but the current version seems to support TLS and unencrypted TCP equally.

[+] wmf|11 years ago|reply
Unencrypted HTTP/2 is a fake concession that isn't usable in the real world.

[+] teddyh|11 years ago|reply
As I keep having to mention: The omission of the use of SRV records is maddening, and the reasons given don’t make any sense.

https://news.ycombinator.com/item?id=8550133

https://news.ycombinator.com/item?id=8404788

[+] kstrauser|11 years ago|reply
I so much wish they would adopt SRV records. I've used them many times for load balancing internal HA web services and love the freedom they give you to specify failover tiers and push higher loads toward beefier servers.

Honestly, SRV records cover about 90% of the usage I've seen people deploy ZooKeeper or etcd for. I'd love to see them become the standard way of doing such things.
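As a sketch of what SRV buys you, the RFC 2782 selection rule (lowest-priority group first, weighted random within the group) fits in a few lines of Python. The record tuples here are made-up illustrative data, not a real DNS lookup:

```python
import random

def pick_srv_target(records):
    """Pick one target from SRV records per RFC 2782.
    Each record is a (priority, weight, port, target) tuple, e.g.
    what a lookup of _http._tcp.example.com might return."""
    # Only the lowest-priority group is eligible; higher-priority
    # values are failover tiers, used only if this group is down.
    lowest = min(prio for prio, _, _, _ in records)
    group = [r for r in records if r[0] == lowest]
    total = sum(weight for _, weight, _, _ in group)
    if total == 0:
        return random.choice(group)
    # Weighted random choice: beefier servers get bigger weights.
    pick = random.uniform(0, total)
    upto = 0.0
    for rec in group:
        upto += rec[1]
        if pick <= upto:
            return rec
    return group[-1]
```

That one function is the failover-tiers-plus-load-weighting behavior the comment above describes, done entirely in DNS.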

[+] tobz|11 years ago|reply
From the responses given to your linked comments, it seems like there was a technically valid reason not to use them: performance. You just kept pushing that it wasn't a good enough reason. Are you just going to keep posting and asking until people decide you're right and they're wrong?

[+] ak217|11 years ago|reply
The argument about computing power and CO2 pollution is misguided. HTTP/2 no longer requires encryption, so the TLS/non-TLS trade-offs remain the same as before (and their compute impact is mitigated by hardware AES support, etc.). The other relevant changes (SPDY, header compression, push) reduce the number of context switches and network round-trips required and the total time required for devices to spend in high power mode, and for the user to spend waiting. That results in a reduction, not increase, in total power consumption.

Taking server CPU utilization numbers as an indicator of total power consumption is pretty misguided in this context, and my understanding is that even those are optimized (and will continue to be optimized) to the point where TLS and SPDY have negligible overhead (or, in the case of SPDY, may even result in lower CPU usage).

[+] phkamp|11 years ago|reply
Show me the mainstream browsers that will use HTTP/2 without SSL/TLS?

The difference between you and me may be that I have spent a lot of time measuring computers' power usage doing all sorts of things. You seem to be mostly guessing?

[+] ibotty|11 years ago|reply
phk (poul henning kamp) is the lead developer of varnish, in case people are not familiar with him.

[+] bad_user|11 years ago|reply
"HTTP/2.0 could have done away with cookies, replacing them instead with a client controlled session identifier."

It doesn't make much sense to get rid of cookies alone, not when there are multiple ways of storing stuff in a user's browser, let alone for fingerprinting - http://samy.pl/evercookie/

Getting rid of cookies doesn't really help with privacy at this point; just wait until IPv6 becomes more widespread. Speaking of which, that EU requirement is totally stupid.

The author also makes the mistake of thinking that we need privacy protections only from the NSA and other global threats. That's not true; we also need privacy protections against local threats, such as your friendly local Internet provider, which can snoop on your traffic and even inject its own content into the web pages served. I've seen this practice several times, especially on open wifi networks. TLS/SSL isn't relevant only for authentication security, but also for ensuring that the content you receive is the content you asked for. It's also useful for preventing middle-men from seeing your traffic, such as your friendly network admin at the company you're working for.

For example, if I open this web page with plain HTTP, a middle-man can see that I'm reading a rant on HTTP/2.0, instead of seeing just a connection to queue.acm.org. From this they can immediately build a useful profile of me, because only somebody with software engineering skills would know about HTTP, let alone read a rant on the IETF's handling of version 2.0. They could also inject content, such as ads or a piece of JavaScript that tracks my movements or whatever. So what's that line about "many HTTP applications have no need for [SSL]" doing in a rant lamenting the state of privacy?

HTTP/2.0 probably has flaws, but this article is a rant about privacy, and I feel that it gets it wrong, as requiring encrypted connections is the thing I personally like about HTTP/2.0 and SPDY. Having TLS/SSL everywhere would also make it more costly for the likes of the NSA to do mass surveillance of users' traffic, so it would have benefits against global threats as well.

[+] peterwwillis|11 years ago|reply
"HTTP/2.0 will be SSL/TLS only"

Yes! Finally 99% of users won't be hacked by a default initial plaintext connection! We finally have safe(r) browsing.

", in at least three out of four of the major browsers,"

You had ONE JOB!

Jokes aside, privacy wasn't a consideration in this protocol. Mandatory encryption is really useful for security, but privacy is virtually unaffected. And the cookie thing isn't even needed; every browser today could implement a "click here to block cookies from all requests originating from this website" button.

We need the option to remove encryption. But it should be the opposite of what we currently do, which is to default to plaintext unless you type an extra magic letter into the address (which no user ever understands, and is still potentially insecure). We should be secure by default, but allow non-secure connections if you type an extra letter. Proxies could be handled this way by allowing content providers to explicitly mark content (or domains) as plaintext-accessible.

The problem I fear is that as everyone adopts HTTP/2 and HTTP/1.1 becomes obsolete (not syntactically, but as a strict protocol), it may no longer be possible to write a quick-and-dirty HTTP implementation. Before, I could use a telnet client on a router to test a website; now the router may need an encryption library, a binary protocol parser, and decompression and multiplexing routines just to get a line of text back.

[+] higherpurpose|11 years ago|reply
HTTPS can also be used to protect you from malware [1] [2] and stop censorship [3]. If anything, news sites should be among the first to adopt strong HTTPS connections since many people visit them and the news also needs to not be censored.

[1] https://citizenlab.org/2014/08/cat-video-and-the-death-of-cl...

[2] http://www.ap.org/Content/AP-In-The-News/2014/AP-Seattle-Tim...

[3] http://ben.balter.com/2015/01/06/https-all-the-things/

As for the performance side, SPDY is probably not perfect, but it seems to generally improve over current HTTP, even over a secure connection. And even if it didn't, HTTPS seems to add negligible overhead, and compared to the security it gives, I think it's well worth it.

https://www.httpvshttps.com/

[+] cnst|11 years ago|reply
A very well written rant.

HTTP is supposed to have had opportunistic encryption, as per RFC 7258 (Pervasive Monitoring Is an Attack, https://news.ycombinator.com/item?id=7963228), but it looks like the corporate overlords don't really understand why it is at all a problem for independent one-man projects to acquire and update certificates every year, for every little site.

As per a recent conversation with Ilya Grigorik over at nginxconf, Google's answer to the cost and/or maintenance issues of HTTPS: just use CloudFlare! Because letting one single party MITM the entire internet is so sane and secure, right?

[+] arielby|11 years ago|reply
What exactly is the difference between cookies and session identifiers? There's no law requiring you to send kilobytes of cookies (news.ycombinator.com gets by with a 22-byte cookie). Of course the way HTTP cookies handle ambient authority is rather imperfect, but that can be solved within the system.
[+] phkamp|11 years ago|reply
The difference is who makes the decisions: a session id is controlled by the client, cookies by the server.

Facebook, Twitter, etc. track you all over the internet with their cookies, even if you don't have an account with them, whenever a site puts up one of their icons for you to press "like".

With client-controlled session identifiers, users would get to choose whether they wanted that.

The reason YC gets by with 22 bytes is probably that they're not trying to turn the details of your life into their product.
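A sketch of how the client-controlled version could work (the class and header name are hypothetical, not anything in the HTTP/2 spec): the client mints one identifier per first-party origin and simply refuses to attach any identifier to third-party requests:

```python
import secrets

class ClientSessionStore:
    """Client-side session identifiers: the browser, not the
    server, decides what identifier (if any) accompanies a request."""

    def __init__(self):
        self._ids = {}

    def header_for(self, request_origin, first_party_origin):
        if request_origin != first_party_origin:
            # Third-party request (e.g. an embedded "like" button):
            # send nothing, so widgets cannot track you across sites.
            return {}
        sid = self._ids.setdefault(request_origin,
                                   secrets.token_urlsafe(16))
        return {"Session-ID": sid}  # hypothetical header name
```

The first-party server still gets a stable id for its own session handling; what disappears is the server's ability to plant and read identifiers in third-party contexts.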

[+] kalleboo|11 years ago|reply
> The so-called "multimedia business," which amounts to about 30% of all traffic on the net, expresses no desire to be forced to spend resources on pointless encryption.

I thought that "pointless encryption" was basically the definition of DRM? And the largest video site, traffic-wise (YouTube) is already encrypted.

[+] stingraycharles|11 years ago|reply
I thought Netflix was the largest video site, traffic-wise?