Wired: How has your thinking about design changed over the past decades?
Brooks: When I first wrote The Mythical Man-Month in 1975, I counseled programmers to “throw the first version away,” then build a second one. By the 20th-anniversary edition, I realized that constant incremental iteration is a far sounder approach. You build a quick prototype and get it in front of users to see what they do with it. You will always be surprised.
It was revolutionary at the time but people have moved on and found many improvements to the original and also outright mistakes. (As the author seems to have acknowledged when releasing a new improved iteration of his book.)
I am not sure how well that applies to a protocol that is supposed to be codified in every single browser, web server and utility library in the world. The iteration cycle will be slow, and improvements cannot happen overnight.
Now, if the name was something like HTTP/1.8-alpha it might be a different thing. At least then it wouldn't carry the label of the "next big thing for everyone". It's sad, but names (and branding) do matter. Forcing a known-broken implementation upon the world is not exactly good engineering.
And I'm very afraid of the following phrase:
"we found out that there are numerous hard problems that SPDY doesn't even get close to solving, and that we will need to make some simplifications in the evolved HTTP concept if we ever want to solve them."
This usually means "ending up with a protocol that has a lot of corner cases and a lot of backward-compatible crap or maybe some half-baked stuff that was left because of some feature that nobody uses"
I am very skeptical of protocols/standards that are born from a committee (take a look at telephony protocols/standards if you doubt me).
> In the old days we had different protocols for different use cases. We had FTP and SSH and various protocols for RPC. Placing all our networking needs over HTTP was driven by the ubiquitous availability of HTTP stacks, and the need to circumvent firewalls. I don’t believe a single protocol can be optimal in all scenarios. So I believe we should work on the one where the pain is most obvious - the web - and avoid trying to solve everybody else’s problem.
If we're not careful, we're just going to end up cycling back around again and find ourselves 20 years in the past.
That said, I do think to some extent "that ship has sailed". The future of network programming seems like it will be "TCP --> HTTP -(upgraded connection)-> WebSockets --> actual application layer protocol". See, for example, STOMP over WebSockets. While it is annoying that this implies we've added a layer to the model, it's hard to argue with the real-world portability/ease of development that this all has enabled.
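The "upgraded connection" step above is itself plain HTTP/1.1. As a rough sketch (not a full client), this is the RFC 6455 handshake: the client sends an Upgrade request carrying a random key, and the server proves it understood by hashing that key with a fixed GUID. The key/accept pair used below is the sample from the RFC itself.

```python
import base64
import hashlib

# GUID fixed by RFC 6455; every conforming server uses this exact string.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def upgrade_request(host: str, path: str, key: str) -> str:
    """The plain HTTP/1.1 request that asks to switch protocols to WebSocket."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        f"Sec-WebSocket-Key: {key}\r\n"
        "Sec-WebSocket-Version: 13\r\n"
        "\r\n"
    )

def accept_token(key: str) -> str:
    """Value the server must echo in Sec-WebSocket-Accept to complete the upgrade."""
    digest = hashlib.sha1((key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Sample handshake values from RFC 6455, section 1.3.
key = "dGhlIHNhbXBsZSBub25jZQ=="
print(accept_token(key))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After the server's 101 response the connection stops being HTTP at all and carries WebSocket frames, which is exactly why this works through firewalls that only allow ports 80 and 443.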
The importance of firewall punching can't be overstated. There are plenty of end users in workplaces or on other people's wifi who find that all outgoing ports other than 80 and 443 are blocked. Yes, this is incredibly stupid, but they're not going to do anything about it.
It's not like WebSockets runs "over" HTTP, it just uses an HTTP-like handshake; besides that, it's just a simple framing protocol over TCP.
That's why you can use a simple proxy to "websocket-ify" applications that use plain-old raw TCP connections.
I am so glad there is at least one prominent name advocating this line, because I feel like this quote from another IETF discussion is becoming more and more relevant:
> Is there an IETF process in place for "The work we're doing would harm the Internet so maybe we should stop?" - http://www.ietf.org/mail-archive/web/trans/current/msg00238....
HTTP/2.0 has been rammed through much faster than is reasonable for the next revision of the bedrock of the web. It was always clearly a single-bid tender for ideas, with the call for proposals geared towards SPDY and the timeline too short for any reasonable possibility of a competitive idea to come up.
There has never been any good reason that SPDY could not co-evolve with HTTP as it had already been doing quite successfully. If it were truly the next step, that would have become clear soon enough. All jamming it through to HTTP/2.0 does is create a barrier to entry for similar co-evolved ideas to come about and compete on an even footing.
He wants radical change in the protocol, but when given the opportunity he submitted a (by his own admission) half-baked proposal. There's also the question of what a protocol like HTTP/2 means for his product.
Although HTTP/2 started from SPDY, it has evolved, and in different ways; see, for example, the framing comments in the thread the OP links to.
We need a better protocol for the web now. Yes, we could wait around longer for more discussion, but where did that get us with HTTP/1.1? I'd be quite happy if the IETF had just adopted SPDY lock, stock and barrel (and no, I don't work for Google).
"So it looks like HTTP 2 really needs (at least) two different profiles, one for web hosting/web browser users ("HTTP 2 is web scale!") and one for HTTP- as-a-substrate users. The latter should have (or more accurately should not have) multiple streams and multiplexing, flow control, priorities, reprioritisation and dependencies, mandatory payload compression, most types of header compression, and many others."
"First and foremost, it needs to be recognized that HTTP/2 has been designed from the start to primarily meet the needs of a very specific grouping of high volume web properties and browser implementations. There is very little evidence that ubiquitous use of the protocol is even a secondary consideration -- in fact, the "they can just keep using HTTP/1.1" mantra has been repeated quite often throughout many of the discussions here on this, usually as a way of brushing aside many of the concerns that have been raised. So be it. It's clear at this point that HTTP/2 is on a specific fixed path forward and that, for the kinds of use cases required by IoT, alternatives will need to be pursued."
An aside: I find it odd how HN users jump to agreement when a link to a single mailing-list message is posted, ignoring the other discussion on the thread. I think it's because the UI makes it hard to see the rest of the conversation (unlike, say, the comments UI on HN itself).
To put the comment and its author in context, Poul-Henning Kamp is the main developer of Varnish, a widely used high-performance standard-compliant HTTP cache.
PHK has experience of HTTP both from the server point of view (the main job of Varnish is acting as a fast HTTP server) and from the client point of view (Varnish acts as a client to the slow upstream HTTP servers).
As a side note, he also refrained for years from adding TLS support to Varnish after his review of OpenSSL and SSL in general (see https://www.varnish-cache.org/docs/trunk/phk/ssl.html ).
It does seem a little shocking that the WG chair is proposing last call while there's still serious discussion of things like dropping HPACK.
For those of us who don't follow this discussion in detail, why are you thinking the protocol needs to be scrapped outright rather than modified? Is it simply the complexity Greg Wilkins mentioned or are you really thinking about bigger philosophical changes like dropping cookies as we know them? Dropping HPACK seems like a great engineering call but that seems like a relatively minor change rather than starting over.
http://lists.w3.org/Archives/Public/ietf-http-wg/2014AprJun/... made me wonder what your standby plan would be – let the people who really need to care about performance use SPDY until a more ambitious HTTP 2.0 stabilizes? One of the concerns I have is that many people want performance now and it seems like HTTP 2.0 might turn into the next XHTML if it takes too long to emerge.
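For readers wondering what HPACK buys: both peers keep a synchronized table of header fields they have already seen, so repeated headers shrink to small indices on later requests. The sketch below is a toy illustration of that indexing idea only; real HPACK (RFC 7541) adds a static table, Huffman coding, eviction limits, and a binary wire format, none of which appear here.

```python
class ToyHeaderCodec:
    """One endpoint's view; sender and receiver each keep a synchronized table."""

    def __init__(self):
        self.table = []  # index -> (name, value); grows as fields are first seen

    def encode(self, headers):
        wire = []
        for field in headers:
            if field in self.table:
                wire.append(self.table.index(field))  # already known: send an index
            else:
                self.table.append(field)              # new: send the literal once
                wire.append(field)
        return wire

    def decode(self, wire):
        headers = []
        for item in wire:
            if isinstance(item, int):
                headers.append(self.table[item])      # look up a known field
            else:
                self.table.append(item)               # learn a new field
                headers.append(item)
        return headers

sender, receiver = ToyHeaderCodec(), ToyHeaderCodec()
headers = [(":method", "GET"), (":path", "/"), ("user-agent", "demo")]
first = sender.encode(headers)   # first request: all literals
second = sender.encode(headers)  # second request: pure indices, [0, 1, 2]
```

The statefulness is also why HPACK is contentious: both ends must stay in lockstep for every request on the connection, which is a real burden for simple proxies and tiny embedded stacks.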
Of course we don't live in Plan 9's 9P world, and I don't think we ever will, but if you think about it, it makes a lot more sense, in every aspect. 9P could make lots of troubled, tied (think XML standards such as WebDAV) or historical standards (FTP, NFS) obsolete. It is sane, simple, fast and secure because it is just a stream of bytes, presented as a filesystem. There is no need for tons of library code. And http/0.2 could be backwards compatible. With http/0.2 you could also have session IDs. Besides that, with a mounted httpfs there is no absolute need for a browser: you could use the standard command-line tools, although the browser is going to be used in almost every case.
All I want to say is that I agree with your ideas. HTTP/2 is probably going to be around for a long time, so thinking it over and starting from scratch would be a good idea IMO. With 9P it could be a real dealmaker.
I think it makes sense to take the time to get it right. Every version of HTTP will effectively have to be supported by just about every server and client forever or otherwise the web will break.
An incredible aspect of the web is that Tim Berners-Lee's first website back at CERN still works in modern browsers. Same with things like basically the entire Geocities archive.
When it gets to core infrastructure like HTTP you can't just iterate quickly and expect the entire internet to constantly upgrade along with you.
What works for early stage lean startups won't work here.
What an awful way to try and get a point across. An aggressive tone and negative words baked into every other sentence, making the statements very loaded. There's probably a lot of missing context from viewing only this link, though.
I think that one of the best things about HTTP/1.0 (and to a lesser extent 1.1) is its simplicity. The reason, to me, that that simplicity is so vital is because it has fostered large amounts of innovation.
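That simplicity is concrete: you can speak HTTP/1.0 by hand with nothing but a socket. A small self-contained sketch (the throwaway `Hello` handler and the local stdlib server exist only for the demo):

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

# Spin up a local server on an ephemeral port, purely for the demonstration.
server = HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()

# An entire HTTP/1.0 exchange is human-readable text:
# one request line, a header, a blank line.
with socket.create_connection(server.server_address) as sock:
    sock.sendall(b"GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

server.shutdown()
print(response.decode().splitlines()[0])  # HTTP/1.0 200 OK
```

You can read the whole exchange with your eyes; debugging the equivalent HTTP/2 exchange requires a binary frame decoder.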
It should be noted that the sender of this e-mail is Poul-Henning Kamp, known among other things for another e-mail from back in the day (relatively speaking): http://bikeshed.com/
Mobile is one of them, although the OP's arguments are valid as well. (http://conferences.sigcomm.org/co-next/2013/program/p303.pdf)
Also, among many other things: everything over SSL, and a single connection.
Also, less of a technical problem, but as many have already mentioned: it's too complicated compared to plain-text, human-readable HTTP 1.x.
Would it be accurate to suggest that the rushing of Google's SPDY, as HTTP/2.0, through IETF standardisation is roughly equivalent to the situation a few years ago when Microsoft pushed Office Open XML through as an ECMA standard? Or is that just a huge mischaracterisation?
There are a few different implementations of SPDY, and a clear use case where it applies. Also, it's a clear standard, made to be used.
What's happening here is that there is a group of very active people who create most of the software we use on the web and have a use case they want to support. At the same time, there are lots and lots of people who are not as active, with a huge number of use cases that will be hindered; but since they are not active, they have very little voice.
I'm in favor of dumping HTTP/S and using a faster and more secure (by default) Transport protocol altogether. Post Snowden revelations we should be focusing our energy on that, rather than continuing to hack around this old protocol to make it faster and more secure.
MinimaLT [1] comes to mind. Minimal latency through better security sounds very appealing, especially when it's not a marketing trick but a paper signed by people like DJB.
[1]: http://cr.yp.to/tcpip/minimalt-20130522.pdf
The way I see it though, it's not only about having a protocol but about how to get adoption. Especially when you're talking about network protocols, you need rock-solid stacks in all major operating systems, which is not an easy feat to accomplish.
Well, SPDY might be a "prototype", but it's solving real problems today. I care less whether it's perfect or solves every problem, as long as it's easy to implement, has a decent footprint, and offers significant improvements over HTTP/1.1. An imperfect working prototype is better than a perfect blueprint that materializes in a distant future where the problems and environment may differ greatly.
Furthermore, if http/3.0 is already being discussed, why not just skip http/2.0 entirely, and live with the current http/1.1+SPDY situation until the work towards a new standard for http is actually done?
But then you need to support 1.0, 1.1, 2.0 and 3.0. I'd rather wait a bit to avoid the need to support yet another version. I can bet every new version of HTTP will cost millions of dollars across the industry. It is not agile development where you just drop in a new increment: this is the base everybody then needs to support, and you won't be able to stop supporting it in the foreseeable future.
Sounds to me like the real problem is lack of IP addresses, and the best strategy would be to hold off on updating HTTP and work on IPv6 ubiquity first. I can see why Google went a different route, but we don't all have to follow.
Just like they admitted that XHTML 2 was a mistake and scrapped it, I feel they should do the same with this nonsense.
Nothing about SPDY or HTTP/2.0 inspires any sort of confidence with regard to proper, robust protocol design, keeping things simple, or properly separating concerns.
It's funny that you mention XHTML 2, because I think it demonstrates the opposite of what you are arguing.
XHTML 2 is a lot more like what PHK is proposing: an attempt to "rethink" HTML, come up with something simpler, revolutionary rather than evolutionary, "The Right Thing." It was an attempt to reinvent the space from first principles, and had lots of ideas that were theoretically good but unproven at large scale.
When that went nowhere, the world settled on HTML5: evolutionary, incremental, and based on standardizing existing practice. Much less sexy, but more useful in practice.
There is a time and a place for bold new ideas, but a standards body designing v2 of a protocol isn't it. Standards are for codifying proven ideas. When standards bodies try to innovate you end up with XHTML, VRML, P3P, SPARQL, etc.
> Just like they admitted that XHTML 2 was a mistake and scrapped it
But "they" (the W3C) didn't do that when people were just complaining about issues with the XHTML 2.0 approach; they did it after a competing approach was developed via an extensive, multi-year process through an outside group (WHATWG), and even then only after a short period when both approaches were the focus of official W3C working groups.
They didn't adopt a "this is limited, let's throw it away and start over" approach as the original article here calls for with regard to HTTP/2.0.
You know what we need? We need to pick one of those people and give 'em one day to invent HTTP/2.0, and it'll be a better spec than letting them all "decide" together by nerd-fighting each other into eternity.
No standard is perfect, but the worst standard is no standard.
Make up your fucking mind already.