Why we still use HTTP is beyond me. And I don't mean the speed issues. Why have a protocol that's so complicated, when most of the things we need to build with it are either simpler or end up reimplementing parts of the protocol?
Could you elaborate on your issues with HTTP a bit? What kind of protocol would do a better job?
lmm|9 years ago
I do see a lot of application protocols tunnelled over HTTP that have no sane reason to be. Partly to work around terrible firewalls/routers/etc., but of course the willingness to work around those perpetuates their existence. E.g. one reason for the rise of Skype was that so many crappy routers couldn't handle SIP.
ue_|9 years ago
My friend once mentioned that FTP would be a good option, though I'm not sure why. I think they regarded HTTP as superfluous for what we use the web for.
jaimehrubiks|9 years ago
Basic HTTP is dead simple, it works, and it has many add-ons with backward compatibility (one can still use a basic HTTP client or server in most cases). There's even a new version, fully optimized for today's needs and in binary form.
mreithub|9 years ago
Minimal implementations of HTTP (and I'm strictly talking about the transport protocol, not about HTML, JS, ...) are dead simple and relatively easy to write.
Of course there's a ton of extensions (gzip compression, keepalive, chunks, websockets, ...), but if you simply need to 'add HTTP' to one of your projects (and for some reason none of the existing libraries can be used), it shouldn't take too many lines of code before you can serve a simple 'hello world' site.
On top of all that, it's dead simple to put any one of the many existing reverse proxies/load balancers in front of your custom HTTP server to add load balancing, authentication, or rate limiting (and all of those can be done in a standard way).
Furthermore, HTTP has the huge advantage of being readily available on pretty much every piece of hardware that has even the slightest idea of networking. Any new technology would have to fight a steep uphill battle to convince existing users to switch.
Have I mentioned that it's standardized and open?
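To make the "not too many lines of code" claim concrete, here is a rough sketch of such a bare-bones 'hello world' server in Python over a raw socket (illustrative only: no request parsing, no keep-alive, no error handling; the host and port are arbitrary choices):

```python
import socket

def build_response(body: bytes) -> bytes:
    # A minimal HTTP/1.1 response: status line, a few headers,
    # a blank line, then the body.
    return (
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: text/plain\r\n"
        b"Content-Length: " + str(len(body)).encode("ascii") + b"\r\n"
        b"Connection: close\r\n"
        b"\r\n" + body
    )

def serve(host: str = "127.0.0.1", port: int = 8080) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            with conn:
                conn.recv(4096)  # read (and ignore) the request
                conn.sendall(build_response(b"hello world\n"))
```

Point the browser at http://127.0.0.1:8080/ and it renders fine, which is exactly the "incomplete servers still work" property discussed below.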
shakna|9 years ago
Which are dead simple to construct, send, receive and parse.
Really.
For example, let's curl -I (fetch everything but the body) the spec: http://www.ietf.org/rfc/rfc7230.txt
HTTP/1.1 200 OK
Date: Tue, 24 Jan 2017 12:00:55 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Set-Cookie: __cfduid=df57c7720b704a40e4c3367bbe248771c1485259254; expires=Wed, 24-Jan-18 12:00:54 GMT; path=/; domain=.ietf.org; HttpOnly
Last-Modified: Sat, 07 Jun 2014 00:41:49 GMT
ETag: W/"3247b-4fb343e4dcd40-gzip"
Vary: Accept-Encoding
Strict-Transport-Security: max-age=31536000
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
CF-Cache-Status: EXPIRED
Expires: Tue, 24 Jan 2017 16:00:54 GMT
Cache-Control: public, max-age=14400
Server: cloudflare-nginx
CF-RAY: 326353e6a6a257a7-IAD
A bunch of newline- (CRLF-) separated key-value mappings. Some with their own mini-DSL (such as Set-Cookie).
It gives you a status message instantly, a date to check your cache against, a Content-Type and acceptable encodings for your parser, and a bunch of other values for your cache. All for free.
As for the body of the content? For a gzipped value like this, it's everything outside the header, until EOF. That's not quite as easy as when the Content-Length parameter is given, but hardly difficult to parse.
HTTP is easy.
In fact, HTTP is so easy that incomplete HTTP servers can still serve up real content, and browsers can still read it.
HTTPS is more complicated, though if you simply rely on certificate stores and CAs it becomes much easier. But HTTPS is a different protocol.
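The parsing step described above really is a few lines in any language. A naive sketch in Python, assuming the head of the response is already in memory and ignoring edge cases (duplicate headers, obsolete line folding):

```python
def parse_head(raw: bytes):
    # Split the response into head (status line + headers) and body
    # at the first blank line.
    head, _, body = raw.partition(b"\r\n\r\n")
    lines = head.decode("iso-8859-1").split("\r\n")
    status_line = lines[0]  # e.g. "HTTP/1.1 200 OK"
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return status_line, headers, body
```

Feeding it the response above gives a status_line of "HTTP/1.1 200 OK" and, for instance, headers["transfer-encoding"] == "chunked".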
motoboi|9 years ago
> As for the body of the content? For a gzipped value like this, it's everything outside the header, until EOF.
This is chunked and keep-alive. Things get a little trickier.
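Trickier, but not by much, as long as the whole body is in hand. A sketch of decoding a chunked body (ignoring trailers and incremental/streamed reads):

```python
# Decode a chunked-encoded HTTP body: each chunk is a hex length line,
# CRLF, that many bytes, CRLF; a zero-length chunk terminates the body.
def dechunk(data: bytes) -> bytes:
    out = bytearray()
    pos = 0
    while True:
        crlf = data.index(b"\r\n", pos)
        # The chunk-size line may carry extensions after ';' -- ignore them.
        size = int(data[pos:crlf].split(b";")[0], 16)
        if size == 0:
            return bytes(out)
        start = crlf + 2
        out += data[start:start + size]
        pos = start + size + 2  # skip the chunk's trailing CRLF
```

With keep-alive you additionally have to stop reading at the terminating zero-length chunk instead of waiting for the peer to close the connection, which is exactly why it's trickier than the read-until-EOF case.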
thrillgore|9 years ago
recrof|9 years ago
ainiriand|9 years ago