osth's comments

osth | 12 years ago | on: KORE – A fast SPDY-capable webserver for web development in C

I trust in theory this is true, but I've never personally observed this in practice.

I guess SPDY fans' marketing of this "feature" would be more convincing if I could see a demonstration.

I just don't see any noticeable delays when using pipelining.

What strikes me as peculiar about the interest in SPDY is that I never saw any interest in pipelining before SPDY. And I really doubt it was because of potential head of line blocking or lack of header compression. I think users just were not clued in about pipelining.

The speedup from pipelining versus not pipelining is, IME, enormous: 1 connection for 100 files versus 100 connections for 100 files. It is a huge efficiency gain.

Yet most users have never even heard of HTTP pipelining, or never tried it. If they really wanted such a big speed up, why wouldn't they use pipelining, or at least try it? Why wouldn't they demand that browsers implement it and turn it on by default?
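To make the "1 connection for 100 files" point concrete, here is a minimal sketch of what a pipelined request batch looks like on the wire. The host and file names are placeholders, not anything from the sites discussed here:

```python
# Sketch: build a pipelined batch of HTTP/1.1 GET requests that could
# all be written to ONE TCP connection, instead of opening one
# connection per file. Host and paths are hypothetical placeholders.

def pipelined_batch(host, paths):
    """Concatenate GET requests so they can be sent back-to-back on a
    single connection. Every request but the last asks the server to
    keep the connection alive; the last one closes it."""
    requests = []
    for i, path in enumerate(paths):
        conn = "close" if i == len(paths) - 1 else "keep-alive"
        requests.append(
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: {conn}\r\n"
            "\r\n"
        )
    return "".join(requests)

paths = [f"/file{i}.html" for i in range(100)]
batch = pipelined_batch("stupidwebsite.com", paths)
# 100 files: one connection with pipelining, 100 without.
```

The server then answers the requests in order on the same connection, which is exactly the behavior HTTP/1.1 keep-alive already provides.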

Users are being encouraged to jump right into SPDY, a very recent and relatively untested internal project of one company (e.g. see the CRIME attack), when most users, if not all, have never even experimented with basic pipelining, which has been around since the 1999 HTTP/1.1 spec and is supported via keep-alives in almost all web servers.

Noticeable speed gains would be seen if www pages were not so burdened with links to resources on external hosts. That's what's really slowing things down, as browsers make dozens of connections just to load a single page with little content. The speed gains from cutting out all that third-party host cruft would make any speed gains from avoiding theoretical potential head-of-line blocking during pipelining seem minuscule and hardly worth all the effort.

If you want to see how much pipelining speeds up getting many files from the same host, you do not need SPDY to do that. Web servers already have the support you need to do HTTP/1.1 pipelining. (Though on rare occasions site admins have keep-alives disabled, like HN for example. In effect these admins are saying, "Sorry, no pipelining for you.")

osth | 12 years ago | on: KORE – A fast SPDY-capable webserver for web development in C

I choose HTTP/1.1 pipelining. Uncompressed headers are useful. Responses come back in order (unlike SPDY), with "HTTP/1.1 200 OK" acting as the record separator. I've been using this for a decade. I can't see the benefit of SPDY.

Anyway, pipelining is only useful where numerous resources are coming from the same host. But the way the www has evolved, so much (unneeded) crap gets served from ad servers and CDNs. Pipelining isn't going to speed that up.

HTTP/1.1 pipelining was never broken. It was usually just turned off (e.g. in Firefox), while most web servers have their max keep-alive requests set around 100. In plain English, what does that mean? It means "Dear User, You have permission to download 100 files at a time from http://stupidwebsite.com. That is, you can make one request for 100 files, instead of 100 separate requests, each for a single file." And what do Firefox and other braindead web browsers do? They make a separate request for each file. But hey, never mind all those numerous connections to ad servers to retrieve marketing garbage (i.e. not the content you are after), let's concentrate on compressing HTTP headers instead. Brilliant.

It's trivial to use pipelining:

1. Feed your HTTP requests through netcat or some equivalent, saving the responses to a single concatenated file.

2. Split the concatenated file into separate files if desired.

3. View in your favorite browser.
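The split step relies on responses coming back in order, with each status line acting as the record separator. A minimal sketch of that step, using a fabricated sample blob (the technique assumes every response is a 200, as with the separator described above):

```python
# Sketch of the split step: break a concatenated pipelined response
# into individual records, using the status line as the separator.
# The sample data below is fabricated for illustration.

SEP = "HTTP/1.1 200 OK"

def split_responses(blob):
    """Split concatenated responses on the status line, preserving
    order. Assumes every response in the blob is a 200 OK."""
    parts = blob.split(SEP)
    # The first element is whatever preceded the first status line
    # (normally empty); drop it and re-attach the separator.
    return [SEP + p for p in parts[1:]]

blob = (
    "HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nfirst"
    "HTTP/1.1 200 OK\r\nContent-Length: 6\r\n\r\nsecond"
)
records = split_responses(blob)
# records[0] carries the first file, records[1] the second, in order.
```

A more careful splitter would honor Content-Length instead of scanning for the status line, but for fetching a batch of small files from one host this is the whole trick.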

No ad server BS.

Now that's "SPEEDY".

osth | 12 years ago | on: Restore the Fourth

That's a little scary. I would have figured smart programmers would know these things.

Most of the case law that has shaped this area of jurisprudence involves obvious criminals, mainly those who would be prosecuted for illegal drug possession. One could read all those cases, say, while in law school, and think "Why do we need to be so careful to observe the protections of the 4th Amendment? Aren't we just protecting drug dealers and other criminals? Aren't we just making the job of the police more difficult?" But one could also conclude that it is the Constitutional principles we are exercising such caution to protect, not the obvious criminals who sometimes might escape prosecution as a result of forcing police to "follow the rules".

In the context of modern telephone and internet surveillance (which in the coming decade or two will become one and the same, when AT&T has fully transitioned to TCP/IP), one might reason that there's little need to observe the 4th Amendment, as it only protects criminals, would-be criminals, or citizens with "something to hide". The net is widening.

Instead of the undesirable side effect of having guilty parties (e.g. drug dealers) get away because of the hassle to police of following the rules so as not to collect inadmissible evidence, it seems we are headed for a different sort of undesired side effect. When all evidence is by default "lawfully" collected (because it's so easy to collect and people have, over time, assented by failing to object), innocent parties are likely to get swept up in what will become a massive dragnet.
