
HTTP/2 for TCP/IP Geeks

109 points | akerl_ | 11 years ago | daniel.haxx.se

36 comments

gizzlon | 11 years ago
Nice recap, mostly understandable with just the slides.

I was not aware that my Firefox now uses HTTP/2 when speaking with Google etc.; kind of cool.

You can check it with the Network Monitor (Ctrl-Shift-Q): reload the page, click on a request, and look for "Version" beneath "Status code".

yellowapple | 11 years ago
> HTTP/2 is used 10 times more than HTTP/1.0

Isn't the usage of HTTP/1.0, like, zero? Does anyone (other than maybe some minimal embedded applications that probably won't upgrade to HTTP/2 anyway) actually use HTTP/1.0 instead of HTTP/1.1?

bagder | 11 years ago
The usage share there referred to stats from Mozilla and Firefox 36, where HTTP/1.0 is seen in 1% of all HTTP responses compared to 10% for HTTP/2. / Daniel - author of those slides
FooBarWidget | 11 years ago
Nginx's proxy module uses HTTP/1.0 by default.
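This is why backends behind nginx often still see HTTP/1.0 requests: the `proxy_http_version` directive defaults to 1.0 and has to be raised explicitly. A minimal sketch of the relevant config (the `backend` upstream name is made up):

```nginx
location / {
    proxy_pass http://backend;
    # default is 1.0; 1.1 is needed for upstream keepalive and chunked responses
    proxy_http_version 1.1;
    # clear the Connection header so keepalive actually works upstream
    proxy_set_header Connection "";
}
```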
digi_owl | 11 years ago
Something of an aside, but I have come to understand that the major issue for adopting IPv6 was hardware.

This is true both of existing router installs and of endpoint devices that can't be flashed to grok IPv6.

azernik | 11 years ago
My impression (from seeing IPv6 support projects being regularly deprioritized at a previous employer) is that it's more of a market/deployment problem than a technological one. Layer 3 is a uniquely annoying place in the current networking stack, in that changes to it are useless until everything on the path between you and the other hosts you talk to speaks IPv6.

With upper-layer stuff like HTTP/2, you can get some return on your investment as long as two hosts that want to talk to each other, however distantly separated, speak the new protocol/features. With lower-layer stuff like 802.11ac or 1 Gb Ethernet, you can get most of the benefits just by upgrading the client hardware and infrastructure in a local setting. If you want to talk over IPv6, though, you need to upgrade clients, local infrastructure (e.g. in-home routers, corporate IT equipment), carrier routers, datacenter routers, and servers. In many cases, each of those things is controlled by a separate organization, and none of those organizations gets any benefit out of implementing IPv6 until everyone else along the chain does so too.

For example - let's say that you're providing some kind of web app or service from your own servers. There's a non-zero effort involved in setting up an IPv4/6 dual stack throughout your stack, so you first check if you can even get IPv6 connectivity. Using EC2? Nope - no IPv6. Using a leased server in a colo? Pretty good chance they won't even have it available as an option. So at that point, why even work on a feature that you're not going to get to use, and that, since it's not exercised by real traffic, will probably be full of bugs?

exelius | 11 years ago
Well, kind of... It's less about not being able to grok IPv6 and more about these routers having limited memory available. IPv6 addresses take up 4x as much memory as IPv4 addresses, and that adds up pretty quickly in a routing table. As routers are typically integrated devices, you can't just go "add more RAM", so sticking with IPv4 effectively quadruples your capacity.
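The 4x figure is just the address width: IPv4 addresses are 32 bits (4 bytes) and IPv6 addresses are 128 bits (16 bytes), which Python's `ipaddress` module makes easy to see (the example addresses are from the documentation ranges):

```python
import ipaddress

# packed size in bytes: 4 for IPv4, 16 for IPv6 -- a 4x difference per
# routing-table entry before you even count longer prefixes
v4 = ipaddress.ip_address("203.0.113.7")
v6 = ipaddress.ip_address("2001:db8::7")

print(len(v4.packed), v4.max_prefixlen)  # 4 32
print(len(v6.packed), v6.max_prefixlen)  # 16 128
```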
aidenn0 | 11 years ago
It's more complicated than that: nobody with an IPv4 address had an incentive to switch, and the IPv6 committee addressed the wrong problems in trying to encourage adoption. As IPv4 addresses become more valuable in the future, there will be a cost incentive to switch.

For another take on the matter, see http://cr.yp.to/djbdns/ipv6mess.html; in particular, a good summary of one issue is under "Another way to understand the costs".

rasz_pl | 11 years ago
The major issue for adopting IPv6 was feature bloat: everyone and their grandma wanted something in the standard, and we ended up with a complicated mess.
EvanPlaice | 11 years ago
While HTTP/2 will lead to some huge improvements in efficiency, it's going to take some time for the web to collectively forget a decade's worth of kludges otherwise known as 'best practices.'

Domain sharding, image spriting, script concatenation, lots of unnecessary intermediate caching, etc. All of these were created with one goal: faster page load times (via connection thrashing) meant more Google PR juice. All of them have hidden costs that add to development complexity and unnecessarily increase load on the server side. IMHO, all should die a fiery death.

If we're really shooting for a faster and more efficient web experience, HTTP/2 solves the greatest constraint on the back-end (i.e. multiplexing multiple requests over a single TCP connection).

What we need next is a change in priorities, defined by a new standard of 'best practices', such as:

1) Stop concatenating scripts

Assuming HTTP/2 and no one-request-per-asset constraint, what's the point? Sure, it may lead to a 10-20% increase in compression overall, but only if the user navigates to every page on your site.
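The compression gain from concatenation comes from two places: gzip's match window can exploit strings shared across files, and each separate stream pays its own header overhead. A rough illustration in Python, with made-up "script" contents:

```python
import gzip

# two hypothetical scripts that share a lot of vocabulary, as real
# modules in one codebase tend to
a = b"export function add(x, y) { if (typeof x !== 'number') throw new TypeError('x'); return x + y; }\n"
b = b"export function mul(x, y) { if (typeof y !== 'number') throw new TypeError('y'); return x * y; }\n"

concatenated = len(gzip.compress(a + b))
separate = len(gzip.compress(a)) + len(gzip.compress(b))

# the bundle compresses smaller: shared strings are encoded as
# back-references, and there is one gzip header instead of two
print(concatenated < separate)  # True
```

Whether that saving justifies invalidating the whole bundle on every one-line change is the trade-off the comment is questioning.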

2) Stop minifying scripts

Controversial? Maybe, but why are we intentionally creating human-unreadable gibberish for a modest gain when intermediate gzip compression leads to much greater gains?

I can't count the number of posts I've read where a developer gets excited about shaving 40KB off of their massive concatenated global.js soup (incl. jQuery, MooTools, Underscore, Angular, etc.) when they would have had much greater gains by optimizing image/media compression.

As a library developer, minifying sucks. For every version I release I have to produce another copy, run a second set of tests (to make sure minifying didn't break anything), and upload/host an additional file. In addition, any error traces I get from users of the minified version are essentially useless unless they take the time to download and test the non-minified version.

3) Quit loading common libraries locally

If I could teach every budding web developer one lesson, it would be how a local cache works, and that no variation of concatenation/compression will lead to a faster response than a warm cache.

Yes, loading 3rd-party code can potentially lead to an XSS/MITM attack (if you don't link via HTTPS). No, your local copy is not more robust than a globally distributed CDN. No, loading yet another unique copy of jQuery is not going to be faster than fetching the copy from a warm local cache.

The only justification for loading common libs locally is if the site operates on an air-gapped intranet.
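The mechanics behind the warm-cache argument are just HTTP caching headers: a CDN-served library with a long `max-age` is answered from the local cache without touching the network. A simplified, hypothetical sketch of the freshness check a browser performs (real caches also handle `no-cache`, validators, heuristics, etc.):

```python
from typing import Optional

def parse_max_age(cache_control: str) -> Optional[int]:
    """Extract max-age (in seconds) from a Cache-Control header, if present."""
    for directive in cache_control.split(","):
        name, _, value = directive.strip().partition("=")
        if name == "max-age" and value.isdigit():
            return int(value)
    return None

def is_fresh(age_seconds: int, cache_control: str) -> bool:
    """A cached response is reusable without revalidation while age < max-age."""
    max_age = parse_max_age(cache_control)
    return max_age is not None and age_seconds < max_age

# a typical header for a versioned library file on a CDN: one year, immutable
header = "public, max-age=31536000, immutable"
print(parse_max_age(header))    # 31536000
print(is_fresh(86400, header))  # True -- a day-old copy is served locally
```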

Google indirectly encouraged most of these 'best practices' by giving PR juice to sites with faster page load times.

It would be really interesting to see if/how the search engines will adjust their algorithms as new 'best practices' are established. I know they're starting to incentivize sites that use HTTPS.

What I'd really like to see is sites being penalized for making an unnecessarily large number of external domain requests with the exception of a whitelist of common CDNs.

As for TCP/IP: I'd really like to hear a sound technical justification for why the TCP checksum calculation is coupled to the IP header.
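For reference, the coupling is the TCP pseudo-header: RFC 793 defines the TCP checksum over a pseudo-header containing the source and destination IP addresses, the protocol number, and the TCP length, so a segment misdelivered by the IP layer fails verification at the endpoint. A sketch of the calculation, with made-up addresses and a minimal SYN header:

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    """Sum 16-bit words with end-around carry, per RFC 1071."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return total

def tcp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    # Pseudo-header: src addr, dst addr, zero byte, protocol 6 (TCP),
    # TCP length -- this is where the IP header leaks into TCP's checksum.
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(segment))
    return ~ones_complement_sum(pseudo + segment) & 0xFFFF

# Minimal 20-byte TCP header (SYN) with the checksum field (bytes 16-17) zeroed.
src = bytes([192, 0, 2, 1])
dst = bytes([198, 51, 100, 2])
header = struct.pack("!HHIIBBHHH", 12345, 80, 0, 0, 5 << 4, 0x02, 65535, 0, 0)
csum = tcp_checksum(src, dst, header)

# Verification: with the checksum filled in, the folded sum over
# pseudo-header + segment comes out to 0xFFFF.
filled = header[:16] + struct.pack("!H", csum) + header[18:]
pseudo = src + dst + struct.pack("!BBH", 0, 6, len(filled))
print(hex(ones_complement_sum(pseudo + filled)))  # 0xffff
```

The commonly cited downside of this design is exactly the coupling the comment complains about: NAT boxes rewriting IP addresses must also patch the TCP checksum.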

laumars | 11 years ago
> 2) Stop minifying scripts

> Controversial? Maybe, but why are we intentionally creating human unreadable gibberish for a modest gain when intermediate gzip compression leads to much greater gains

gzip compression is already widely supported; however, minification does still have savings (e.g. removing comments from code, which really have no purpose being transmitted in the first place).

What is actually counterproductive for compression is renaming all of your JavaScript variable names to single-character tokens. But that isn't something I've seen any JavaScript minifiers do.

I'd also challenge your point about "human unreadable gibberish": once the data leaves the server it doesn't need to be human-readable. It only needs to be human-readable at the development end, which is why minification only happens on live content hosted in the production environment.
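The comment-stripping point is easy to demonstrate: even after gzip, source with comments removed compresses smaller, since each comment still costs compressed bits to encode. A toy comparison in Python, with made-up "JavaScript" source:

```python
import gzip
import re

# toy source: 50 small functions, each with its own comment line
source = b"\n".join(
    b"// helper number %d: computes a slightly different thing each time\n"
    b"function f%d(x) { return x + %d; }" % (i, i, i)
    for i in range(50)
)
stripped = re.sub(rb"//[^\n]*", b"", source)

with_comments = len(gzip.compress(source))
without_comments = len(gzip.compress(stripped))
print(without_comments < with_comments)  # True
```

So both sides of the thread are right: gzip dominates the savings, but stripping comments (and other dead weight) before gzip still shaves off a bit more.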

toast0 | 11 years ago
> Yes, loading 3rd party code can potentially lead to a XSS MITM attack (if you don't link via HTTPS). No, your local copy is not more robust than a globally distributed CDN. No, loading yet another unique copy of jquery is not going to be faster than fetching the copy from a warm local cache.

Loading a local copy of jQuery is probably not going to be more robust than a globally distributed CDN, but when it fails, it's probably going to be at the same time as the rest of your site, so it's not breaking anything that's not already broken. Who knows what's going to happen to the jQuery CDN in 10 years when jQuery is no longer in style? What if they forget to renew the domain, or their domain registration is hijacked?