
The Effect of Network and Infrastructural Variables on SPDY’s Performance (2014)

23 points | chetanahuja | 10 years ago | arxiv.org

13 comments

[+] ck2 | 10 years ago | reply
https://docs.google.com/gview?url=http://arxiv.org/pdf/1401....

So would the same apply to HTTP/2.0, since they are so similar?

     In summary, we deduce that SPDY loses its performance gains as a
     website is sharded more. However, these negative results are not ubiquitous
     and vary remarkably depending on the number of page resources. This
     raises a few questions about SPDY deployment. Are the benefits enough for
     designers and admins to restructure their websites to reduce sharding? What
     about third party resources that cannot be consolidated, e.g. ads and social
     media widgets? Can SPDY be redesigned to multiplex across domains? Is
     proxy deployment [29] rewarding and feasible as a temporary solution? The
     success of SPDY (and thereupon HTTP/2.0) is likely to be dependent on
     the answers to precisely these questions.
[+] acdha | 10 years ago | reply
That would apply, but it's somewhat odd to see the tone of surprise about something that has been widely described as an optimization-turned-antipattern for SPDY/HTTP/2 since at least 2012 or so. I believe Chrome, at least, also coalesces connections: requests to hostnames that share the same IP (or SSL cert?) are collapsed onto an existing connection rather than opening new ones.

Opening all of those connections is also something of an anti-pattern even for HTTPS depending on how much data you're exchanging relative to the SSL handshake cost – see e.g.:

https://insouciant.org/tech/network-congestion-and-web-brows...
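
As a rough illustration of that tradeoff, here is a back-of-envelope sketch in Python. Every number (RTT, handshake round-trip counts, bandwidth, response size) is an assumption for illustration, and the model ignores slow start and certificate checks:

```python
# Back-of-envelope: time to fetch a small response over a fresh HTTPS
# connection vs. a reused one. All constants are illustrative assumptions.

RTT_S = 0.100            # assumed round-trip time: 100 ms (mobile-ish)
BANDWIDTH_BPS = 5e6      # assumed 5 Mbit/s downlink
RESPONSE_BYTES = 20_000  # a small resource, e.g. a script or image

def fetch_time(response_bytes, new_connection):
    rtts = 1.0  # request/response round trip
    if new_connection:
        rtts += 1.0  # TCP three-way handshake
        rtts += 2.0  # classic TLS handshake (2 RTTs, pre-TLS-1.3)
    transfer = response_bytes * 8 / BANDWIDTH_BPS
    return rtts * RTT_S + transfer

cold = fetch_time(RESPONSE_BYTES, new_connection=True)
warm = fetch_time(RESPONSE_BYTES, new_connection=False)
print(f"cold connection:   {cold * 1000:.0f} ms")
print(f"reused connection: {warm * 1000:.0f} ms")
```

With these assumptions the connection setup dominates: three extra round trips cost far more than actually transferring the 20 KB body, which is the sense in which opening many connections is an anti-pattern.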

[+] aavegmittal | 10 years ago | reply
hmm… this caught my eye… "Immediately, we see that SPDY is far more adversely affected by packet loss than HTTPS is. This has been anticipated in other work [29] but never before tested. It is also contrary to what has been reported in the SPDY white paper [2], which states that SPDY is better able to deal with loss."
[+] chetanahuja | 10 years ago | reply
Well, I've never really bought the claimed win from multiplexing HTTP transfers over a single TCP connection. Yes, congestion window management against one server works better now. But what about all the other transfers going over the same narrow mobile connection? TCP's head-of-line (HOL) blocking in the presence of even a tiny amount of packet loss is another sign of the almost total mobile blindness in the design of SPDY/HTTP-2.
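
As a toy illustration of that HOL-blocking concern, the following Python sketch models a single lost segment recovered only after a retransmission timeout. All timing constants are assumptions, and real TCP recovery (fast retransmit, SACK) is far more nuanced; the point is only the structural difference between sharing one connection and not:

```python
# Toy model of TCP head-of-line blocking under multiplexing. One segment
# is lost and recovered after a retransmission timeout (RTO). Constants
# are illustrative assumptions, not measurements.

RTT_S = 0.100            # assumed round-trip time
RTO_S = 0.300            # assumed retransmission timeout
SEGMENTS_PER_STREAM = 10
SEGMENT_TIME_S = 0.005   # assumed serialization time per segment

def completion_times(n_streams, multiplexed):
    """Per-stream completion time when stream 0 carries one lost segment.

    multiplexed=True: all streams share one TCP connection, so in-order
    delivery stalls *every* stream until the retransmit arrives.
    multiplexed=False: each stream has its own connection; only the
    stream with the lost segment stalls.
    """
    base = SEGMENTS_PER_STREAM * SEGMENT_TIME_S + RTT_S
    times = []
    for stream in range(n_streams):
        t = base
        if multiplexed or stream == 0:
            t += RTO_S  # wait out the retransmission timeout
        times.append(t)
    return times

shared = completion_times(6, multiplexed=True)
separate = completion_times(6, multiplexed=False)
print("one shared connection:", [f"{t:.3f}s" for t in shared])
print("six connections:      ", [f"{t:.3f}s" for t in separate])
```

Under this model a single loss delays all six multiplexed streams by the full RTO, while with separate connections five of the six finish unaffected.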
[+] KaiserPro | 10 years ago | reply
Yup. Each dropped packet pauses the entire connection until it's retransmitted.

Fast-forward to a time when the average web page is 10-100 MB in size [1], in around 5 to 10 years' time, and SPDY will be the bottleneck, not the network or the serving infrastructure.

Of course, five to ten years is about when HTTP/2 will start to see widespread adoption...

Multiplexed TCP is just not a good idea for high-bandwidth, low-latency file delivery. (HTTP is basically a very wordy file-system interface.)

If you look at any of the systems built for moving files around, they all either use a custom UDP protocol or many parallel TCP streams (or rely on being on a LAN).

[1] http://www.websiteoptimization.com/speed/tweak/average-web-p...
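
The rule-of-thumb Mathis model for a loss-limited TCP flow (throughput ≈ (MSS/RTT) * C/sqrt(p)) makes the many-streams argument concrete: each extra stream gets its own loss-limited ceiling, so aggregate throughput scales roughly with stream count until the link saturates. The RTT and loss figures below are assumptions:

```python
import math

# Steady-state TCP throughput per the Mathis et al. rule of thumb:
#   throughput ≈ (MSS / RTT) * (C / sqrt(p)),  C ≈ 1.22 for Reno-style TCP.
# Input figures are illustrative assumptions, not measurements.

MSS_BYTES = 1460
C = 1.22

def mathis_throughput_bps(rtt_s, loss_rate, n_streams=1):
    per_stream = (MSS_BYTES * 8 / rtt_s) * (C / math.sqrt(loss_rate))
    return n_streams * per_stream

rtt, p = 0.080, 0.01  # assumed 80 ms RTT, 1% packet loss
for n in (1, 4, 16):
    mbps = mathis_throughput_bps(rtt, p, n) / 1e6
    print(f"{n:2d} stream(s): {mbps:.1f} Mbit/s")
```

At 1% loss a single flow tops out below 2 Mbit/s regardless of link capacity, which is exactly why bulk file movers open many streams or sidestep TCP entirely.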

[+] rp248 | 10 years ago | reply
"Ironically the biggest sufferer is Google with a 20.2% increase in ToW"
[+] bexp | 10 years ago | reply
Indeed. I'm tired of reading articles about how awesome SPDY and HTTP/2 are. Why does nobody publish fair benchmarks on various networks with packet loss?