item 6088992

BufferBloat: What's Wrong with the Internet? (2011)

78 points | ColinWright | 12 years ago | queue.acm.org | reply

26 comments

[+] cliveowen|12 years ago|reply
Old story; the only thing I'd like to know is whether a solution has been found. I thought a RED-based algorithm was the way to go, but has it actually been implemented in modern network switches/routers?
[+] Tobu|12 years ago|reply
RED has too many tunables and overfits for a single traffic profile. Fair queuing CoDel just needs a bit more testing to become the Linux default (the current default is a fifo, which is horrible but simple). In addition to that, a lot more work is required to eliminate dark buffers in wireless drivers and other places.
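For context, a sketch of how the default can be switched on Linux once fq_codel is available (the `net.core.default_qdisc` sysctl appeared in kernel 3.12, somewhat after this thread; requires root):

```shell
# Make fq_codel the default qdisc for newly created interfaces
# (Linux 3.12+, where net.core.default_qdisc exists); requires root.
sysctl -w net.core.default_qdisc=fq_codel

# Persist across reboots:
echo 'net.core.default_qdisc=fq_codel' >> /etc/sysctl.d/10-qdisc.conf
```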
[+] Judson|12 years ago|reply
Does anyone have an idea about how this fits in with Cloudflare's new fourth generation servers[0] which up the network card buffers to 16MB from 512KB?

[0]:http://blog.cloudflare.com/a-tour-inside-cloudflares-latest-...

[+] Tobu|12 years ago|reply
The cards have two links that do 10Gbps in each direction. The 16MB buffer will hold 7ms at worst (double if the links are unbalanced and the buffers shared), which might be okay for web content. Frankly, CloudFlare should focus on making their DDoS filter more real-time and get over their fear of dropped packets.
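That worst-case figure can be sanity-checked with back-of-the-envelope arithmetic (taking MB as 10^6 bytes and assuming the buffer drains across both 10Gbps links):

```shell
# Worst-case queuing delay of a 16 MB buffer shared across 2 x 10 Gbps:
# delay_ms = buffer_bits / rate_bps * 1000
awk 'BEGIN { printf "%.1f ms\n", (16 * 8 * 10^6) / (20 * 10^9) * 1000 }'
# → 6.4 ms
```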
[+] Dylan16807|12 years ago|reply
Other than the 95% chance this article was posted in response to that one?
[+] rd2c2|12 years ago|reply
Running the Netalyzr tool mentioned in the article on a residential BT Broadband ADSL connection in the UK gives several warnings about unexpected DNS lookups. Checking manually, there is, indeed, some evidence that BT are running a man-in-the-middle attack on DNS requests. Has anyone else noticed this?

  $ dig @8.8.8.8 www.google.com

  [snip]
  
  ;; QUESTION SECTION:
  ;www.google.com.			IN	A

  ;; ANSWER SECTION:
  www.google.com.		2	IN	A	31.55.163.185
  www.google.com.		2	IN	A	31.55.163.184

  [snip]

  ;; Query time: 47 msec
  ;; SERVER: 8.8.8.8#53(8.8.8.8)
  ;; WHEN: Tue Jul 23 14:31:49 2013
  ;; MSG SIZE  rcvd: 160
However, the IP range 31.55.162.0 - 31.55.163.255 is owned by "BT Public Internet Service". This strikes me as odd.

8.8.8.8 is Google's public DNS server. Either their servers are resolving www.google.com to a BT owned IP address (perhaps for requests coming from the BT network - which does seem unlikely), or somewhere in between my machine and 8.8.8.8 there's something intercepting the DNS request and spoofing the reply.

If so, I wonder what they're trying to achieve. HTTP traffic to Google redirects to HTTPS by default, and Chrome has HTTPS pinning for the site. If the reports in the newspapers that David Cameron is trying to involve himself in pornographic Google search terms are true then he's not going about it particularly effectively.

[+] samcrawford|12 years ago|reply
This is not terribly uncommon (although not widely talked about). It'll almost certainly be a Google Global Cache setup (https://peering.google.com/about/ggc.html).

You might find it's Google, not BT, that is sending you to a different set of servers depending on your source address (EDIT: likely using EDNS Client Subnet, so you could test this from a non-BT host using a specially crafted DNS query).
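As a sketch of that test (assuming a dig build recent enough to support the `+subnet` EDNS Client Subnet option; the /24 is taken from the answers quoted above):

```shell
# Ask Google Public DNS to resolve as if the client were on BT's range,
# then compare against a query using your real source address.
dig @8.8.8.8 www.google.com +subnet=31.55.163.0/24 +short
dig @8.8.8.8 www.google.com +short
```

If the first query returns the 31.55.x.x addresses from a non-BT host, that points at Google's per-network steering rather than interception on BT's side.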

[+] virtuallynathan|12 years ago|reply
You can fix the problem in your home network by using a recent OpenWrt release, installing qos-scripts, and capping your connection to ~10-20% below your provisioned speeds; this will enable fq_codel.

You can also change your tc scheduler to fq_codel on Linux kernel 3.5 or greater.
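A minimal sketch of that tc change (the interface name eth0 is an assumption; substitute your WAN-facing interface, and note it needs root and a kernel with fq_codel):

```shell
# Replace the root qdisc on the WAN-facing interface with fq_codel:
tc qdisc replace dev eth0 root fq_codel

# Verify the active qdisc:
tc qdisc show dev eth0
```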

[+] vdm|12 years ago|reply
A TP-Link WDR3500, connected to a dedicated cable or DSL modem (HG612 or 2Wire 2700), is a cheap way to do this and get decent Atheros dual-band wifi.
[+] jrochkind1|12 years ago|reply
I realize that the actual correct solution, in the opinion of those in the discussion, is new algorithms involving dynamic adjustment of buffers and such (over-simplification, I'm sure).

But in the meantime, is there any way a broadband internet customer can somehow manually adjust the buffers on their router?

Using the Netalyzr tool they mentioned, it suggests that my buffer is much too high (and I have indeed been having horrible network performance lately): "We estimate your uplink as having 4300 ms of buffering. This is quite high, and you may experience substantial disruption"
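That 4300 ms estimate can be converted to an implied buffer size once you assume an uplink rate (the 1 Mbps figure below is purely an assumption for illustration, typical of ADSL uplinks of the era):

```shell
# Buffer size implied by 4300 ms of queuing at an assumed 1 Mbps uplink:
# bytes = delay_s * rate_bps / 8
awk 'BEGIN { printf "%.1f KB\n", 4.3 * 10^6 / 8 / 1000 }'
# → 537.5 KB
```

Stock router firmware rarely exposes a knob for this, which is why the sibling replies point at OpenWrt/CeroWrt builds instead.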

[+] npsimons|12 years ago|reply
It's not a perfect end-to-end solution, but you may be interested in CeroWrt: http://www.bufferbloat.net/projects/cerowrt

I've got a WNDR3800 which I've sadly not had time to flash and play with yet. I was also fortunate enough to go to Jim Gettys' talk at the Linux Plumbers Conference in 2012, and you can find a similar video on YouTube fairly easily (I think it's also linked off the previous website).

This is one of those projects where they could also use the help, just in case anyone out there was looking for a project ;)

[+] dmm|12 years ago|reply
One thing you could do is remove any slow links on your network. For most people this will be their wifi. For example, if your WAN connection is 50Mbps and your wifi is 30Mbps, oversized buffers will cause latency at that wifi hop. But if you switch the wifi to 802.11n and get a consistent 50Mbps, the buffers won't get in the way.

Getting a faster net connection would similarly push the problem further upstream.

Also, if you get a Netgear WNDR3700v2 or WNDR3800 you can run the CeroWrt firmware, which is being developed as a platform for algorithms to solve bufferbloat.

[+] rpledge|12 years ago|reply
Steve Gibson did a great job explaining this last year on his podcast http://twit.tv/show/security-now/345
[+] millerm|12 years ago|reply
Darn, you beat me to it! Recommended! It was a good one. I'd like to add that if you want to read the transcripts or get an mp3 of it, the archive page is here:

https://www.grc.com/securitynow.htm

[+] osth|12 years ago|reply
VJ: "... Yet the economics of the internet tends to ensure..."

This may be the problem. Change the economics, solve the problem. Specifically, do away with the idea of "backbones" for ordinary users. Leave the backbones to research and military networks. That's what they were originally designed for.

Make the (people's) internet more like Baran's original idea. His diagrams did not have backbones. They looked more like "mesh".

A true mesh internet might mean slower speeds for its users, but that design will also reduce latency compared to our current "backboned" internet because there will be fewer "fast to slow" transitions (assuming users all have more or less the same capacity for moving packets).

[+] mcguire|12 years ago|reply
"Van Jacobson, ...[c]onsidered one of the world's leading authorities on TCP, he helped develop the RED (random early detection) queue management algorithm that has been widely credited with allowing the Internet to grow and meet ever-increasing throughput demands over the years."

Huh? Did RED ever see wide deployment?

[+] fecak|12 years ago|reply
ESR spoke to this problem (rather briefly) at a meeting of the Philly Java User Group in the spring of 2012. Fast forward to the 42:00 mark. http://youtu.be/1b17ggwkR60