Old story; the only thing I'd like to know is whether a solution has been found. I thought a RED-based algorithm was the way to go, but has it actually been implemented in modern network switches/routers?
CoDel is the solution that has come out of all of this: http://www.bufferbloat.net/projects/codel/wiki - it can operate without any tuning, as it starts dropping packets when they spend too much time in the queue.
RED has too many tunables and overfits to a single traffic profile. Fair-queuing CoDel just needs a bit more testing to become the Linux default (the current default is a FIFO, which is horrible but simple). Beyond that, a lot more work is required to eliminate dark buffers in wireless drivers and other places.
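For the curious, the heart of CoDel is small enough to sketch. Below is a toy model of the dequeue-side logic (not the kernel implementation, and with the real algorithm's details simplified): track how long each packet sat in the queue, and once that sojourn time has stayed above a ~5 ms target for a full ~100 ms interval, start dropping, with successive drops accelerating as interval/sqrt(count):

```python
import math
from collections import deque

TARGET = 0.005    # 5 ms acceptable standing-queue delay (CoDel default)
INTERVAL = 0.100  # 100 ms observation window (CoDel default)

class CoDelQueue:
    """Toy CoDel: drop at dequeue when packets have spent more than
    TARGET in the queue continuously for at least INTERVAL."""
    def __init__(self):
        self.q = deque()          # entries are (enqueue_time, packet)
        self.first_above = None   # deadline after sojourn first exceeded TARGET
        self.dropping = False
        self.count = 0            # drops in the current dropping state
        self.drop_next = 0.0      # time of the next scheduled drop

    def enqueue(self, now, pkt):
        self.q.append((now, pkt))

    def dequeue(self, now):
        while self.q:
            t_enq, pkt = self.q.popleft()
            sojourn = now - t_enq
            if sojourn < TARGET:
                # Queue is draining fine; leave the dropping state.
                self.first_above = None
                self.dropping = False
                return pkt
            if self.first_above is None:
                # First packet over TARGET: start the INTERVAL clock.
                self.first_above = now + INTERVAL
                return pkt
            if not self.dropping and now >= self.first_above:
                # Sojourn stayed above TARGET for a whole INTERVAL: drop.
                self.dropping = True
                self.count = 1
                self.drop_next = now + INTERVAL / math.sqrt(self.count)
                continue  # pkt is dropped; try the next one
            if self.dropping and now >= self.drop_next:
                # Still over TARGET: drop again, faster each time.
                self.count += 1
                self.drop_next = now + INTERVAL / math.sqrt(self.count)
                continue  # pkt is dropped
            return pkt
        return None
```

The point of the sketch is the "no tunables" property: the only constants are the target delay and the interval, both of which are delay figures rather than queue lengths, so they don't need retuning per link speed.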
Does anyone have an idea about how this fits in with Cloudflare's new fourth generation servers[0] which up the network card buffers to 16MB from 512KB?
The cards have two links that do 10Gbps in each direction. The 16MB buffer will hold 7ms at worst (double that if the links are unbalanced and the buffers shared), which might be okay for web content. Frankly, CloudFlare should focus on making their DDoS filter more real-time and get over their fear of dropped packets.
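The arithmetic behind that estimate, for anyone checking: worst-case added latency is just buffer size divided by drain rate, using the figures from the comment above (16MB split across two 10Gbps links):

```python
# Worst-case drain time of a NIC buffer: time = size / rate.
buffer_bits = 16 * 10**6 * 8   # 16 MB of buffer, in bits
link_rate = 10 * 10**9         # one 10 Gbps link

# Whole buffer stuck behind a single link (unbalanced/shared case):
print(buffer_bits / link_rate * 1000)       # 12.8 ms

# Buffer split evenly across the card's two links:
print(buffer_bits / 2 / link_rate * 1000)   # 6.4 ms, i.e. ~7 ms
```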
Running the Netalyzr tool mentioned in the article on a residential BT Broadband ADSL connection in the UK gives several warnings about unexpected DNS lookups. Checking manually, there is, indeed, some evidence that BT are running a man-in-the-middle attack on DNS requests. Has anyone else noticed this?
$ dig @8.8.8.8 www.google.com
[snip]
;; QUESTION SECTION:
;www.google.com. IN A
;; ANSWER SECTION:
www.google.com. 2 IN A 31.55.163.185
www.google.com. 2 IN A 31.55.163.184
[snip]
;; Query time: 47 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Jul 23 14:31:49 2013
;; MSG SIZE rcvd: 160
However, the IP range 31.55.162.0 - 31.55.163.255 is owned by "BT Public Internet Service". This strikes me as odd.
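For reference, that whois range is exactly the /23 network 31.55.162.0/23, and both addresses in the answer section fall inside it; a couple of lines of stdlib Python confirms:

```python
import ipaddress

# The whois range 31.55.162.0 - 31.55.163.255 is exactly 31.55.162.0/23.
bt_range = ipaddress.ip_network("31.55.162.0/23")
answers = [ipaddress.ip_address("31.55.163.185"),
           ipaddress.ip_address("31.55.163.184")]
print(all(a in bt_range for a in answers))  # True
```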
8.8.8.8 is Google's public DNS server. Either their servers are resolving www.google.com to a BT owned IP address (perhaps for requests coming from the BT network - which does seem unlikely), or somewhere in between my machine and 8.8.8.8 there's something intercepting the DNS request and spoofing the reply.
If so, I wonder what they're trying to achieve. HTTP traffic to Google redirects to HTTPS by default, and Chrome has HTTPS pinning for the site. If the reports in the newspapers that David Cameron is trying to involve himself in pornographic Google search terms are true then he's not going about it particularly effectively.
This is not terribly uncommon (although not widely talked about). It'll almost certainly be a Google Global Cache setup (https://peering.google.com/about/ggc.html).
You might find it's Google, not BT, that is sending you to a different set of servers depending on your source address (EDIT: likely using EDNS, so you could test this from a non-BT host using a specially crafted DNS query).
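One sketch of such a "specially crafted query", assuming the EDNS Client Subnet extension is what's in play here: build the query yourself with an ECS option claiming a BT source prefix, then send the bytes to 8.8.8.8 over UDP/53 from any network. The function and constants below are illustrative (stdlib only); recent versions of dig can do the same thing with `dig @8.8.8.8 www.google.com +subnet=31.55.0.0/16`.

```python
import struct

def build_query_with_client_subnet(qname, subnet_bytes, prefix_len):
    """Build a DNS A query for qname carrying an EDNS Client Subnet
    option, asking the resolver 'what would clients in this subnet
    get?'. Returns raw bytes ready to send over UDP port 53."""
    header = struct.pack("!HHHHHH",
                         0x1234,  # query ID (arbitrary)
                         0x0100,  # flags: RD (recursion desired)
                         1,       # QDCOUNT: one question
                         0, 0,    # ANCOUNT, NSCOUNT
                         1)       # ARCOUNT: the OPT pseudo-record
    question = b"".join(bytes([len(l)]) + l.encode()
                        for l in qname.split("."))
    question += b"\x00" + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN

    # Client Subnet option: code 8, family 1 (IPv4), scope 0.
    ecs = struct.pack("!HHHBB", 8, 4 + len(subnet_bytes),
                      1, prefix_len, 0) + subnet_bytes
    # OPT RR: root name, type 41, class = UDP payload size, TTL 0.
    opt = b"\x00" + struct.pack("!HHIH", 41, 4096, 0, len(ecs)) + ecs
    return header + question + opt

# Hypothetical probe: what would clients in 31.55.0.0/16 be told?
pkt = build_query_with_client_subnet("www.google.com", bytes([31, 55]), 16)
```

If the answers change with the claimed subnet, it's Google's resolver doing topology-aware answers rather than BT rewriting packets in flight.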
For large ISPs, Google will send them a rack to put in their facilities, which essentially acts as a cache for Google requests from the ISP's customers and reduces the load on the backbone and/or peering network.
You can fix the problem in your home network by using a recent OpenWRT build, installing qos-scripts, and capping your connection to ~10-20% below your provisioned speeds; this will enable fq_codel.
You can also change your tc scheduler to fq_codel on Linux kernel 3.5 or greater.
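As a sketch (the device name eth0 is an assumption; substitute your bottleneck interface, and run as root on a kernel with fq_codel available):

```shell
# Replace the root qdisc on the bottleneck interface with fq_codel.
tc qdisc replace dev eth0 root fq_codel

# Verify it took effect and watch drop/mark statistics.
tc -s qdisc show dev eth0
```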
I realize that the actual correct solution, in the opinion of those in the discussion, is new algorithms involving dynamic adjustment of buffers and such (over-simplification, I'm sure).
But in the meantime, is there any way a broadband internet customer can somehow manually adjust the buffers on their router?
Running the Netalyzr tool they mentioned suggests that my buffer is much too large (and I have indeed been having horrible network performance lately): "We estimate your uplink as having 4300 ms of buffering. This is quite high, and you may experience substantial disruption"
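To put that 4300 ms figure in byte terms: the implied buffer size is just uplink rate times queue delay. The uplink rates below are assumed example figures (the actual rate isn't in the comment); plug in your own:

```python
# Back-of-the-envelope: buffer_bytes = uplink_rate * queue_delay.
delay_s = 4.3  # Netalyzr's 4300 ms estimate

def implied_buffer_kb(uplink_mbps):
    """KB of buffering implied by a 4.3 s queue at the given rate."""
    return uplink_mbps * 1e6 / 8 * delay_s / 1e3

for mbps in (0.5, 1.0, 2.0):  # hypothetical ADSL uplink rates
    print(f"{mbps} Mbps uplink -> ~{implied_buffer_kb(mbps):.0f} KB buffered")
```

Even at a modest 1 Mbps uplink, 4.3 seconds of queue is over half a megabyte of packets waiting in the modem, which is why interactive traffic dies while an upload runs.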
I've got a WNDR3800 which I've sadly not had time to flash and play with yet. I was also fortunate enough to go to Jim Gettys' talk at the Linux Plumbers Conference in 2012, and you can find a similar video on YouTube fairly easily (I think it's also linked off the previously mentioned website).
This is one of those projects where they could also use the help, just in case anyone out there was looking for a project ;)
One thing you could do is remove any slow links on your network. For most people this will be their wifi. For example, if your WAN connection is 50Mbps and your wifi connection is 30Mbps, oversized buffers will cause latency. But if you switch the wifi to 802.11n and now get a consistent 50Mbps, the buffers won't get in the way.
Getting a faster net connection would similarly push the problem further upstream.
Also if you get a Netgear WNDR3700v2 or WNDR3800 you can run the Cerowrt firmware which is being developed as a platform for algorithms to solve bufferbloat.
Darn, you beat me to it! Recommended! It was a good one. I'd like to add that if you want to read the transcripts or get an mp3 of it, the archive page is here: https://www.grc.com/securitynow.htm
VJ: "... Yet the economics of the internet tends to ensure..."
This may be the problem. Change the economics, solve the problem. Specifically, do away with the idea of "backbones" for ordinary users. Leave the backbones to research and military networks. That's what they were originally designed for.
Make the (people's) internet more like Baran's original idea. His diagrams did not have backbones. They looked more like "mesh".
A true mesh internet might mean slower speeds for its users, but that design will also reduce latency compared to our current "backboned" internet because there will be fewer "fast to slow" transitions (assuming users all have more or less the same capacity for moving packets).
"Van Jacobson, ...[c]onsidered one of the world's leading authorities on TCP, he helped develop the RED (random early detection) queue management algorithm that has been widely credited with allowing the Internet to grow and meet ever-increasing throughput demands over the years."
ESR spoke to this problem (rather briefly) at a meeting of the Philly Java User Group in the spring of 2012. Fast forward to the 42:00 mark. http://youtu.be/1b17ggwkR60
[0]:http://blog.cloudflare.com/a-tour-inside-cloudflares-latest-...
See http://blogs.broughturner.com/2009/04/googles-peering-and-ca...
https://www.grc.com/securitynow.htm
See more here: http://attrition.org/errata/charlatan/steve_gibson/
http://www.nt.ntnu.no/users/skoge/prost/proceedings/ecc03/pd...
Huh? Did RED ever see wide deployment?