item 3007718

Paper: It's Time for Low Latency

63 points | necenzurat | 14 years ago | scs.stanford.edu

21 comments

[+] jules|14 years ago|reply
This is an interesting article, but as usual it will probably take a bit longer than predicted before all of this is in place.

> While the speed of light limits latency in wide area networks, electrons can traverse 100m of copper cables and back in about 1µs

It's not the electrons that move that quickly, it's the information. Think of it like a filled garden hose. If you turn on the tap the water will come out at the other side very quickly, but the time it takes for the water to go from the faucet to the other end of the hose is much longer. Similarly the drift velocity of electrons in a copper wire is very low, generally less than a millimeter per second.
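To put rough numbers on the hose analogy (current and wire gauge below are assumed for illustration; the electron density of copper and a ~0.7c signal speed are standard textbook values):

```python
# Drift velocity of electrons in copper: v = I / (n * A * q)
I = 1.0          # current in amperes (assumed)
A = 1e-6         # wire cross-section in m^2, i.e. 1 mm^2 (assumed)
n = 8.5e28       # free-electron density of copper, per m^3
q = 1.602e-19    # electron charge, coulombs

v_drift = I / (n * A * q)       # metres per second
# ~7e-5 m/s: well under a millimetre per second, as the comment says

# The information, by contrast, travels as an EM wave at a fraction of c.
c = 3e8
signal_speed = 0.7 * c          # rough propagation factor for copper cabling
round_trip_200m = 200 / signal_speed   # 100 m out and back, in seconds
# just under a microsecond, matching the paper's "about 1 us" figure
```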

[+] billswift|14 years ago|reply
The drift velocity isn't what matters here. The signal propagates at a speed much closer to the speed of light. An individual electron doesn't move far: the first displaces the next, and so on to the other end of the conductor, so the disturbance travels down the wire almost immediately even though the electrons themselves barely move.

ADDED: It occurred to me that I had better point out that "drift velocity" is the slow net movement of electrons along a conductor under an applied voltage; it's tiny compared to the speed at which the signal itself propagates.

[+] zdw|14 years ago|reply
A lot of this is bufferbloat: modern systems tend to be strangled by designs created back when a server had 1-2 cores, often not tightly coupled, and I/O vendors realized that reducing the number of interrupts with a big buffer was a performance win for CPU-bound processes, but it incurred a latency penalty.

On modern systems with 4-16 cores per machine, there is more than enough CPU to spare in the vast majority of cases. Therefore, binding I/O interrupts to a specific core and reducing buffer sizes can greatly reduce latency, at the cost of more time spent in driver code, but those CPU resources wouldn't be used otherwise.
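A minimal sketch of the first half of that tuning on Linux, assuming the interrupt number and target CPU are known (both are hypothetical here, and writing the mask requires root; shrinking the NIC's coalescing buffer would be a separate step, e.g. via `ethtool -C`):

```python
def cpu_affinity_mask(cpu: int) -> str:
    """Hex bitmask selecting a single CPU, in /proc/irq smp_affinity format."""
    return format(1 << cpu, "x")

def pin_irq(irq: int, cpu: int) -> None:
    """Bind interrupt `irq` to one core by writing its affinity mask (needs root)."""
    with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
        f.write(cpu_affinity_mask(cpu))

# Example: pin a hypothetical NIC interrupt, IRQ 42, to CPU 3.
# pin_irq(42, 3)   # would write "8", i.e. bit 3 set
```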

[+] tmurray|14 years ago|reply
As the paper mentions, latency of this magnitude is already available to HPC via RDMA on top of InfiniBand or 10GigE. I don't think 10GigE will remain a special-purpose interconnect much longer, and IP over OFED (OpenFabrics, the very low-level API for handling these kinds of devices) is terrible, so I wouldn't be surprised to see applications start looking at OFED for achieving very low latency using RDMA and polling instead of OS primitives (probably initially via MPI, because that exists and is stable, but eventually via new communication libraries).
[+] davidyz|14 years ago|reply
Does the average programmer find RDMA difficult to use? Explicit buffer management and reuse / reregistration seem like a rather heavy burden on the application. I think the paper provides good value in raising that abstraction to something more usable.
[+] dr_rezzy|14 years ago|reply
Is it just me, or is it a growing trend for 'academic' papers to omit their publication date? It's frustrating to read a paper without this context.
[+] wmf|14 years ago|reply
If you look at the URL, it's from HotOS '11.
[+] cpeterso|14 years ago|reply
I've been frustrated by this for a long time. You would think the authors would want to document when they published new findings. This paper doesn't have a publication date, but its bibliography provides publication dates for cited papers!
[+] wmf|14 years ago|reply
I think this is a good paper overall, but they're not taking the state of the art as their baseline. They say 5-10 us may be possible in a few years, but <4 us is already on the market: "In TCP testing, server-to-switch-to-server mean latency was as low as 3.6 microseconds." [1] (Perhaps the authors are unaware of OpenOnload because it came from industry.) Complaining that "normal" switches, NICs, and software are behind the state of the art doesn't seem like a topic of research.

[1] http://solarflare.com/09-14-11-Solarflare-Arista-Complete-Ul...

[+] rxin|14 years ago|reply
The lead professor (John Ousterhout) has extensive experience in the industry. The project also involves a number of industrial collaborators (e.g. Jeff Dean from Google).

The 3.6 us you cited is only the network latency. Ousterhout and his team are working on 5us RPC calls. Why is this hard? If the program context switches three times, it will miss the 5us threshold.
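Back-of-the-envelope arithmetic for why context switches blow the budget (the per-switch cost below is an assumed ballpark figure, not from the paper):

```python
rpc_budget_us = 5.0      # the team's target for a full RPC
network_rtt_us = 3.6     # the Solarflare/Arista figure cited above
ctx_switch_us = 2.0      # assumed rough cost of one context switch

slack_us = rpc_budget_us - network_rtt_us   # ~1.4 us left for all software
three_switches_us = 3 * ctx_switch_us       # 6.0 us

# Even ignoring the network entirely, three switches alone exceed the budget.
assert three_switches_us > rpc_budget_us
assert three_switches_us > slack_us
```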

[+] roxtar|14 years ago|reply
TL;DR version goes something like this: "There is hardware support for low-latency networks, so the OS community should make use of it."

Overall the paper is weak. It contains arguments without experiments. Section 5 (which I think is the main section) claims a lot of things without presenting data, experiments, or evaluation.

[+] xtacy|14 years ago|reply
> It contains arguments without experiments.

Unfortunately, that's the nature of the venue. Workshops like HotOS and HotNets encourage position papers precisely to stir up discussion.

If you're interested in knowing more about what they're doing, you can check the website where they document everything: http://fiz.stanford.edu:8081/display/ramcloud/Home.

[+] schlomie|14 years ago|reply
> While the speed of light limits latency in wide area networks, electrons can traverse 100m of copper cables and back in about 1µs

You can reduce latency by making the cable runs straight, vs. winding around underneath streets. That actually accounts for a significant amount of time, and might be an interesting problem to research.
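Rough numbers for what a winding route costs (the distances below are hypothetical; signal speed in fiber or copper is taken as roughly 2/3 c):

```python
c = 3e8
v = (2 / 3) * c              # ~2e8 m/s in fiber or copper (assumed factor)

straight_km = 50.0           # hypothetical straight-line distance
winding_km = 70.0            # hypothetical actual cable route under streets

def one_way_us(km: float) -> float:
    """One-way propagation time in microseconds for a run of `km` kilometres."""
    return km * 1000 / v * 1e6

extra_us = one_way_us(winding_km) - one_way_us(straight_km)
# 20 km of detour costs ~100 us one way: enormous next to a 5 us RPC budget
```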

[+] 6ren|14 years ago|reply
So, theoretically, wireless should have lower latency on this factor: line of sight.
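The speed advantage is real: radio through air propagates at essentially c, while light in silica fiber travels at roughly 2/3 c (the link distance below is illustrative). This is reportedly why some latency-sensitive trading links use microwave rather than fiber.

```python
c = 299_792_458.0        # m/s in vacuum; air is within ~0.03% of this
v_fiber = c / 1.47       # silica fiber has a refractive index of ~1.47

distance_m = 1_000_000   # hypothetical 1000 km line-of-sight link

t_air_ms = distance_m / c * 1e3
t_fiber_ms = distance_m / v_fiber * 1e3
# roughly 3.3 ms through the air vs 4.9 ms through fiber, one way
```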
[+] flogic|14 years ago|reply
Meh. It's pretty clear to me that, with the push toward mobile, low latency isn't happening. I suspect it's more likely that we'll see latency-hiding techniques instead.
[+] lukesandberg|14 years ago|reply
You should read the article. It discusses how low latency can be achieved in a data center. They specifically discuss why they are interested in intra-datacenter latency rather than attacking latency across the internet as a whole.
[+] alexandros|14 years ago|reply
I'm curious why an application-layer concept such as RPC is making an appearance in a paper that discusses network architecture. Perhaps they are using RPC as shorthand for any round trip between two machines?

In any case, here's some background on the precise definition of RPC and why I find its use here objectionable: http://www.infoq.com/presentations/vinoski-rpc-convenient-bu...

[+] xtacy|14 years ago|reply
They discuss an application-layer concept because end-to-end latency is what ultimately matters. If the network has a 1us RTT but the network stack contributes 10us of latency, you have to architect the OS so that latency is minimised.