I get that this is aimed at major backhauls, I'm just frustrated that for the next 10 years we'll be talking about greater and greater bandwidth and not lower latency. Consumer internet is going to suck for a long time.
Theoretically, space-based networking can be lower latency than ground-based (above a minimum distance) because of the difference in the speed of light between the two mediums.
Also, something I haven't seen discussed yet is space servers. There has to be a great speed opportunity in orbiting some server system a few hundred km above the internet sats and laser-linking to those, rather than retrieving data back from Earth.
60 ms RTT @ 1 Gbit/s (~120 MiB/s) -> 7.2 MiB TCP buffer size required for bandwidth saturation. That’s more than the max buffer size a run-of-the-mill modern OS gives you by default, meaning your download speeds will be capped unless you fiddle with sysctl or equivalent. And that’s within the same continent, assuming minimal jitter and packet loss.
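The arithmetic here is just the bandwidth-delay product. A minimal sketch, using the same numbers as the comment (nothing OS-specific):

```python
def bdp_bytes(rtt_seconds: float, bandwidth_bits_per_s: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to saturate the link."""
    return rtt_seconds * bandwidth_bits_per_s / 8

# 60 ms RTT at 1 Gbit/s
bdp = bdp_bytes(0.060, 1e9)
print(f"{bdp / 2**20:.1f} MiB")  # ~7.2 MiB
```

If the receive window can't hold that many unacknowledged bytes, the sender stalls waiting for ACKs and throughput drops below the line rate, regardless of the link's capacity.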
Another big one is optimizing applications for number of round trips, which most people don’t do, and it can be surprisingly hard to do so.
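To see why round trips dominate, a toy model (the 60 ms RTT and 20 dependent requests are assumed numbers, ignoring transfer time and handshakes):

```python
RTT = 0.060      # seconds, assumed round-trip time
N_REQUESTS = 20  # assumed number of dependent resources

sequential = N_REQUESTS * RTT  # each request waits for the previous response
batched = RTT                  # all requests sent together in one round trip

print(f"sequential: {sequential * 1000:.0f} ms, batched: {batched * 1000:.0f} ms")
# sequential: 1200 ms, batched: 60 ms
```

Under this model the page that makes 20 chained requests pays for 20 round trips; no amount of extra bandwidth fixes that.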
I am a throughput freak, but you’d be surprised at the importance of latency, even (especially?) for throughput, in practice. It’s absolutely not just a “real-time” thing.
If you’re on Linux, you can use ‘netem’ to emulate packet loss, latency etc. Chrome dev tools have something similar too, it’s an eye opening experience if you’re used to fast reliable internet.
You have around 150ms of one-way latency before perceptual quality starts to nose dive (see https://www.itu.int/rec/T-REC-G.114-200305-I). This includes the capture, processing, transmission, decoding, and reproduction of signals. Throw in some acoustic propagation delay at either end in room scale systems and milliseconds count.
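For a sense of scale, the acoustic leg alone eats a measurable slice of that 150 ms budget. A rough calculation (speed of sound and room distances are assumptions):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def acoustic_delay_ms(distance_m: float) -> float:
    """One-way acoustic propagation delay in milliseconds."""
    return distance_m / SPEED_OF_SOUND * 1000

# Sitting 3 m from the speaker/mic at each end of the call:
total = 2 * acoustic_delay_ms(3.0)
print(f"{total:.1f} ms")  # ~17.5 ms of the budget gone before any network hop
```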
>but outside of pro gaming and HFT
But that only allows for current technologies/consumer habits and not the future.
Gaming explodes into online VR, where latency is incredibly noticeable. I still think virtual shopping will become a thing: everyone buys crap online now, but it's nice to be able to "see" a product before you buy it.
As we start getting autonomous vehicles/delivery drones etc they can all make use of the lowest latency link possible.
Lower latency also usually means (but not always) lower power usage aka the "race to sleep".
It would also enable better coordination between swarms of x, whether it's the aforementioned delivery drones, missile defence systems (which launch site is better to launch from, stationary or mobile), etc.
But also just human ingenuity and invention, we should always try to make it faster just because we can, at the end of the day.
Yeah, marketers love to say that because there is a latency gain in the wireless leg of the request's travel. The closer the information you request is to the antenna, the more you'll benefit from this improvement. However, in real conditions a request travels hundreds or thousands of kilometers through many nodes, each one adding a bit of latency. So in the end you may feel a difference if you use VR in SF or do HFT in central London. You won't notice the gain if you're trying to load a page hosted far away from you.
In fact, latency is impacted far more by net neutrality, but that's another subject.
If you live in a major city close to the target datacenter, maybe. I have an excellent last mile (500 Mbps symmetrical FTTH) and pings to sites like google.com still go above 300 ms. 5G wouldn't help me at all, probably the opposite. The vast majority of that time is spent in data bouncing around the world through very inefficient routes.
The distance from my city to New York is roughly 5600 km which, given the slightly slower speed of light in optical fiber, would give a minimum ping of ~27 ms (one way).
Given the same distance but in a vacuum, it would be around 18ms.
But if I ping a server in NY right now, my actual ping would be around 100 ms (it can easily go up to 200 ms). Most of the latency is created by the routing. The farther you are from an important peering hub, the worse your ping is usually going to be.
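The one-way figures above can be reproduced from the refractive index of silica fiber (~1.47 here, an assumption; the exact index varies by fiber type):

```python
C_VACUUM = 299_792.458   # speed of light in vacuum, km/s
REFRACTIVE_INDEX = 1.47  # typical for silica fiber (assumed)

def one_way_ms(distance_km: float, speed_km_s: float) -> float:
    """Propagation-only one-way delay in milliseconds."""
    return distance_km / speed_km_s * 1000

d = 5600  # km, the city-to-New-York distance from the comment
print(f"fiber:  {one_way_ms(d, C_VACUUM / REFRACTIVE_INDEX):.1f} ms")
print(f"vacuum: {one_way_ms(d, C_VACUUM):.1f} ms")
```

That yields roughly 27 ms through fiber and under 19 ms in vacuum, matching the comment; everything above that floor is routing, queuing, and processing.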
When I was living in French Guiana, it was basically impossible to have a ping of less than 120 ms to anywhere in the world.
Let's wait for quantum-entangled networks. Just 8 pairs of particles separated at ISP level and we have instant hops of data, limited in speed only by the oscillator.
throw_nbvc1234|2 years ago
https://people.eecs.berkeley.edu/~sylvia/cs268-2019/papers/s...
0cf8612b2e1e|2 years ago
New York to LA is ~60msec
New York to Hong Kong is ~250msec
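Those round-trip figures sit well above the propagation floor in fiber. A quick comparison (great-circle distances and the fiber index are assumptions):

```python
C_FIBER = 299_792.458 / 1.47  # km/s, light in silica fiber (index assumed)

routes = {
    # route: (approx. great-circle km, observed RTT ms from the comment)
    "NY-LA":        (3940, 60),
    "NY-Hong Kong": (12960, 250),
}

for name, (km, observed_ms) in routes.items():
    floor_ms = 2 * km / C_FIBER * 1000  # round trip at fiber light speed
    print(f"{name}: fiber floor {floor_ms:.0f} ms, observed ~{observed_ms} ms")
```

The floors come out around 39 ms and 127 ms respectively, so the observed numbers carry a large overhead from indirect cable paths, routing, and queuing rather than raw distance.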