Aside from the Rust aspect (which is cool!), I can't believe we've come this far and still don't have low-latency video conferencing. Maybe I'm overly sensitive, but people talking over each other and the lack of conversational flow drive me crazy with things like Hangouts.
Aengeuad|5 years ago
>I can send an IP packet to Europe faster than I can send a pixel to the screen. How f’d up is that?
and to relate to the other post about landlines: https://twitter.com/ID_AA_Carmack/status/992778768417722368
>I made a long internal post yesterday about audio latency, and it included “Many people reading this are too young to remember analog local phone calls, and how the lag from cell phones changed conversations.”
Artlav|5 years ago
Is there somewhere to read about the changes in question?
I'm old enough to remember extensive use of analog landlines, and can't really think of any difference from a cellphone other than audio quality.
pedrocr|5 years ago
Either Cisco needed to bring the cost down massively to expand access, or someone needed to build it in major cities and bill by the hour to compete against flying. Neither happened, so it stayed a niche.

Compared to those experiences more than a decade ago, the common VC is still very slowly catching up. Part of it is setup, like installing VC rooms with two smaller TVs side by side instead of one large one, so you can see the document and the other people at decent sizes. But part of it is still the technology. Those "telepresence" systems were almost surely on a dedicated link running on the telecom core network that guaranteed quality, instead of routing through the internet and randomly failing. I suspect getting really low latency will require that kind of telecom-level QoS; otherwise you'll keep increasing buffer sizes to avoid freezes.
bob1029|5 years ago
Something to consider is that there are alternatives to interframe compression. Intraframe compression (e.g. JPEG) can bring your encoding latency down to 0~10ms per frame, at the cost of a dramatic increase in bandwidth. Another benefit is the ability to draw any frame the instant you receive it, because every single JPEG contains 100% of the data. With almost all video codecs, you often need some number of prior frames to reconstitute a complete one.
For certain applications on modern networks, intraframe compression may not be as unbearable an idea as it once was. I've thrown together a prototype using LibJpegTurbo and I am able to get a C#/AspNetCore websocket to push a framebuffer drawn in safe C# to my browser window in ~5-10 milliseconds @ 1080p. Testing this approach at 60fps redraw with event feedback has proven that ideal localhost roundtrip latency is nearly indistinguishable from native desktop applications.
The ultimate point here is that you can build something with better latency than any streaming offering on earth right now, if you are willing to sacrifice bandwidth efficiency. My three-weekend project arguably already beats Google Stadia on both latency and quality, but the market for streamed game & video conferencing services that require 50~100 Mbps of constant throughput (depending on resolution & refresh rate) is probably very limited for now. That said, it is also not entirely nonexistent: think corporate networks, e-sports events, very serious PC gamers on LAN, etc. Keep in mind that it is virtually impossible to cheat at video games delivered through these kinds of streaming platforms. I would very much like to keep the streaming-gaming dream alive, even if it can't be fully realized until 10gbps+ LAN/internet is the default everywhere.
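The 50~100 Mbps figure follows from simple arithmetic. A minimal sketch (the bits-per-pixel ratios are my assumptions; real JPEG compression varies heavily with content and quality setting):

```python
# Back-of-the-envelope bandwidth for intraframe (JPEG-per-frame) streaming.

def stream_mbps(width, height, fps, bits_per_pixel):
    """Required throughput in megabits per second."""
    return width * height * fps * bits_per_pixel / 1e6

# Raw 24-bit 1080p60 would need roughly 3 Gbps:
raw = stream_mbps(1920, 1080, 60, 24)

# JPEG at an assumed 0.5-1.0 bits/pixel lands in the 60-125 Mbps range,
# which is consistent with the 50~100 Mbps figure above:
low = stream_mbps(1920, 1080, 60, 0.5)
high = stream_mbps(1920, 1080, 60, 1.0)

print(f"raw: {raw:.0f} Mbps, JPEG: {low:.0f}-{high:.0f} Mbps")
```

An interframe codec like H.264 does 1080p60 comfortably in 5-10 Mbps, which is the roughly 10x bandwidth penalty being traded for encode latency here.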
phoboslab|5 years ago
I was able to get latency down to 50ms, streaming to a browser using MPEG1[1]. The latency is mostly one frame (16ms) for screen capture on the sender, plus 2-3 frames to get through the OS stack to the screen on the receiving end. Encoding and decoding together took about 5ms. Plus of course the network latency, but I only tested this on local wifi, so that didn't add much.
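Summing the per-stage numbers quoted above (treating a frame as 16ms and taking the low end of the 2-3 frame display delay) roughly reproduces the measured total:

```python
# Rough latency budget for the MPEG1-to-browser setup described above,
# using the per-stage figures from the comment.

FRAME_MS = 16  # one frame at ~60Hz

budget = {
    "capture (1 frame)": 1 * FRAME_MS,
    "encode + decode": 5,
    "display stack (2 frames)": 2 * FRAME_MS,
}

total = sum(budget.values())
for stage, ms in budget.items():
    print(f"{stage:26s} {ms:3d} ms")
print(f"{'total (excl. network)':26s} {total:3d} ms")  # 53 ms, close to the measured ~50 ms
```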
[1] https://phoboslab.org/log/2015/07/play-gta-v-in-your-browser...
vlovich123|5 years ago
All the benefits of efficient codecs, with more manageable handling of the latency downsides.
The challenge you'll run into instantly with JPEG is that at large resolutions the file-size increase and encoding/decoding time outstrip any benefits you saw in your limited tests. For video game applications, you have to figure out how to pipeline your streaming more efficiently than sending a small ~10 kB image, because otherwise you're transferring each full uncompressed frame to the CPU, which is expensive. Doing JPEG compression on the GPU is probably tricky. Finally, decode is the other side of the problem: HW video decoders are embarrassingly fast and super common, and your JPEG decode is going to be significantly slower.
EDIT: For your weekend project, are you testing with cloud servers or locally? I would be surprised if you're outperforming Stadia under equivalent network conditions, so be careful that you're not benchmarking local-network performance against Stadia's production performance on public networks.
izacus|5 years ago
(Be gentle on your coworkers and use cabled headphones.)
bufferoverflow|5 years ago
The aptX Low Latency codec adds only ~40ms at most.
Just buy headphones with good low-latency support. They aren't even expensive anymore.
Filligree|5 years ago
Why can't I have both? Wifi doesn't seem to have this latency problem.
GuiA|5 years ago
The thing is that when we talk in a room, sound takes <10ms to reach my ears from your mouth. This is what "enables" all of the human turn-taking cues in conversation: eye contact, picking up whether a sentence is about to end, whether it's a good time to chime in, etc. I've been looking for work from people who've tried to pin down at what point things start feeling really bad (is it 10ms, or 50ms?), but haven't found much so far. Whatever the threshold is, it's likely that long-distance digital communication simply cannot match it.
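The <10ms figure is just the speed of sound over room-scale distances. A quick sanity check (distances chosen for illustration):

```python
# Acoustic latency: time for sound to travel a given distance in air.
# Speed of sound is ~343 m/s at room temperature.

SPEED_OF_SOUND_M_S = 343.0

def acoustic_latency_ms(distance_m):
    return distance_m / SPEED_OF_SOUND_M_S * 1000

# Across a 3m room: under 10ms, consistent with the figure above.
print(f"3m room: {acoustic_latency_ms(3.0):.1f} ms")
# A 100ms network round trip corresponds acoustically to standing ~34m apart.
print(f"34m:     {acoustic_latency_ms(34.0):.1f} ms")
```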
See also this interesting comment about the feeling of "closeness" from phone copper wires:
https://news.ycombinator.com/item?id=22931809
Landlines were so fast and so "direct" in their latency (distance correlated very directly with time, due to the lack of "hops") that local phone calls were faster than the speed of sound across a table. For a while after they came out, before people generally got used to seemingly random latency, local calls felt "intimate", as if you were talking to someone in bed with their head right next to yours. I've also heard stories of negotiators who had become really tuned to analyzing people's pauses while thinking, and who found that long-distance calls confused them and threw them off their game.
jokoon|5 years ago
It seems normal phones are able to do it, though. At least normal phones seem to suffer less from latency problems.
In a way, simplicity in technology often means better performance.
eru|5 years ago
Digital communication could cheat, though!
There's a lot of latency hiding you can do, if you can predict well enough what's coming next. Humans are fairly predictable most of the time.
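One classic form of this latency hiding is dead reckoning, as used in networked games: extrapolate the remote side's state from its last update, then blend toward the truth when a real update arrives. A minimal sketch (all names and numbers here are illustrative, not from any real protocol):

```python
# Dead-reckoning sketch of latency hiding: render a predicted remote state
# instead of a stale one, and correct smoothly when fresh data arrives.

def predict(last_pos, last_vel, dt):
    """Linear extrapolation of remote state dt seconds past the last update."""
    return last_pos + last_vel * dt

def correct(shown, actual, alpha=0.3):
    """Blend the shown value toward the true one instead of snapping."""
    return shown + alpha * (actual - shown)

# Last update: position 10.0, velocity 2.0 units/s; the link adds 100ms.
shown = predict(10.0, 2.0, 0.1)  # render 10.2 instead of the stale 10.0
print(shown)

# When the true value (10.25) finally arrives, ease toward it:
print(correct(shown, 10.25))
```

The same idea shows up in audio as packet loss concealment: when a packet is late, synthesize a plausible continuation of the waveform rather than playing silence.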