top | item 40916554

flapjack | 1 year ago

One of the solutions they mention is underutilizing links. This is probably a good time to mention my thesis work, where we showed that streaming video traffic (which is the majority of the traffic on the internet) can pretty readily underutilize links on the internet today, without a downside to video QoE! https://sammy.brucespang.com

aidenn0|1 year ago

Packet switching won over circuit switching because the cost-per-capacity was so much lower; if you end up having to over-provision/under-utilize links anyway, why not use circuit switching?

crest|1 year ago

Because the cost is still a lot lower. Paying for a few percent of overcapacity doesn't change that.

nine_k|1 year ago

A physical circuit costs a lot more, so much more that it's not even funny.

You can deploy a 24-fiber optical cable and allow many thousands of virtual circuits to run on it in parallel using packet switching. Usually you get orders of magnitude more when they share bandwidth opportunistically, because the streams of packets are not of constant intensity.

Running thousands of separate fibers/wires would be much more expensive, and having thousands of narrow-band splitters/transceivers would also be massively expensive. Phone networks tried all that, and gladly jumped off the physical-circuits ship as soon as they could.
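The opportunistic-sharing point can be made concrete with a back-of-envelope Monte Carlo (a sketch with made-up numbers, not figures from the thread): because flows only burst at their peak rate a fraction of the time, a link oversubscribed 10x by peak rate almost never actually overloads.

```python
import random

# Monte Carlo sketch of statistical multiplexing. All numbers here are
# illustrative assumptions: 1000 flows with a 1 Gbps peak rate share a
# 100 Gbps link, but each flow bursts only 5% of the time.
LINK_GBPS = 100.0
PEAK_GBPS = 1.0      # per-flow peak rate (assumed)
DUTY_CYCLE = 0.05    # fraction of time a flow is actually bursting (assumed)
N_FLOWS = 1000       # 10x oversubscribed if everyone burst at once

random.seed(0)
TRIALS = 10_000
overloads = 0
for _ in range(TRIALS):
    # Count how many flows happen to be bursting at this instant.
    active = sum(random.random() < DUTY_CYCLE for _ in range(N_FLOWS))
    if active * PEAK_GBPS > LINK_GBPS:
        overloads += 1

print(f"overload probability: {overloads / TRIALS:.4f}")
```

The mean number of active flows is 50 with a standard deviation of about 7, so exceeding the 100-flow link capacity is a many-sigma event. A dedicated circuit per flow would instead pin 1000 Gbps of capacity to carry an average of 50 Gbps.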

sambazi|1 year ago

> my thesis work, where we showed that streaming video traffic [...] can pretty readily underutilize links on the internet today, without a downside to video QoE!

i was slightly at a loss as to what exactly needed to be shown here until i clicked the link and came to the conclusion that you re-invented(?) pacing.

https://man7.org/linux/man-pages/man8/tc-fq.8.html

flapjack|1 year ago

I would definitely not say that we re-invented pacing! One version of the question we looked at was: how low a pace rate can you pick for an ABR algorithm, without reducing video QoE? The part which takes work is this "without reducing video QoE" requirement. If you're interested, check out the paper!
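The "how low a pace rate without reducing QoE" question can be illustrated with a toy playback-buffer model (a hedged sketch of the general idea, not the paper's actual algorithm; chunk sizes and buffer levels are assumed): QoE survives as long as the buffer never empties, which in the simplest steady-state case requires pacing at least at the content bitrate.

```python
# Toy playback-buffer model (a sketch, not the paper's method):
# chunks are downloaded at a fixed pace rate while playback drains
# the buffer; a stall (rebuffer) happens if the buffer empties.
def rebuffers(pace_mbps: float, bitrate_mbps: float,
              chunk_s: float = 4.0, n_chunks: int = 100,
              startup_buffer_s: float = 8.0) -> bool:
    """Return True if playback stalls at this pace rate (all parameters assumed)."""
    buffer_s = startup_buffer_s
    for _ in range(n_chunks):
        download_s = chunk_s * bitrate_mbps / pace_mbps  # time to fetch one chunk
        buffer_s -= download_s        # playback drains while we download
        if buffer_s < 0:
            return True               # buffer emptied: rebuffer event
        buffer_s += chunk_s           # finished chunk adds its duration
    return False

# Pacing at or above the content bitrate never stalls in this model;
# pacing below it eventually drains the buffer.
print(rebuffers(pace_mbps=5.0, bitrate_mbps=5.0))  # False
print(rebuffers(pace_mbps=4.0, bitrate_mbps=5.0))  # True
```

The hard part the comment alludes to is everything this toy model ignores: variable network capacity, ABR switching bitrates, and startup delay, which is why "without reducing QoE" takes real work to establish.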

clbrmbr|1 year ago

Can you comment on latency-sensitive video (Meet, Zoom) versus latency-insensitive video (YouTube, Netflix)? Is only the latter “streaming video traffic”?

flapjack|1 year ago

We looked at latency-insensitive traffic like YouTube and Netflix (which together were a bit more than 50% of internet traffic last year [1]).

I'd bet you could do something similar with Meet and Zoom: my understanding is that video bitrates for those services are lower than for e.g. Netflix, which we showed are much lower than network capacities. But it might be tricky because of the latency sensitivity, and we did not look into it in our paper.

[1] https://www.sandvine.com/hubfs/Sandvine_Redesign_2019/Downlo...

sulandor|1 year ago

the term "streaming video" usually refers to the fact that the data is sent slower than the link capacity (but intermittently faster than the content bitrate)

op presumably used the term to describe "live content", i.e. content where the source material is not available as a whole (because the recording is not finished); this can be considered a subset of "streaming video"

the sensitivity to transport characteristics stems from the fact that "live content" places an upper bound on the time available for processing and transferring the content bits to the clients (for it to be considered "live").
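That upper bound can be sketched as a simple latency budget (all numbers assumed for illustration): whatever glass-to-glass delay still counts as "live" must cover encoding and transfer, and only the remainder is available for client-side buffering, which is the slack a pacer can exploit.

```python
# Back-of-envelope "live" latency budget (illustrative, assumed numbers):
# the client buffer is part of glass-to-glass latency, so a live stream
# can only run a small buffer, unlike VoD which may buffer 30+ seconds.
LATENCY_BOUND_S = 2.0   # what we accept as "live" (assumed)
ENCODE_S = 0.5          # capture + encode delay (assumed)
NETWORK_S = 0.1         # one-way transfer delay (assumed)

max_buffer_s = LATENCY_BOUND_S - ENCODE_S - NETWORK_S
print(f"max client buffer: {max_buffer_s:.1f} s")
```

With only ~1.4 s of buffer to absorb throughput dips, a live sender has far less room to pace down toward the content bitrate than a VoD sender does.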