softfalcon | 27 days ago
Yeah, this has been my experience with low-overhead streams as well.
Interestingly, I see this "open more streams to send more data" pattern all over the place in file transfer tooling.
Recent examples that come to mind are Backblaze's B2 CLI and Amazon's S3 SDK, whose uploads I took a peek at with Wireshark. (What do they know that we don't seem to think we know?)
They all seem to be doing this, which is maybe odd, because when I analyse what Plex or Netflix is doing, it's not the same. They do what you're suggesting: tune the application and the TCP/UDP stack. Though that could be down to their 1-to-1 streaming use case.
There is overhead somewhere and they're trying to get past it via semi-brute-force methods (in my opinion).
I wonder if there's a serialization or loss-handling problem that we could be glossing over here?
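To make the pattern concrete, here's a rough sketch of the mechanics, assuming a hypothetical PUT endpoint (real services like S3 multipart and B2 large files each have their own part-upload APIs). The payoff is that each part rides its own TCP connection, so you get N congestion windows instead of one:

```python
# Sketch only: hypothetical endpoint, no retries, no part reassembly.
import os
import urllib.request
from concurrent.futures import ThreadPoolExecutor

PARTS = 8                 # number of concurrent streams
PATH = "bigfile.bin"      # file to upload

size = os.path.getsize(PATH)
part_size = -(-size // PARTS)  # ceiling division

def upload_part(i: int) -> None:
    # One file handle per thread, so concurrent reads don't race on seek.
    with open(PATH, "rb") as f:
        f.seek(i * part_size)
        data = f.read(part_size)
    req = urllib.request.Request(
        f"https://example.invalid/upload/part-{i}",  # hypothetical URL
        data=data,
        method="PUT",
    )
    # Each concurrent request gets its own TCP connection -- which is
    # the whole point of the pattern.
    with urllib.request.urlopen(req) as resp:
        resp.read()

with ThreadPoolExecutor(max_workers=PARTS) as pool:
    list(pool.map(upload_part, range(PARTS)))  # raises if any part fails
```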
PunchyHamster | 27 days ago
I used B2 as the third leg for our backups and pretty much had to give rclone more connections at once, because the defaults were nowhere close to saturating bandwidth.
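For anyone hitting the same wall: the main knob is rclone's `--transfers` flag, which defaults to 4 parallel transfers (newer versions also have per-backend knobs like `--b2-upload-concurrency`; check `rclone help flags` for your version). A minimal sketch of the kind of invocation I mean, bucket name made up:

```python
# Minimal wrapper invocation; bucket name is made up.
import subprocess

subprocess.run(
    ["rclone", "copy", "/data", "b2:my-backup-bucket",
     "--transfers", "32"],  # 32 concurrent transfers instead of the default 4
    check=True,
)
```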
KaiserPro | 27 days ago
When we were doing 100 TB backups of storage servers, we had a wrapper that ran multiple rsyncs over the file system; that got throughput up to about 20 gigabits a second over the LAN.
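The wrapper was roughly this shape: fan out one rsync per top-level directory, several at a time, since a single rsync is effectively one stream. Paths, host, and worker count below are made up for illustration:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SRC = Path("/storage")           # made-up paths
DEST = "backuphost:/backups"
WORKERS = 8                      # parallel rsync processes

def sync_dir(d: Path) -> int:
    # Each rsync gets its own connection and its own CPU time.
    return subprocess.call(["rsync", "-a", f"{d}/", f"{DEST}/{d.name}/"])

dirs = sorted(p for p in SRC.iterdir() if p.is_dir())
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    codes = list(pool.map(sync_dir, dirs))

failed = [d for d, c in zip(dirs, codes) if c != 0]  # candidates for a retry pass
```

A naive per-directory split gets lopsided if one directory dominates, so a real wrapper would want to partition the tree more evenly, but the parallelism is what buys the throughput.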
akdev1l | 27 days ago
Cuz in my experience no one is doing that, tbh.
slightlygrilled | 27 days ago
Its baseline tuning seems to just assume large files, does no auto-scaling, and it's mostly single-threaded.
Then even when tuned it's still painfully slow, again seemingly limited by its CPU processing and mostly on a single thread; highly annoying.
Especially when you're running it on a machine with lots of cores, fast storage, and a large internet connection.
Just feels like there is a large amount of untapped potential in the machines…