top | item 46894444


nh2 | 25 days ago

> But the people you responded to were talking about slowdowns that exist in general, not just ones that apply directly to the post.

I think that's incorrect though. These slowdowns do not exist in general (see my next reply, where I run rsync and it immediately maxes out my 10 Gbit/s).

I think original poster digiown is right with "Note there is no intrinsic reason running multiple streams should be faster than one [EDIT: 'at this scale']. It almost always indicates some bottleneck in the application". In this case the bottleneck is that rsync is a serially-reading program, and the user is running it against a network mount.

> rsync having trouble doing >1Gbps over the network

rsync copies at 10 Gbit/s without problem between my machines.

Though I have to give `-e 'ssh -c aes256-gcm@openssh.com'` or aes128-gcm, otherwise encryption bottlenecks at 5 Gbit/s with the default `chacha20-poly1305@openssh.com`.

> I don't see why you're saying this.

Because of the part you agreed makes sense: rsync reads each file with the sequence `open()/read()/.../read()/close()`, but those files are on the network mount ("/Volumes/mercury"), so each `read()` of size `#define IO_BUFFER_SIZE (32*1024)` is a network roundtrip.
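A minimal sketch of that access pattern (in Python for illustration; rsync's real code is C, and the function name here is hypothetical):

```python
import os

# Matches rsync's `#define IO_BUFFER_SIZE (32*1024)`
IO_BUFFER_SIZE = 32 * 1024

def read_file_serially(path):
    """Read a file in 32 KiB chunks, one blocking read() at a time.

    On a network mount, every read() that misses the client cache is a
    round trip to the file server, so latency (not bandwidth) dominates:
    the next read() cannot start until the previous one has returned.
    """
    total = 0
    fd = os.open(path, os.O_RDONLY)
    try:
        while True:
            chunk = os.read(fd, IO_BUFFER_SIZE)
            if not chunk:  # EOF
                break
            total += len(chunk)
    finally:
        os.close(fd)
    return total
```

Nothing here is parallel or pipelined, which is exactly why multiple rsync processes (or larger reads) go faster: they keep more roundtrips in flight at once.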


Dylan16807 | 25 days ago

I see, so you're saying the file end of rsync is forced to wait for the network because the filesystem itself waits, not the network end of rsync. That makes sense.

Though I wonder what the actual delay is. The numbers in the post implied several milliseconds, enough to maybe account for 30 seconds of the 8 minutes. But maybe changing files resets the transfer speed a bunch.

nh2 | 25 days ago

From which part do you take the "several milliseconds"?

If I assume 0.2 ms ping and each rsync `read()` is a roundtrip, I arrive at 6.4 minutes = 62955918871 B / (32*1024 B) * 0.0002 s / (60 s/min).
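The same estimate, written out (the 0.2 ms RTT is the assumption from above; the byte count is the transfer size from the post):

```python
total_bytes = 62_955_918_871   # transfer size from the post, in bytes
read_size = 32 * 1024          # rsync's IO_BUFFER_SIZE
rtt = 0.0002                   # assumed network round-trip time: 0.2 ms

reads = total_bytes / read_size          # number of serial read() calls
seconds = reads * rtt                    # one RTT per read, back to back
minutes = seconds / 60
print(f"{minutes:.1f} minutes")          # ~6.4 minutes
```

That serial-roundtrip model alone accounts for most of the observed 8 minutes, not just a few tens of seconds.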