user2342 | 1 year ago
for try await byte in bytes { ... }
reads to me like the time delta is computed for every single byte received over the network, i.e. millions of times for a few megabytes. Isn't that an opportunity for optimization, or do I misunderstand the semantics of the code?
spenczar5|1 year ago
At that point, even these slow methods are using about 0.5ms per million bytes, so it should be good up to gigabit speeds.
If that’s not fast enough, then sample every million bytes. Or, if the complexity is worth it, sample in an adaptive fashion.
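Something like this (an untested sketch; the type and names are made up for illustration) would keep the clock reads off the per-byte path by sampling only every `sampleInterval` bytes:

```swift
import Foundation

// Hypothetical throughput sampler: reads the clock only once per
// `sampleInterval` bytes instead of once per byte.
struct ThroughputSampler {
    let sampleInterval: Int
    private(set) var bytesSeen = 0
    private var lastSample = Date()

    init(sampleInterval: Int) {
        self.sampleInterval = sampleInterval
    }

    // Call this from the download loop; `onSample` fires with the
    // elapsed time and byte count each time the threshold is crossed.
    mutating func record(bytes n: Int, onSample: (TimeInterval, Int) -> Void) {
        bytesSeen += n
        guard bytesSeen >= sampleInterval else { return }
        let now = Date()
        onSample(now.timeIntervalSince(lastSample), bytesSeen)
        lastSample = now
        bytesSeen = 0
    }
}
```

Inside `for try await byte in bytes { ... }` you would call `sampler.record(bytes: 1) { dt, n in ... }`, so `Date()` is only touched once per `sampleInterval` bytes; the adaptive variant would adjust `sampleInterval` based on observed throughput.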
ajross|1 year ago
[1] On x86 Linux, it's just a quick call into the vDSO that reads the TSC and some calibration data; a dozen cycles or so.
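For anyone curious, a rough way to see this from Swift (a sketch; numbers vary by machine, and this measures `clock_gettime(CLOCK_MONOTONIC, _)`, which is the vDSO-backed path on x86 Linux):

```swift
#if canImport(Glibc)
import Glibc
#else
import Darwin
#endif

// Monotonic time in nanoseconds via clock_gettime.
func nanosNow() -> UInt64 {
    var ts = timespec()
    clock_gettime(CLOCK_MONOTONIC, &ts)
    return UInt64(ts.tv_sec) * 1_000_000_000 + UInt64(ts.tv_nsec)
}

// Time a million back-to-back clock reads; the write into `ts`
// keeps the loop body from being optimized away.
let iterations = 1_000_000
let start = nanosNow()
for _ in 0..<iterations {
    var ts = timespec()
    clock_gettime(CLOCK_MONOTONIC, &ts)
}
let elapsed = nanosNow() - start
print("~\(Double(elapsed) / Double(iterations)) ns per clock_gettime call")
```

On a vDSO path this typically prints single-digit nanoseconds per call; if the process falls back to a real syscall (e.g. an unstable TSC), it will be noticeably slower.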
marcosdumay|1 year ago
But I imagine the time-reading ones aren't as heavily optimized; people normally don't call them all the time.