ayende|5 months ago
The io_uring code looks like it is doing all the fetch work in the background (with 6 threads), then just handing the completed buffers to the counter.
Do the same with 6 threads that first read the first byte of each page and then hand that page section to the counter, and you'll find similar performance.
And you can use both madvise and huge pages to control the mmap behavior.
mrlongroots|5 months ago
Even if you had a million SSDs and were somehow able to connect them all to a single machine, you would not outperform memory, because the data needs to be read into memory first and can only then be processed by the CPU.
Basic `perf stat` and minor/major faults should be a first-line diagnostic.
johncolanduoni|5 months ago
Also, while we’re being annoyingly technical, a lot of server CPUs can DMA straight to the L3 cache so your proof of impossibility is not correct.
alphazard|5 months ago
This is an oversimplification. It depends on what you mean by memory. It may be true for NVMe on modern architectures in a consumer use case, but it's not true of computer architecture in general.
External devices can have their memory mapped to virtual memory addresses. There are some network cards that do this for example. The CPU can load from these virtual addresses directly into registers, without needing to make a copy to the general purpose fast-but-volatile memory. In theory a storage device could also be implemented in this way.
hinkley|5 months ago
It’s only true if you need to process the data before passing it on. You can do direct DMA transfers between devices.
In which case one needs to remember that memory isn't on the CPU: the CPU has to beg main memory for data just about as much as it begs any peripheral. It works out of registers and L1, which sit behind two more layers of cache and an MMU.
lucketone|5 months ago
That’s the point: `mmap` is slow here because the page faults are taken serially.
arghwhat|5 months ago
(That doesn't undermine that io_uring and disk access can be fast, but it's comparing a lazy implementation using approach A with a quite optimized one using approach B, which does not make sense.)