
eliasdejong | 1 month ago

> 14 GB/s

Yes, those numbers are real, but only in very short bursts of strictly sequential reads; sustained speeds will be closer to 8-10 GB/s. And real workloads will be lower still, because they contain random access.

Most NVMe drivers on Linux actually DMA the pages directly into host memory over the PCIe link, so it is not the CPU that moves the data. Whenever the CPU is involved in data movement, the 6 GB/s per-core limit still applies.
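The per-core figure above is easy to sanity-check: a plain single-threaded copy of a large buffer is a rough proxy for CPU-involved data movement. A minimal sketch (the 256 MB buffer size and the GB/s arithmetic are my own choices, not from the comment; results vary a lot by CPU and allocator behavior):

```python
import time

# Rough single-core copy-bandwidth probe: time a plain copy of a
# large buffer and convert to GB/s. Buffer size is arbitrary.
N = 256 * 1024 * 1024          # 256 MB source buffer
src = bytearray(N)

start = time.perf_counter()
dst = bytes(src)               # single-threaded copy through the CPU
elapsed = time.perf_counter() - start

gb_per_s = N / elapsed / 1e9
print(f"single-core copy: {gb_per_s:.1f} GB/s")
```

This measures an in-cache/in-RAM copy, not NVMe transfer; the point is only that a single core doing the moving has a measurable ceiling.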


jauntywundrkind | 1 month ago

I feel like you are pulling all sorts of nonsense out of nowhere. Your numbers seem made up; 6 GB/s seems outlandishly tiny, and your justifications are not really washing. Zen 4 here shows single-core bandwidth, at absolute worst, dropping to 57 GB/s: basically 10x what you are spinning.

You are correct that memory limits are problematic, but we have also had technology like Intel's Data Direct I/O (DDIO, 2011) that lets the CPU talk to peripherals without going through main memory at all (big security disclosure on that in 2019, yikes). AMD is building what it calls "Smart Data Cache Injection," which similarly keeps memory speed from being the gate. So even if you divide the 80 GB/s of desktop memory bandwidth across 16 cores and arrive at 5 GB/s each, that still doesn't have to tell the whole story. https://chipsandcheese.com/p/amds-zen-4-part-2-memory-subsys... https://nick-black.com/dankwiki/index.php/DDIO

As for SSDs: for most drives it's true that they cannot sustain writes indefinitely. They often write in SLC mode first, then have to rewrite and re-pack the data into denser storage configurations, which takes more time. They'll do that in the background, given the chance, so it often goes unseen. But keep writing and writing and the drive won't get the time.

That's very well known, very visible, and most review sites worth their salt test for it and show sustained write performance. Some drives are much better than others. Even so, a Phison E28 will let you keep writing at 4 GB/s until just before the drive is completely full. https://www.techpowerup.com/review/phison-e28-es/6.html

Drive reads don't have this problem. When review sites benchmark, they are not benchmarking some tiny nanosliver of data. Common benchmark utilities test sustained performance, and the numbers don't suddenly change 10 seconds in, or 90 seconds in, or whatever.
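The "sustained, not a nanosliver" point is just about averaging over the whole run instead of the first burst. A toy sketch (file and chunk sizes are my own; a real benchmark would bypass the page cache with direct I/O or a tool like fio, which this does not do):

```python
import os, tempfile, time

# Sketch of a sustained sequential-read measurement: read a file in
# fixed-size chunks and report average throughput over the whole run,
# not just the opening burst. Sizes here are toy values.
CHUNK = 1024 * 1024                      # 1 MiB per read
path = tempfile.mktemp()
with open(path, "wb") as f:
    f.write(os.urandom(64 * CHUNK))      # 64 MiB test file

total = 0
start = time.perf_counter()
with open(path, "rb") as f:
    while True:
        buf = f.read(CHUNK)
        if not buf:
            break
        total += len(buf)
elapsed = time.perf_counter() - start

print(f"read {total} bytes, {total / elapsed / 1e6:.0f} MB/s average")
os.remove(path)
```

Because this file fits in the page cache, the number it prints is cache bandwidth, not drive bandwidth; the shape of the measurement (total bytes over total time) is what carries over to real benchmarks.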

These claims just don't ring true to me.

zozbot234 | 1 month ago

6 GB/s may well be in the ballpark for throughput per core whenever all cores are pegged to their max. Yes, that's equivalent to approx. 2-3 bytes per potential CPU cycle; a figure reminiscent of old 8-bit processors! Kind of puts things in perspective when phrased like that: it means that for wide parallel workloads, CPU cycle-limited compute really only happens with data already in cache; you're memory bandwidth limited (and should consider the iGPU) as soon as you go out to main RAM.
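The bytes-per-cycle arithmetic above is easy to check (the 3 GHz clock is an assumption of mine; the comment doesn't pin one down):

```python
# Per-core bandwidth divided by clock rate gives bytes moved per cycle.
# 3.0 GHz is an assumed core clock; adjust for your part.
bandwidth = 6e9         # 6 GB/s per core (the figure under discussion)
clock_hz = 3.0e9        # assumed ~3 GHz core clock

bytes_per_cycle = bandwidth / clock_hz
print(f"{bytes_per_cycle:.1f} bytes per cycle")   # → 2.0 bytes per cycle
```

At lower clocks the same 6 GB/s works out closer to 3 bytes per cycle, which is where the 2-3 range comes from.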

yusyusyus | 1 month ago

DMAing as opposed to what?

loeg | 1 month ago

Issuing extremely slow PCIe reads? GP probably meant “all” rather than “most.”

saidnooneever | 1 month ago

I think he simply means that the CPU speed bottleneck doesn't apply to DMA, which doesn't transfer data through the CPU directly.

It's phrased a bit weirdly.

imtringued | 1 month ago

Letting the CPU burn cycles accessing memory-mapped device memory instead of using DMA, like in the good old days? What even is this question?

yxhuvud | 1 month ago

What? NVMe doesn't care about sequential access. If that slows you down, it is the fault of the operating system and the APIs it provides.

On Linux you can use direct I/O or RWF_UNCACHED to avoid paying extra for unwanted readahead.
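A minimal direct-I/O sketch of the first option (Python for brevity; the temp-file setup and the fallback are mine). O_DIRECT bypasses the page cache but requires filesystem support plus aligned buffers, sizes, and offsets, which is why the read lands in a page-aligned mmap region. RWF_UNCACHED-style uncached buffered I/O is much newer and not exposed by Python's os module, so the sketch sticks to classic direct I/O:

```python
import mmap, os, tempfile

BLOCK = 4096                              # O_DIRECT needs aligned sizes/offsets
path = tempfile.mktemp()
with open(path, "wb") as f:
    f.write(b"x" * BLOCK)                 # one aligned block of test data

# Open bypassing the page cache; fall back to a normal open if the
# filesystem (e.g. tmpfs on older kernels) rejects O_DIRECT.
try:
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
except OSError:
    fd = os.open(path, os.O_RDONLY)

buf = mmap.mmap(-1, BLOCK)                # anonymous mmap is page-aligned
n = os.preadv(fd, [buf], 0)               # read one block at offset 0
os.close(fd)
print(n)                                  # 4096 bytes read
os.remove(path)
```

In C the equivalent is `open(path, O_RDONLY | O_DIRECT)` with a `posix_memalign`ed buffer; either way the point is the same: the kernel skips the readahead and caching you didn't ask for.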