top | item 39125471

markhpc | 2 years ago

Hi, Author here.

Large reads tend to require the least CPU of all the tests we ran in the post. This is especially true in a 3x replication scenario where reads are serviced by a single OSD, as in the 1 TiB/s test. CPU is far more important for small random writes, and it can also matter when using erasure coding and/or msgr-level encryption.

So the premise that you can only achieve 280 MB/s per core is misleading: this cluster wasn't bottlenecked by the CPUs for large reads. That said, CPU makes up only a small portion of the overall cost of an NVMe deployment like this. Investing a relatively small amount of money in a higher core-to-NVMe ratio provides a better balance across all workloads and more flexibility when enabling features that consume additional CPU.
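To make the point concrete: a figure like "280 MB/s per core" comes from naively dividing aggregate throughput by total core count, which says nothing about whether the cores were actually the bottleneck. A minimal sketch of that division, using assumed node and core counts (not the post's actual cluster specs):

```python
def per_core_throughput_mib_s(total_tib_s: float, nodes: int, cores_per_node: int) -> float:
    """Aggregate throughput divided evenly across all cores (MiB/s per core)."""
    total_mib_s = total_tib_s * 1024 * 1024  # TiB/s -> MiB/s
    return total_mib_s / (nodes * cores_per_node)

# Illustrative numbers only -- the node/core counts here are assumptions.
rate = per_core_throughput_mib_s(total_tib_s=1.0, nodes=68, cores_per_node=56)
print(f"{rate:.0f} MiB/s per core")
```

If the CPUs are mostly idle during large reads, this quotient is just a description of the hardware ratio, not a per-core performance limit.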
