PiratesScorn | 2 years ago
As per the blog, the cluster is now in a 6+2 EC configuration for production, which gives ~7 PiB usable. Expensive, yes, but well worth it if this is the scale and performance required.
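A quick sketch of the erasure-coding arithmetic behind the ~7 PiB figure. This is a minimal illustration, not from the blog: with k+m erasure coding, only k of every k+m chunks written are data, so usable capacity is raw × k/(k+m). The raw-capacity figure below is back-calculated from the quoted usable number, not a known spec of the cluster.

```python
def ec_usable(raw_pib: float, k: int, m: int) -> float:
    """Usable capacity under k+m erasure coding: raw * k / (k + m)."""
    return raw_pib * k / (k + m)

# 6+2 EC stores 6 data chunks per 8 chunks written, so usable = 75% of raw.
# Working backwards from ~7 PiB usable (an assumption, not a published spec):
raw_needed = 7 / (6 / 8)                 # ~9.33 PiB raw
print(ec_usable(raw_needed, 6, 2))       # back to ~7.0 PiB usable
```

For comparison, 3x replication would yield only raw/3 usable, which is why EC is attractive at this scale despite its extra CPU cost.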
up2isomorphism | 2 years ago
To put it into perspective: there are 68 nodes with 98 hardware threads each, which means only 1000 GB/s / ~7000 threads ≈ 140 MB/s per thread, or 280 MB/s per core, and that's not that impressive, to be honest.
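The division above, spelled out with the parent comment's round numbers (1 TB/s aggregate, ~7000 threads; using 1 TiB/s instead would come out slightly higher):

```python
NODES = 68
THREADS_PER_NODE = 98          # hardware threads per node, per the parent comment
AGGREGATE_MB_S = 1_000_000     # 1 TB/s expressed in MB/s

threads = NODES * THREADS_PER_NODE     # 6664; the parent rounds this to ~7000
per_thread = AGGREGATE_MB_S / threads  # ~150 MB/s exact, ~140 MB/s with the rounding
per_core = per_thread * 2              # assumes 2-way SMT, i.e. two threads per core
print(round(per_thread), round(per_core))
```

Note this spreads the aggregate evenly over every thread in the cluster; as the reply below points out, large reads were not CPU-bound, so per-thread throughput is not the binding constraint here.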
markhpc | 2 years ago
Large reads tend to require the least CPU of all of the tests that we ran in the post. This is especially true in a 3X replication scenario where reads are serviced by a single OSD like in the 1 TiB/s test. CPU is far more important for small random writes, and also can be important when using erasure coding and/or msgr level encryption.
So the premise that you can only achieve 280 MB/s per core is misleading: this cluster wasn't bottlenecked by the CPUs for large reads. Having said that, CPU makes up only a small portion of the overall cost of an NVMe deployment like this. Investing a relatively small amount of money in a higher core-to-NVMe ratio provides a better balance across all workloads and more flexibility when enabling features that consume additional CPU.
mrunkel | 2 years ago
This reads to me (and, I gather, to the OP) as saying that the purpose of this "insanely expensive cluster" was to "show a benchmark."
That's what the OP is addressing in his response. Nowhere do you mention anything about performance.