creiht|5 months ago
It is probably worth noting that most of the listed storage systems (including S3) are designed to scale not only in hard drives but horizontally across many servers in a distributed system; they really are not optimized for a single-node use case. There are also other things that can limit performance: what does the storage backplane look like for those 80 HDDs, and how much throughput can you effectively push through it? Network connectivity will be another limiting factor.
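A back-of-the-envelope comparison makes the point. Assuming, purely hypothetically, that all 80 HDDs sit behind a single SAS HBA (the actual backplane layout is not stated, so these ceilings are illustrative only):

  # Illustrative ceilings only; real numbers depend on the HBA/expander layout.
  echo "80 HDDs x 100 MB/s sequential: $((80 * 100)) MB/s"
  echo "SAS3 x4 wide port:             ~4800 MB/s"
  echo "PCIe 3.0 x8 HBA slot:          ~7880 MB/s"

Either of those interconnect ceilings sits below what the drives could stream in aggregate, before the network even enters the picture.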
crabique|5 months ago
The link is a 10G connection with a 9K MTU, and the server is only accessed via that local link.
Essentially, the drives being HDDs is the only real bottleneck (besides the obvious single-node limitation).
At the moment, all writes are buffered into the NVMes via an OpenCAS write-through cache, so writes are very snappy and are ingested at pretty much whatever rate I can throw data at the server. But read/delete operations require at least a metadata read, and due to the very high number of small (many even empty) objects they take a lot more time than I would like.
I'm willing to sacrifice the write-through cache benefits (the write performance is overkill for my use case anyway) in order to make things a little more balanced, with better List/Read/DeleteObject performance.
On paper, most "real" writes will be sequential data, so writing that directly to the HDDs should be fine, while metadata writes would be handled exclusively by the flash storage, which also takes care of the empty/small object problem.
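As one possible way to get there, a minimal sketch with Open CAS's casadm, assuming the cache runs as instance 1 (the cache ID is illustrative, and exact flags may vary by casadm version). Write-around mode keeps reads cached on the NVMes while sending data writes straight to the HDD core devices:

  # List cache instances to confirm the current mode (write-through).
  casadm -L
  # Switch cache instance 1 to write-around: reads stay flash-cached,
  # data writes bypass the cache and land directly on the HDDs.
  casadm -Q -c wa -i 1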
edude03|5 months ago
On the low end a single HDD can deliver 100 MB/s, so 80 can deliver 8,000 MB/s; a single NVMe can do 700 MB/s, and you have 4, so 2,800 MB/s. A 10Gb link can only do about 1,000 MB/s, so isn't your bottleneck the network, and then probably CPU?
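Spelled out (using the per-device figures above, which are assumptions rather than measurements):

  # Back-of-the-envelope aggregate throughput, in MB/s.
  echo "HDD aggregate:  $((80 * 100)) MB/s"   # 80 drives x 100 MB/s
  echo "NVMe aggregate: $((4 * 700)) MB/s"    # 4 drives x 700 MB/s
  echo "10GbE raw:      $((10000 / 8)) MB/s"  # ~1250 MB/s before overhead

After protocol overhead, a 10Gb link moves roughly 1,100-1,200 MB/s of payload, so on these numbers the network is the narrowest pipe by a wide margin.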
dardeaup|5 months ago