top | item 38469967

Xorlev | 2 years ago

You'd think so, but for datacenter workloads it's absolutely common, especially if you're just scheduling a bunch of containers together. Computation also doesn't happen in a vacuum: unless you're doing some fairly trivial processing, you're likely loading quite a bit of memory, perhaps many multiples of what your business logic actually touches.

It's also not as simple as GB/s per core, since cores aren't entirely uniform and data access may cross core complexes.

jltsiren | 2 years ago

I'm not sure what you mean by datacenter workloads.

The work I do could be called data science and data engineering. Outside some fairly trivial (or highly optimized) sequential processing, the CPU just isn't fast enough to saturate memory bandwidth. For anything more complex, the data you want to load is either in cache (and bandwidth doesn't matter) or it isn't (and you probably care more about latency).
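The cache-versus-latency split described above can be made concrete with a crude sketch: sum the same large array once in sequential order (prefetcher-friendly) and once in shuffled order (mostly cache misses, so latency-bound). Everything here is an illustrative assumption, not a proper benchmark, and in CPython interpreter overhead blunts the gap considerably compared to compiled code:

```python
import array
import random
import time

def timed_sum(data: array.array, order: list) -> tuple:
    """Sum data[i] for i in the given order, returning (sum, seconds)."""
    start = time.perf_counter()
    total = 0
    for i in order:
        total += data[i]
    return total, time.perf_counter() - start

n = 1 << 21                        # 8 MB of 4-byte ints: bigger than a typical L2 cache
data = array.array("i", [1] * n)

seq = list(range(n))               # sequential walk: the prefetcher hides memory latency
rnd = seq[:]
random.shuffle(rnd)                # random walk: each access is likely a cache miss

s_total, s_time = timed_sum(data, seq)
r_total, r_time = timed_sum(data, rnd)
assert s_total == r_total == n     # same work either way; only the access pattern differs
print(f"sequential {s_time:.3f}s, random {r_time:.3f}s")
```

In compiled code the random walk is commonly several times slower on arrays that spill out of cache, which is the sense in which latency, not bandwidth, dominates non-trivial access patterns.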

terlisimo | 2 years ago

I had two dual-18-core Xeon web servers with seemingly identical hardware and software setups, but one was doing 1100 req/s and the other only 500-600.

After some digging, I realized that one had 8x8GB RAM modules and the slower one had 2x32GB (fewer modules populate fewer memory channels, which means less bandwidth).

I did some benchmarking then and found that it really depends on the workload. The www app was 50% slower. Memcache 400% slower. Blender 5% slower. File compression 20%. Most single-threaded tasks no difference.

The takeaway was that workloads want some bandwidth per core, and shoving more cores into servers doesn't increase performance once you hit memory bandwidth limits.
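The bandwidth-per-core takeaway can be poked at with a crude sketch: run a full read-plus-write pass over a buffer in an increasing number of worker processes and watch the aggregate GB/s plateau once the memory channels are saturated. The buffer size, the 2x read-plus-write accounting, and the worker counts below are illustrative assumptions, not a proper STREAM run:

```python
import multiprocessing as mp
import time

SIZE = 64 * 1024 * 1024  # 64 MB per worker, well beyond cache

def copy_pass(_):
    """One full read + write pass over a private buffer; returns bytes/s."""
    buf = bytearray(SIZE)
    start = time.perf_counter()
    bytes(buf)                     # copies the buffer: reads SIZE bytes, writes SIZE bytes
    return SIZE * 2 / (time.perf_counter() - start)

def aggregate_gb_s(workers: int) -> float:
    """Rough aggregate bandwidth with N concurrent workers, in GB/s."""
    with mp.Pool(workers) as pool:
        rates = pool.map(copy_pass, range(workers))
    return sum(rates) / 1e9

if __name__ == "__main__":
    for w in (1, 2, 4, 8):
        print(f"{w} workers: ~{aggregate_gb_s(w):.1f} GB/s aggregate")
```

On a bandwidth-limited box the aggregate figure stops scaling well before the core count runs out, which is exactly why the 2x32GB machine fell behind: same cores, fewer channels.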

mirsadm | 2 years ago

This seems very unlikely. The CPU is almost always bottlenecked by memory.