cestith | 3 days ago
Now, almost everything on the server side is a VM or a container. We have lots of neighbors who want to share the CPU and the RAM, and the RAM is the bigger constraint because the CPUs have 192 cores and each of those cores does a dozen times as much work as a decade ago. Heck, we used to have the memory controller on the motherboard and the last level of cache was a chip or module of SRAM outside the CPU.
We're also in a situation where the speed gap between CPU and RAM has skyrocketed, but the caches have gotten far larger and much smarter. Smaller data, arranged thoughtfully in RAM, runs faster because it makes better use of the cache.
Now that RAM is expensive and shared, and program and data size and layout are bound to cache behavior, optimization can lean heavily into optimizing for RAM again.
Some of these arguments hold true for desktop systems as well.
I have wondered for years when the time will come that instead of such huge and smart caches, someone will just put basically register-speed RAM on the chip and swap to motherboard RAM the way we swap to disk. HBM is somewhere close, being a substrate stacked in the package but not in the CPU die itself.