Apple did a lot right with this change to make memory fast. I can see AMD and Intel adopting a similar strategy and putting something like 16 GB of DRAM on-chip. Need more than that? Then add “L2 DRAM” on an external DIMM. 16 GB will cover most people’s use cases, and with the ability to add L2 DRAM the high-memory-usage cases are covered too. (I remember when you could buy modules with L2 cache on them back in the day; 486-era boards had them, I think. This is just taking it to the next level.)
yvdriess|2 years ago
Intel already launched a processor with 16 GB of on-package MCDRAM in 2016 (the Knights Landing Xeon Phi). You can even buy an Intel Xeon with 64 GB of on-package HBM today. Nvidia has likewise been packaging HBM with its server GPUs.
Embedded DRAM (eDRAM) has been used for a long time in the mobile and console space (the Nintendo GameCube, for example), as well as in IBM's POWER7 and Intel's Haswell eDRAM products. However, using a logic process node to make DRAM cells is wasteful. Packaging technologies have advanced enough that you now regularly see ordinary DRAM dies (LPDDR, HBM) being put on-package.
But all of that is packaging and manufacturing technology. We're still talking to DRAM over a memory bus like we're still living in the '80s. The true innovation I'm looking out for is a company sticking its neck out and using a different communication standard to talk to the DRAM modules. Something like the CXL.mem protocol, which is used in the server space to talk to memory expansion modules.
kuschku|2 years ago
The number of people who need memory speeds and latencies beyond what SO-DIMM and CAMM can deliver, but who only need 16 GB of RAM, is absolutely tiny.
tjoff|2 years ago
Is there a good overview of how much of a benefit the on-chip RAM is?
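I haven't seen a single overview, but you can get a rough number on your own machine with a STREAM-style "triad" probe. A minimal sketch in Python/NumPy (the function name, array size, and byte accounting are my own, and this is nowhere near as careful as the real STREAM benchmark):

```python
# Rough STREAM-style "triad" memory-bandwidth probe -- a sketch, not a
# rigorous benchmark; array size, the NumPy build, and the OS all matter.
import time
import numpy as np

def triad_gbps(n: int = 1 << 24) -> float:
    """Approximate sustained bandwidth in GB/s for a = b + 3*c."""
    b = np.ones(n)           # 128 MiB each at n = 2**24 float64s,
    c = np.full(n, 2.0)      # large enough to defeat on-die caches
    a = np.empty(n)
    t0 = time.perf_counter()
    np.multiply(c, 3.0, out=a)   # pass 1: a = 3*c  (read c, write a)
    np.add(a, b, out=a)          # pass 2: a += b   (read a and b, write a)
    secs = time.perf_counter() - t0
    moved = 5 * n * 8            # bytes touched across both passes
    return moved / secs / 1e9

if __name__ == "__main__":
    print(f"triad bandwidth: {triad_gbps():.1f} GB/s")
```

Comparing the number this reports on an M-series machine against a comparable SO-DIMM laptop gives at least a ballpark for the on-package win, though it says nothing about latency.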