Non-binary increments in capacity, as opposed to a doubling in capacity or having 1 extra address bit. From article:
Instead of jumping straight from a 32GB DIMM to a 64GB one, DDR5, for the first time, allows for half steps in memory density. You can now have DIMMs with 24GB, 48GB, 96GB, or more in capacity.
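The half steps fall out of 24 Gb dies (1.5x a 16 Gb die) being used alongside the traditional power-of-two densities. A minimal sketch of the arithmetic, assuming the usual 8/16/32 dies per module (the die counts here are illustrative, not from the article):

```python
# Illustrative: DDR5 "half step" module capacities come from 24 Gb dies
# (1.5x a 16 Gb die) dropped into the same per-module die counts.

def dimm_capacities_gb(die_gbit, dies_per_dimm=(8, 16, 32)):
    """Module capacity in GB for a given die density (in Gbit)."""
    return [die_gbit * n // 8 for n in dies_per_dimm]

print(dimm_capacities_gb(16))  # [16, 32, 64] - classic power-of-two steps
print(dimm_capacities_gb(24))  # [24, 48, 96] - the new half steps
```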
And I was excited about the possible use of multiple signal levels per cell. Seriously though, what's the state of analog or multi-level memory? With rather weak data preservation guarantees, of course. It could reach breakthrough performance per watt for AI and simulation applications, and it's more attainable than quantum computing.
This seems like a marketing spin when they failed to make 32 Gigabit modules viable. It's the same thing as when AMD couldn't make 4 core CPUs reliably and sold a bunch as 3 core CPUs. "50% more cores!" "Sometimes you can unlock the 4th core!". Not that people who get a good deal and upgrade their system with these half-steps should be unhappy.
Does anyone with more knowledge of DDR5 memory know if 32 Gb modules are in use and if they are reliable and only double the cost of 16 Gb modules?
>This seems like a marketing spin when they failed to make 32 Gigabit modules viable.
No. It is exactly as the article stated. Instead of offering servers with 128GB of memory, they now have the option to offer 96GB. This also gives cloud providers (or simple VPS providers) the ability to mix and match the CPU-to-RAM ratio of their offerings within a single unit.
Or imagine, if you need more than 2TB of memory, instead of jumping straight to 4TB, you now have the option of 3TB.
For hyperscalers like Amazon, these extra options could easily save them hundreds of millions if not billions.
AFAIK 32 Gb dies would be too large to fit in the standard DDR5 package and thus they aren't made. Also presumably yield of 32 Gb dies would be lower than 24 Gb, although maybe not much due to sparing.
This is the kind of thing that will start happening as the price of transistors on ICs starts to rise in response to the end of Moore's law.
A recent episode of Asianometry, The 3-D Transistor Transition [1], pointed out that gate prices hit their minimum with 28 nanometer processes (13:17 in the video) and have now begun to rise.
So in the future, you'll pay more per gate, but they continue to be smaller and faster and lower power, so for the most part it's worth it.
As for non-binary RAM sizes, we'll have to get used to it.
I just wish they would work on making the memory we have more reliable, so that RowHammer and other attacks just wouldn't be able to exploit their flaws. It would be nice if ECC were standard everywhere.
I can't imagine a commercial environment where this really matters.
Sub-optimal ram install layout makes minimal difference in performance once you get past "enough" for your workload - for a huge percentage of corporate workloads - and for those where it does matter, budgets tend to allow for it.
We all know RAM costs will usually be marginal compared to software costs, whether that's licensing, development, implementation, or all three!
Maybe, in very cost conscious smaller envs this might allow for slightly more options under the "buy the smallest number of the largest size dimms that we can, so we upgrade later without taking anything out" purchasing pattern.
But my experience of such envs is that they're cashflow sensitive, and they're actually ok with you filling a machine with smaller, cheaper dimms then ripping and replacing to upgrade, even if the original ram is still depreciating, because that cost comes out of a different month's budget - especially if it's cheaper up front.
You do need to be upfront that that's the tradeoff decision you're making - but it'll be even less of an issue if you've got a way to reuse the replaced ram, even if it's of minimal benefit ( any use > wasted ).
For home users tho... um, nope, 8-32gb seem to cover 99% of users and probably even 50-80% of HN readers.
Gotta be honest tho, was expecting this to be about gender fluid memory (!?) or perhaps someone finding a way to efficiently encode more than 0/1 for better density.. so it was definitely a let down of an article.
Plenty of 12GByte laptops around, usually with 4 GByte soldered and 8 GByte added. Usually one would do that with two or three modules; it's just the split-bus nature of DDR5 that makes for slightly weird packaging.
Actually my phone has 12GByte now that I think about it. It's more of a packaging and mechanical footprint thing than anything else.
Doesn't matter above four modules ofc, the server world will see very little of this.
How does this work on a signaling level? The great advantage of doubling DRAM is that there are no invalid addresses. Every possible pointer you can generate is a word you can address. Is this invariant now horribly broken? E.g. a memory controller can request R/W to addresses that don't exist?
There are plenty of "invalid addresses" in DDR5 before this. All existing DDR5 modules, regardless of size, use the same command and addressing scheme, which allows for a 3-bit chip id, 3-bit bank group id, 2-bit bank id, 16-bit row address and 7-bit column address. [0] This, combined with the 8-bit minimum burst (and the ability to only address at 8-bit granularity), means that every single normal x64 DDR5 module has an "address space" of 256GB.
Any modules smaller than that do not implement all of the features of the addressing system, and the memory controller has to query them beforehand to find out which parts are implemented, and so which addresses are safe to use. (Different modules might use different parts of the system -- as in, modules of the same capacity implemented with older manufacturing tech might use more chip id bits and fewer row address bits or bank groups.) In no way is every possible pointer a word you can address.
[0]: and one as of yet unimplemented additional reserved bit for either row address or chip id that doubles capacity if it's ever used.
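The field arithmetic above can be sketched in a few lines. A hedged sketch: the field widths come from the comment, but the 128-byte transfer size (8 bytes/beat on an x64 bus times a 16-beat burst) is my assumption for how the 256GB figure falls out, not something the comment spells out.

```python
# Address-space arithmetic for a normal x64 DDR5 module, using the
# field widths quoted in the comment above.
FIELD_BITS = {
    "chip_id": 3,
    "bank_group": 3,
    "bank": 2,
    "row": 16,
    "column": 7,
}

addressable_locations = 2 ** sum(FIELD_BITS.values())  # 2^31 locations
bytes_per_burst = 8 * 16  # assumption: 8 bytes/beat (x64) * 16-beat burst

total_bytes = addressable_locations * bytes_per_burst
print(total_bytes // 2**30, "GiB")  # 256 GiB
```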
For years I've noticed my RAM vendors defining a kilobyte as 1000 bytes, and megabytes and gigabytes as multiples of that, so I assume a 1 gigabyte chip will have about 7% fewer cells than the binary maximum. I don't have an intuitive idea of why 25% fewer cells was not viable till now.
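The decimal-vs-binary gap grows with each prefix, which is worth checking: the often-quoted 4.6% is the megabyte-level shortfall, while at gigabyte scale it is closer to 6.9%. A quick sketch:

```python
# How far short each decimal unit falls of its binary counterpart.
for name, power in [("kilobyte", 1), ("megabyte", 2),
                    ("gigabyte", 3), ("terabyte", 4)]:
    shortfall = (1 - 1000**power / 1024**power) * 100
    print(f"{name}: {shortfall:.1f}% fewer bytes than the binary unit")
```

This prints roughly 2.3%, 4.6%, 6.9% and 9.1% for kilo through tera.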
The switch up of terminology annoys me still. Computers don't operate on decimal. Their hardware is built as such. It just seems like a way to mislead customers into buying less data than they think they're getting.
[1] https://www.youtube.com/watch?v=i3dDslo9ibw