More RAM is always nice, but I'm secretly hoping we'll start to see more ECC support in the future. With these humongous modules, even a teeny tiny per-bit flip probability adds up to a non-negligible chance of corruption.
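To put rough numbers on the intuition (the FIT rate below is an assumed illustrative figure, not a measured one; real rates vary wildly with process, altitude, and methodology):

```python
# Back-of-the-envelope: expected bit flips per month for a given DRAM size.
# FIT_PER_MBIT is an assumed illustrative number, not a vendor spec.

FIT_PER_MBIT = 25          # assumed: 25 failures per 10^9 device-hours per Mbit
HOURS_PER_MONTH = 730

def expected_flips_per_month(gigabytes: float) -> float:
    megabits = gigabytes * 1024 * 8
    failures_per_hour = megabits * FIT_PER_MBIT / 1e9
    return failures_per_hour * HOURS_PER_MONTH

for gb in (8, 64, 1024):
    print(f"{gb:5d} GB -> ~{expected_flips_per_month(gb):.2f} expected flips/month")
```

The point is just the scaling: the expected flip count grows linearly with capacity, so what was a once-a-year curiosity at a few gigs becomes routine at terabyte scale.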
Oh yes, I understand that. I only wish ECC support in general would start getting more traction in consumer electronics. Nowadays, unless you go for super noisy, super expensive server hardware, your best hope is an AMD processor paired with a motherboard whose manufacturer has a 20-links-deep document saying that these ECC modules may be supported, proceed at your own risk, might set your flat on fire, kill kittens, etc. When you had a couple of gigs of RAM it was probably irrelevant, but when you have multiple TB of RAM caching file access, ECC should become normalised.
I think inline ECC (the module performs the ECC) is mandatory with LPDDR4 (the error rates on current silicon are too high to leave it out), but link ECC (between the CPU and the module) is optional.
Note that link ECC + inline ECC don't give you end-to-end protection, since the controller in the memory module can still flip bits. DDR5 is moving to on-die ECC, which, unlike the side-band ECC of DDR4 and earlier, also isn't end-to-end.
I'd like to see side-band ECC continue to exist, but I think it is going to be phased out entirely.
This article defines all the terms, but it is very vague about which features are mandatory and about how reliable the error-correction schemes are. For instance, it carefully avoids saying that SECDED schemes detect all two-bit errors; instead, it says they detect at least some:

https://www.synopsys.com/designware-ip/technical-bulletin/er...
> I'd like to see side-band ECC continue to exist, but I think it is going to be phased out entirely.
I doubt it will be phased out for servers. I haven't seen anyone report that DDR5's on-die ECC has a reporting mechanism, and reporting RAM errors is important for server reliability.
I really wish we’d just get in-band ECC on normal consumer platforms. That way we’d need no special DIMMs: in applications where ECC was desired, it could be enabled and the capacity penalty paid; in other applications, it could be disabled and no capacity would be lost.
I like this idea. 64 GB of RAM non-ECC, 48 GB with ECC. Dynamic, succinct, and it enables more supply-chain crossover by not having two (three?) separate DIMM types.
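The exact usable capacity depends on the code rate, which is implementation-specific; a quick hedged sketch of the arithmetic (the two layouts below are illustrative assumptions, not vendor specs):

```python
# Usable capacity under in-band ECC for a couple of assumed code rates.
# Which rate a real controller uses is implementation-defined; these are
# illustrative only.

def usable_gb(total_gb: float, data_bits: int, ecc_bits: int) -> float:
    """Fraction of raw capacity left for data when ECC is stored in-band."""
    return total_gb * data_bits / (data_bits + ecc_bits)

print(usable_gb(64, 64, 8))    # classic SECDED ratio, 8 check bits per 64
print(usable_gb(64, 48, 16))   # a hypothetical 3/4-rate layout -> 48 GB
```

A classic 8-check-bits-per-64 SECDED layout would leave roughly 56.9 GB of a 64 GB module usable; the 64-to-48 figure corresponds to a coarser 3/4-rate scheme.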
On AMD, ECC support is pretty much standard on every chip they make, and always has been. Even my shitty 4-core Phenom from over ten years ago, on an el-cheapo motherboard, supported swapping its regular DIMMs for ECC ones. You're never going to get ECC "for free", but it would be totally possible for everyone to pay the cost once and just move to ECC-only for everything from now on.
Except Intel, the company that brought software-locked hardware features to x86, loves to price-differentiate.
Having physical memory segments be different logical sizes at runtime depending on the ECC setting does not sound fun.
Having your system’s available memory fluctuate up and down based on how many segments are currently set to ECC also doesn’t sound fun.
Having developers manually turn ECC off for regions where it’s unimportant sounds like a lot of complexity for a relatively rare use case.
There is in-band ECC in some newer Intel designs, but it’s all or nothing. Adding extreme complexity to memory management to selectively disable it sounds like a lot to ask.
It does, but this particular implementation is local to the module, and cannot be used for secondary purposes in addition to error correction, such as storing tag bits.
Runtime asserts and invariant checks in software can also help a lot with isolating bit-flip errors, with the nice bonus of also isolating the effects of software bugs.
I don't know if it is significant. Runtime checks tend to focus on a small but critical part of the data, like size fields. They usually don't check bulk data, like decompressed image data, or code, and they may also not be effective if the data is in cache. Furthermore, they will only detect errors, not correct them. Also, the performance cost is, I think, much higher than the extra RAM chip. Good coding practice for critical paths in software, but clearly it is no substitute for dedicated hardware.
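To make the detect-versus-correct distinction concrete, here is a checksum sketch (CRC-32, purely illustrative): a mismatch tells you the data is bad, but gives you no way to get the original back.

```python
# A software integrity check can detect a flipped bit but not fix it:
# once the checksum mismatches, all you can do is fail loudly.
import zlib

def store(buf: bytes) -> tuple[bytes, int]:
    return buf, zlib.crc32(buf)

def load(buf: bytes, crc: int) -> bytes:
    if zlib.crc32(buf) != crc:
        raise ValueError("corruption detected (no way to recover the original)")
    return buf

data, crc = store(b"decompressed image data")
corrupted = bytes([data[0] ^ 0x01]) + data[1:]   # simulate a single bit flip
load(data, crc)                                   # intact data passes
try:
    load(corrupted, crc)
except ValueError:
    pass                                          # detected, but the data is lost
```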
I have had defective RAM, and I got quite a bit of corruption before the first crashes. It is hardly noticeable when it is just a pixel changing color in a picture, but it is still something you don't want. ECC would have prevented that.
I know there is software designed to be resilient to random bit flips, like for satellites exposed to cosmic rays, but it is a highly specialized field. It is also a field where they use special chips, typically with coarser (and therefore less efficient) dies that are more resistant to radiation. You leave a lot on the table for that.
ECC is better handled in hardware: most of the time it won’t happen, and the hardware can more easily interrupt the processor so the kernel can correct the problem or signal a fault if it’s not a correctable corruption.
Those only help isolate somewhat predictable errors, which is rare for what ECC is designed to protect against.
If it’s a random, once-in-several-billion-reads/writes issue, software can sometimes stop the bad data from propagating further, or at least identify it. That data is still lost.
ECC does forward error correction, which is extremely rare for the type of data protection you’re talking about. And if the data is corrupted in RAM (say, when initially loaded/read) before the software can apply FEC, there is nothing the software can do.
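When software does do forward error correction, the usual trick in radiation-tolerant code is redundancy plus a majority vote. A minimal sketch, which also shows the gap mentioned above: a flip that happens before the copies are made gets replicated faithfully and cannot be voted away.

```python
# Triple modular redundancy in software: keep three copies, majority-vote on read.
# A bit flipped *before* tmr_store runs is copied into all three replicas,
# so this only protects the window after replication.

def tmr_store(value: int) -> list[int]:
    return [value, value, value]

def tmr_read(copies: list[int]) -> int:
    a, b, c = copies
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise ValueError("all three copies disagree: uncorrectable")

copies = tmr_store(0b1010)
copies[1] ^= 0b0100            # flip a bit in one copy
assert tmr_read(copies) == 0b1010
```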
I thought that the current wave of compiler correctness checking, zero-cost abstractions, JIT compilers and speculative processor behaviour were all about removing those "unnecessary" runtime asserts and invariant checks to get better performance.
But from my understanding it does not have a means of reporting ECC triggers to the user, which is really one of the most important parts.
When ECC starts tripping on a device at anything other than completely random intervals, that's when you should look into what's going wrong. You may have overheating or failing hardware.
Wikipedia:

> Unlike DDR4, all DDR5 chips have on-die ECC, where errors are detected and corrected before sending data to the CPU. This, however, is not the same as true ECC memory with extra data correction chips on the memory module.
So I'm not sure how this works, because I'm not sure if "true" ECC is better/worse/same as on-die ECC. A casual googling shows on-die to have more advantages.
Aurornis|2 years ago
ECC modules just have more chips to store the extra parity information. In the high capacity RDIMM server market there are plenty of ECC options.