top | item 37361597

spystath|2 years ago

More RAM is always nice, but I'm secretly hoping we'll start to see more ECC support in the future. With these humongous modules, even a teeny-tiny per-bit flip probability makes the chance of corruption non-negligible.
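Back of the envelope, the expected number of flips scales linearly with capacity. This sketch assumes a soft-error rate of ~50 FIT per Mbit (failures per 10^9 device-hours), a commonly cited ballpark that varies widely across studies and process nodes:

```python
# Rough expected bit flips per year, assuming ~50 FIT/Mbit.
# The exact rate is an assumption; published figures differ a lot.
FIT_PER_MBIT = 50
HOURS_PER_YEAR = 24 * 365

def expected_flips_per_year(capacity_gb: float) -> float:
    mbits = capacity_gb * 8 * 1024            # GB -> Mbit
    failures_per_hour = mbits * FIT_PER_MBIT / 1e9
    return failures_per_hour * HOURS_PER_YEAR

for gb in (2, 64, 1024):
    print(f"{gb:>5} GB: ~{expected_flips_per_year(gb):.1f} flips/year")
```

Under this assumed rate, a couple of gigabytes sees a handful of flips a year, while a terabyte-class machine sees thousands, which is the point being made above.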

Aurornis|2 years ago

These are individual memory chips. They can be used to build both ECC modules and non-ECC modules.

ECC modules just have more chips to store the extra parity information. In the high capacity RDIMM server market there are plenty of ECC options.

spystath|2 years ago

Oh yes, I understand that. I only wish ECC support in general would start getting more traction in consumer electronics. Nowadays (unless you go for super-noisy, super-expensive server hardware), maybe with an AMD processor a motherboard manufacturer will have a 20-links-deep document saying that these ECC modules may be supported, proceed at your own risk, might set your flat on fire, kill kittens, etc. When you had a couple of gigs of RAM it was probably irrelevant, but if you have multiple TB of RAM caching file accesses, ECC should become normalised.

hedora|2 years ago

I think inline ECC (the module performs the ECC) is mandatory with LPDDR4 (the error rates on current silicon are too high to leave it out), but link ECC (between the CPU and the module) is optional.

Note that link ECC + inline ECC don't give you end-to-end protection, since the controller in the memory module can still flip bits. DDR5 is moving to on-die ECC, which (unlike DDR <= 4's side-band ECC) also isn't end-to-end.

I'd like to see side-band ECC continue to exist, but I think it is going to be phased out entirely.

This article defines all the terms, but is very vague about what things are mandatory, or how reliable the error-correction schemes are. For instance, it carefully avoids saying that SECDED schemes detect all two-bit errors; instead it says they detect at least some:

https://www.synopsys.com/designware-ip/technical-bulletin/er...
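To make the SECDED guarantee concrete, here is a toy extended-Hamming sketch (Hamming(8,4) for brevity; DIMM side-band ECC uses the same construction at Hamming(72,64) scale, one codeword per 64-bit word). A properly constructed extended Hamming code does correct any single-bit error and detect any double-bit error:

```python
# Toy SECDED code: extended Hamming(8,4). Corrects any single-bit
# error, detects (but cannot correct) any double-bit error.
# Position 0 holds the overall parity bit; positions 1, 2, 4 hold
# the Hamming check bits; positions 3, 5, 6, 7 hold the data.

def encode(d):                                  # d: list of 4 data bits
    c = [0] * 8
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]                   # covers positions with bit 0 set
    c[2] = c[3] ^ c[6] ^ c[7]                   # covers positions with bit 1 set
    c[4] = c[5] ^ c[6] ^ c[7]                   # covers positions with bit 2 set
    c[0] = c[1] ^ c[2] ^ c[3] ^ c[4] ^ c[5] ^ c[6] ^ c[7]
    return c

def decode(c):
    syndrome = 0
    for i in range(1, 8):                       # XOR of positions holding a 1
        if c[i]:
            syndrome ^= i
    overall = 0
    for b in c:
        overall ^= b
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:                          # odd weight: single error
        if syndrome:
            c[syndrome] ^= 1                    # syndrome points at the bad bit
        else:
            c[0] ^= 1                           # error was in the parity bit
        status = "corrected"
    else:                                       # even weight, nonzero syndrome
        status = "double-bit error detected"
    return [c[3], c[5], c[6], c[7]], status

cw = encode([1, 0, 1, 1])
cw[5] ^= 1                                      # single bit flip in transit
data, status = decode(cw)
print(data, status)                             # [1, 0, 1, 1] corrected
```

With two flipped bits, the overall parity stays even while the syndrome is nonzero, so the decoder reports an uncorrectable error instead of silently "correcting" the wrong bit.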

toast0|2 years ago

> I'd like to see side-band ECC continue to exist, but I think it is going to be phased out entirely.

I doubt it will be phased out for servers. I haven't seen any indication that on-die ECC in DDR5 has a reporting mechanism, and reporting on RAM errors is important for server reliability.

bpye|2 years ago

I really wish we’d just get in-band ECC on normal consumer platforms. That way we’d need no special DIMMs: in applications where ECC is desired it could be enabled and the capacity penalty paid; in other applications it could be disabled and no capacity would be lost.
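The capacity penalty is easy to estimate. This sketch assumes a SECDED layout of 8 check bits per 64 data bits (the classic side-band ratio) stored in-band; real in-band implementations may use different, lighter layouts:

```python
# Capacity penalty of storing SECDED check bits in-band, assuming
# the classic 8-check-bits-per-64-data-bits layout. Actual in-band
# ECC implementations may reserve less.
DATA_BITS, CHECK_BITS = 64, 8

def usable_gb(gross_gb: float) -> float:
    return gross_gb * DATA_BITS / (DATA_BITS + CHECK_BITS)

print(f"64 GB gross -> {usable_gb(64):.1f} GB usable")
```

Under that assumption the overhead is 8/72, about 11% of gross capacity, paid only when the feature is enabled.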

gabereiser|2 years ago

I like this idea. 64 GB of RAM non-ECC, 48 GB in ECC. Dynamic, succinct, and enables more supply-chain crossover by not having two (three?) separate DIMM types.

jakobson14|2 years ago

Talk to Intel.

On AMD, ECC support is pretty much standard on every chip they make, and always has been. Even my shitty 4-core Phenom from over ten years ago on an el-cheapo motherboard supported swapping its regular DIMMs for ECC ones. You're never going to get ECC "for free", but it would be totally possible for everyone to pay the cost once and just move to ECC-only for everything from now on.

Except Intel, the company that brought software-locked hardware features to x86, loves to price-differentiate.

Aurornis|2 years ago

Having physical memory segments be different logical sizes at runtime depending on the ECC setting does not sound fun.

Having your system’s available memory fluctuate up and down based on how many segments are currently set to ECC also doesn’t sound fun.

Having developers manually turn ECC off for regions where it’s unimportant sounds like a lot of complexity for a relatively rare use case.

There is in-band ECC in some newer Intel designs, but it’s all or nothing. Adding that much complexity to memory management just to selectively disable it sounds like a lot to ask.

hinkley|2 years ago

Doesn’t DDR5 require ECC to function properly? I think we’ve gotten to the point that we need extended error correction as a mark of robustness. E2C2.

fweimer|2 years ago

It does, but this particular implementation is local to the module, and cannot be used for secondary purposes in addition to error correction, such as storing tag bits.

Keyframe|2 years ago

I actually look forward to (promised) future where "disk" storage is fast enough not to need RAM anymore.

brookst|2 years ago

The convergence of volatile and non-volatile storage is one of the most exciting upcoming technologies, and always will be.

ls612|2 years ago

With the failure of Optane I doubt that it will be coming anytime soon.

undersuit|2 years ago

The merging of CXL and NVME is just one frustrated vendor away.

bheadmaster|2 years ago

Runtime asserts and invariant checks in software can also help a lot with isolating bit-flip errors, with the nice side effect of also isolating the effects of software bugs.
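As a minimal sketch of what such a check looks like in practice, here is a CRC-guarded buffer (class and names are illustrative). Note that this only detects corruption; unlike hardware ECC, it cannot correct it:

```python
# Software-level integrity check: guard cached bulk data with a CRC
# and verify on every read. Detects in-memory corruption (bit flips
# or buggy writes) but cannot repair the data.
import zlib

class CheckedBuffer:
    def __init__(self, data: bytes):
        self._data = data
        self._crc = zlib.crc32(data)

    def read(self) -> bytes:
        if zlib.crc32(self._data) != self._crc:
            raise RuntimeError("integrity check failed: buffer corrupted")
        return self._data

buf = CheckedBuffer(b"decompressed image data")
assert buf.read() == b"decompressed image data"     # intact: passes

buf._data = b"\x00" + buf._data[1:]                 # simulate a bit flip
try:
    buf.read()
except RuntimeError as e:
    print(e)    # corruption detected, but the data is not recoverable
```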

GuB-42|2 years ago

I don't know if that is significant. Runtime checks tend to focus on small but critical parts of the data, like size fields. They usually don't check bulk data, like decompressed image data, or code, and they may not be effective if the data is in cache. Furthermore, they only detect errors, they don't correct them. The performance cost is also, I think, much higher than that of the extra RAM chip. Good coding practice for the critical path in software, but clearly not a substitute for dedicated hardware.

I have had defective RAM, and I got quite a bit of corruption before the first crashes. It is hardly noticeable when it is just a pixel changing color in a picture, but it is still something you don't want. ECC would have prevented that.

I know there is software designed to be resistant to random bit flips, like on satellites exposed to cosmic rays, but it is a highly specialized field. It is also a field that uses special chips, typically with coarser (and therefore less efficient) dies that are more resistant to radiation. You leave a lot on the table for that.

gumby|2 years ago

ECC is better handled in hardware: most of the time an error won’t happen, and the hardware can more easily interrupt the processor so the kernel can correct the problem, or signal a fault if the corruption isn’t correctable.

lazide|2 years ago

Those only help isolate somewhat predictable errors, which is rarely the kind of thing ECC is designed to protect against.

If it’s a random, once-in-several-billion reads/writes issue, software can at best stop the bad data from propagating further, and only sometimes identify it. That data is still lost.

ECC does forward error correction, which is extremely rare for the type of data protection you’re talking about. And if the data is corrupted in RAM (say, when it is initially loaded or read) before the software can apply FEC, there is nothing the software can do.

ElectricalUnion|2 years ago

I thought that the current wave of compiler correctness checking, zero-cost abstractions, JIT compilers and speculative processor behaviour were all about removing those "unnecessary" runtime asserts and invariant checks to get better performance.

mastax|2 years ago

Assuming the compiler doesn't optimize them out.

ikekkdcjkfke|2 years ago

All DDR5 has ECC.

pixl97|2 years ago

But from my understanding it does not have a means of reporting ECC triggers to the user, which is really one of the most important parts.

When ECC starts tripping on a device more often than completely at random is when you should look into what's going wrong. You may have overheating or failing hardware.

drzaiusapelord|2 years ago

Wikipedia:

> Unlike DDR4, all DDR5 chips have on-die ECC, where errors are detected and corrected before sending data to the CPU. This, however, is not the same as true ECC memory with extra data correction chips on the memory module.

So I'm not sure how this works, because I'm not sure whether "true" ECC is better, worse, or the same as on-die ECC. A casual googling suggests on-die has more advantages.