It's worth keeping in mind that the chipset has zero involvement in ECC. The CPU is directly attached to the memory slots. They're using the chipset as an expensive dongle.
Well, the chipset does enable the error detection and correction features: it's the chipset's responsibility to raise certain interrupts or assert certain signals in error cases. You may view this as artificial segmentation, but without the more advanced management engine in the Q680 and W680 chipsets, the Z690 and all lower chipsets, which carry the simpler "client" (i.e. consumer) management engine, can't enable ECC.
They do it with rear I/O too. Motherboards with anything but workstation or flagship consumer chipsets typically have an anemic port selection, which is silly, because for many people half the reason to build a desktop instead of buying a laptop is being able to plug in a lot of peripherals without a pile of hubs and docks.
Then again, the competition is working as expected: Ryzen had ECC unofficially for some time, and now Intel has it. There are plenty of other ways to segment users, e.g. memory channels, PCIe lanes, etc.
Exactly. $450 for a motherboard just to get ECC support is ridiculous. I don't know how it is with AM5, but on AM4, my understanding is that you could use ECC memory with many normally-priced motherboards. (Even if it wasn't "officially" supported.)
Mentioning W680 feels pointless. You've always been able to buy high-end workstation-class motherboards and stick ECC in them. The entire point of the article is that all computers should be using ECC RAM, not just the expensive, workstation class computers.
One of the main reasons I buy Xeon desktops is the ECC. With 128 GB of memory, and 1 bitflip/GB/year average error rate, it seems too risky to not use ECC for production work.
Real-world numbers are closer to 1 bit flip/GB/hour than per year, because bit flips are highly correlated.
“A large-scale study based on Google's very large number of servers was presented at the SIGMETRICS/Performance '09 conference.[6] The actual error rate found was several orders of magnitude higher than the previous small-scale or laboratory studies, with between 25,000 (2.5 × 10−11 error/bit·h) and 70,000 (7.0 × 10−11 error/bit·h, or 1 bit error per gigabyte of RAM per 1.8 hours) errors per billion device hours per megabit. More than 8% of DIMM memory modules were affected by errors per year.” https://en.wikipedia.org/wiki/ECC_memory
A random stick of non ECC memory might be far above average or have several errors per minute, but you just don’t know.
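The two rates quoted above differ by orders of magnitude. A quick sanity check of the arithmetic, assuming the decimal-gigabyte convention (10^9 bytes) that matches the study's "1.8 hours" figure:

```python
# Sanity-check the error rates discussed above, assuming decimal
# gigabytes (10**9 bytes), which reproduces the study's 1.8-hour figure.
BITS_PER_GB = 8 * 10**9

# Optimistic rate: 1 bit flip per GB per year, on a 128 GB machine.
flips_per_year = 1 * 128
print(flips_per_year)  # 128 expected flips/year, roughly one every 3 days

# Google-study upper bound: 7.0e-11 errors per bit-hour.
errors_per_gb_hour = 7.0e-11 * BITS_PER_GB   # about 0.56 errors/GB/hour
hours_per_error = 1 / errors_per_gb_hour
print(round(hours_per_error, 1))             # about 1.8 hours per error per GB
```

Even at the optimistic rate, a 128 GB machine sees a flip every few days; at the study's upper bound it sees dozens per hour.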