Bit flips are totally real; at scale you will definitely see them on large queries. There was a fun talk at DEFCON on bitsquatting: registering domain names one bit off from popular ones and then accepting all the incoming connections. Attacks like rowhammer similarly abuse induced bit flips. Supposedly Microsoft can detect solar activity from the number of Windows crash logs they receive.
I remember reading somewhere about that talk being debunked. Maybe someone more resourceful than me can find it.
It was something about being more likely to be a human typo or a config change that rolled out to a bunch of machines. The statistics didn't add up, and it wasn't plausible that bit flips caused it.
"Supposedly" is false. The sun doesn't produce cosmic-ray-energy particles, and soft errors aren't affected by sunspot activity. They are affected by altitude and space weather, but not solar activity.
I was in the audience of the talk. All devices should be required to use ECC, because non-ECC memory is a security risk. Not as much of one as in the http:// era, but silent corruption across networks and systems is a thing.
ECC is good, and I genuinely wish it were more common. Thankfully, Ryzen CPUs support ECC by default (except for pre-7000 series with integrated graphics that aren't "Pro" versions), so long as the motherboard does, too (like all ASRock that I've seen). I'm running several Ryzen servers with ECC.
On the other hand, there are many, many systems out there that don't have ECC, nor do they have the option to have ECC. While every video on Youtube wants us to believe that the difference between 580 and 585 frames per second in some silly game or another makes all the difference in the world, for me the difference between a system that runs 10% slower and one that crashes in the middle of the night is actually significant. I test all my systems at a certain memory frequency, then back off to the next slower frequency just to be sure.
That doesn't stop memory errors from happening, but most systems have lived their entire lives without having random crashes or random segfaulting. I consider that worthwhile.
Crashes in the middle of the night are not what worries me. Who cares. It's silent data loss that can go unnoticed for a very long time. And not just a single bit. If the flip hits file system structures or file layout you can have massive silent data loss.
> so long as the motherboard does, too (like all ASRock that I've seen).
I built a home server last year with an ASRock X570M Pro4 [0] with a Ryzen 4750 PRO (which I had to source OEM from Aliexpress as it's not sold direct). I'm not sure what's the current situation, but the only RAM I could find for it was the Kingston Server Premier KSM32ED8 [1], and the ECC premium was not fun to pay.
It's not just system support; availability of modules is bad too.
Got an HP that has both an AMD Pro APU and DDR5 slots, with no soldered RAM, i.e. all the requirements.
It was $500 to $1500 depending on configuration. Then 16 or 32 GB of ECC SODIMM runs over $2000 for regular consumers! And that's if you can find it in stock!
I think you overstate the problem here. Chances are, unless you’ve addressed other more pertinent issues, simply using ECC memory isn’t going to stop systems from crashing in the middle of the night.
10% performance difference in exchange for maybe crashing slightly more often would be huge for people who only really use their PCs for gaming.
HN readers seem to have a skewed idea of how useful ECC is while pretending the downsides don't exist. Not everyone is primarily using their system as a workstation.
A bit over 20 years ago I had a PC with a memory stick that had gone bad, but not bad enough that it was crashing all the time... it crashed often enough running Windows 98 apps that I attributed all crashes to software nonsense.
Back then it was recommended to run a defragger every so often, so I set up a cron job to run it every Saturday night or something like that. The net result was that every file block that got moved made a trip through memory with some small probability of getting corrupted. Often the errors were in files that weren't used very often, so I didn't immediately notice. After many months of this, I started noticing PDF files that were corrupted, or MP3 files that would hiccup in the middle even though they used to play perfectly. Sadly, I had ripped my 500-ish CD collection and then gotten rid of the physical CDs.
That reminds me of how I accidentally tracked memory issue to the failing power supply.
I noticed (after some Windows bluescreens) in memtest that the memory was showing some errors. Ordered another 16 GB pair, replaced it and... the problem persisted.
Suspecting the motherboard, I chalked it up to the mobo and pretty much said, "Well, I'm not replacing the mobo now; it will have to wait for the next hardware refresh." Gaming PC, so no big deal. And now I had 32 GB of RAM in the PC.
Weirdly enough, problem only happened when running on multi-core memory test.
Cut to about a year later, and my power supply just... died. Guessing bad caps, I just ordered another and thought nothing of it. On a whim I ran memtest and...
nothing. All fixed. Repeated it a few times and it was just fine; no bluescreens for ~2 years now, too.
I definitely want my next machine to have ECC, but the DDR4 consumer ECC situation looks... weird. I'm not sure whether I should be happy with on-chip ECC; I'd really prefer to have the whole CPU-to-memory path ECCed.
Two things. Firstly, I don't think any conclusions can be made about whether dd or dd-rescue is more susceptible to bit flips. It could be that both allocated a buffer, and dd-rescue just happened to be handed the area of memory with the fault in it, which it reused multiple times, where when dd was run that area of memory was used by something else. Memory mapping and usage in a real operating system is highly non-deterministic due to the sheer amount of things that affect it.
Secondly, once a good list of known faulty memory addresses had been created by memtest, one can tell the operating system not to use them. Then you can keep using your old hardware without the reliability problems. Although, it is possible that further areas of memory will subsequently fail, and without ECC, you'll still be vulnerable to random (cosmic ray-induced) bit flips.
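On Linux, one common way to do this is at boot time. A sketch assuming GRUB, with a placeholder address/mask pair standing in for whatever memtest actually reported:

```shell
# /etc/default/grub -- exclude a faulty region reported by memtest.
# Each pair is "address,mask": address bits selected by the mask must
# match, so one pair can blacklist a whole aligned region.
GRUB_BADRAM="0x7ddf0000,0xffffc000"

# Alternative without GRUB support: reserve the region via a kernel
# boot parameter (here, 16 KiB starting at the faulty address):
#   memmap=16K$0x7ddf0000
```

After editing, regenerate the GRUB config (e.g. `update-grub` on Debian-based systems) and reboot; the kernel will then never hand out pages from that region.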
I ran a cluster of ~30k blade-based computers booting entirely off iPXE. They didn't have any onboard SSD/disk storage or ECC memory. Every day, a few of them would randomly lock up; they'd reboot with a fresh network image and keep on humming.
I've had a lot of really strange bugs and data loss with my current build (Ryzen with G.Skill memory). After running memtest for 24 hours, I finally saw that two of the four RAM sticks were faulty (two bit flips each, only rarely and on a specific test). The company replaced them, but now, a year later without any issues, I have another one that failed in exactly the same way. This is the last time I build a non-ECC system for myself.
My motherboard isn't rated for more than DDR4-3200 with my old CPU, a Ryzen 7 2700. I could set my memory's XMP profile and run at DDR4-3466, and memtest would be stable for more than 24 hours but would error before 48. I backed off: DDR4-3400, DDR4-3333, DDR4-3266... finally stable in memtest for 96 hours, then boot into Windows, run the Prime95 Blend workload, and 3266 crashes in hours. I finally found a little note in my motherboard manual that older CPUs are limited to DDR4-3200. Set that speed: rock solid. I was even able to tighten the JEDEC timings with guidance from the second XMP profile for DDR4-3133.
Gigabyte really did mean DDR4-3200 was the limit for Pinnacle Ridge and older AMD cpus.
Heard a similar story from a friend last week: a faulty RAM stick as well. I'm glad I bought a Threadripper with ECC instead (worth waiting for a Lenovo sale and buying the RAM separately).
You might want to take a look at your PSU. That seems like a suspicious amount of RAM failure to me. How old is it and what model, if you don't mind me asking?
PSU is something I never cheap out on. Always pays for itself in the end. A bad PSU can kill your whole system.
Amazing technical write-up. But if there's no cause for alarm based on SMART, I would just run the memtest right then, because that's always my go-to for weird undiagnosed problems. I find it's usually not the problem, although when it has been, I'd already wasted a silly amount of time before checking (just like in this case!).
And if there was cause for alarm, I would think long and hard about imaging from the original computer at all. With certain failure modes in drives, just reading could cause more corruption; each failed attempt could lose data.
But yeah, happy you did it this way in the end, because I learned a ton from the resulting blog post!
Well, the reason ECC mattered here is that the RAM was bad. But modern Macs don't come with user-serviceable RAM at all, so a problem like this is a support ticket anyway, and I'm not even sure there's a true equivalent of Memtest86 for modern Macs in the first place. So if it was a RAM problem, there's no point in diagnosing it even if you could; just send your Mac in when you start having issues that look like bad RAM.
Even with ECC, it's incredibly hard to know that a given one-off issue isn't a memory error, because even ECC can't detect 100% of memory issues. But without ECC, it's also nearly impossible to know if something is a memory error. If it's bad RAM, the same address will likely continue to exhibit bad behavior, but if it's a solar flare, you're never going to know the difference; you will just get incorrect behavior that may or may not crash, and it will be completely impossible to reproduce.
One big reason you don't hear it as much is there are not nearly as many data centers filled with Macs. There are definitely a few, and I bet if you got an experience report from them, they could give some idea of how visible memory errors are on Macs (although it's hard, because again, if you don't have ECC, there's not really a good way to know if something is a memory error; you can only really postulate.)
https://youtu.be/aPd8MaCyw5E ("ShmooCon 2014: You Don't Have The Evidence - Forensic Imaging Tools") was quite an eye-opening talk about common tools, like the article-mentioned `dd` (and its cousin `ddrescue`), and how they deal with I/O errors.
To be clear, I do not believe that the tools are at fault - rather, the SATA/SAS/IDE controllers have a different design goal, and software tools can only do so much.
Tools like DeepSpar (HW+SW) and PC-3000 (also HW+SW) allow a scary level of nitty-gritty access to the hardware (including flashing SSD/HDD controller firmware in case it went pear-shaped). For data recovery, be it in a forensic context or for retrieving important irreplaceable data, I have always had a nerd-lust for those tools. Used them at a previous job, but can't ever justify the price for personal and very infrequent use. :)
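For what it's worth, GNU ddrescue's mapfile-driven retry behavior is one reason it's preferred over plain dd on failing drives. A typical two-pass invocation (device and file names illustrative):

```shell
# Pass 1: grab everything readable quickly, skipping around bad areas
# (-n = no scraping), with direct disc access (-d). The mapfile records
# which regions succeeded and which failed.
ddrescue -d -n /dev/sdX disk.img disk.map

# Pass 2: retry just the bad sectors, up to three times (-r3); thanks
# to the mapfile, already-recovered areas are never re-read.
ddrescue -d -r3 /dev/sdX disk.img disk.map
```

The key design difference from dd is that each failed read narrows what gets retried, instead of stressing the dying drive with full re-reads.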
>Does increased heat increase the likelihood of memory errors? I think it does.
I just got through a round of overclocking my memory. Yes, heat does.
>tRFC is the number of cycles for which the DRAM capacitors are "recharged" or refreshed. Because capacitor charge loss is proportional to temperature, RAM operating at higher temperatures may need substantially higher tRFC values.
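As a rough sketch of the quoted relationship: DRAM vendors specify tRFC in nanoseconds, but the controller programs it in memory-clock cycles, so faster clocks (and any temperature derating of the nanosecond figure) push the cycle count up. Assuming an illustrative 350 ns part:

```python
import math

def trfc_cycles(trfc_ns, mt_per_s):
    """Convert a tRFC given in nanoseconds to memory-clock cycles."""
    # DDR transfers on both clock edges, so the clock in MHz is half
    # the MT/s data rate.
    clock_mhz = mt_per_s / 2
    # Round up: the refresh must last at least trfc_ns.
    return math.ceil(trfc_ns * clock_mhz / 1000)

print(trfc_cycles(350, 3200))  # 560 cycles at DDR4-3200
print(trfc_cycles(350, 2133))  # fewer cycles at a slower clock
```

If a hotter module effectively needs a larger nanosecond figure, the required cycle count grows accordingly, which is why overclocking guides suggest loosening tRFC at higher temperatures.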
Upgrade to DDR5 RAM, the latest standard, which has on-die ECC but is not as good at spotting bit flips as proper ECC memory with a separate extra error-correction chip.
Whilst proper ECC RAM and motherboards exist, I'm surprised that a cheaper but equally good solution doesn't, although I know some would argue that DDR5 is a step in the right direction of a marathon.
I guess the markets know best and chase the numbers, assuming they are also using proper ECC memory and binary-coded decimal rather than floating-point arithmetic, which introduces errors; central banks have been doing that for decades.
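The floating-point point is easy to demonstrate. Decimal arithmetic (as in Python's `decimal` module, conceptually similar to BCD) avoids binary rounding error for money-like values:

```python
from decimal import Decimal

# 0.1 and 0.2 have no exact binary representation, so their float sum
# is not exactly 0.3:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Decimal works digit-by-digit, so decimal fractions stay exact:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

This is a representation issue, not a bit-flip issue, but it is the reason financial systems historically favored decimal arithmetic.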
“There still exist non-ECC and ECC DDR5 DIMM variants; the ECC variants have extra data lines to the CPU to send error-detection data, letting the CPU detect and correct errors that occurred in transit.”
DDR5 has enough ECC on-die to make errors within the chips themselves effectively impossible. It doesn't provide error data to the CPU, though, so errors in transit can still occur. That's really unlikely, though, and anything not mission-critical will no longer need the extra ECC computation on the CPU side. (DDR5 performs this correction on the module itself.)
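For anyone curious what the correction mechanically looks like: side-band ECC DIMMs typically use a SECDED code over 64-bit words (Hamming(72,64) style). A toy Hamming(7,4) in Python shows the single-bit-correction idea (an illustration, not the actual DDR5 on-die code):

```python
def hamming74_encode(d1, d2, d3, d4):
    """Encode 4 data bits into a 7-bit codeword (positions 1..7;
    parity bits sit at the power-of-two positions 1, 2, 4)."""
    p1 = d1 ^ d2 ^ d4  # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4  # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(codeword):
    """Recompute the parity checks; the syndrome equals the 1-based
    position of a single flipped bit (0 means no error detected)."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the faulty bit back
    return c, syndrome

good = hamming74_encode(1, 0, 1, 1)
bad = list(good)
bad[4] ^= 1                 # simulate a single bit flip at position 5
fixed, pos = hamming74_correct(bad)
print(fixed == good, pos)   # True 5
```

A real SECDED code adds one more overall parity bit so that double-bit errors are detected (though not corrected), which is what the "D" in SECDED stands for.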
> To even detect this, I needed the patience and discipline to verify the checksum on a 500GB file! Imagine how much more time I could have wasted if I didn't bother to verify the checksum and made use of an important business document that contained one of the 14 bit flips?
Unpopular-opinion counterpoint - the odds of this actually happening are vanishingly unlikely. Many file formats have built-in integrity checks and tons of redundancies and waste. I wouldn't want to risk handling extremely valuable private keys or conducting high value cryptocurrency transactions or something, I suppose, on a machine without ECC memory, but that just doesn't really come up in most knowledge worker or end consumer scenarios.
The odds of actually getting bit by this in a way that matters to you are really low, which is why nobody cares.
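For the curious, the checksum verification the quoted author describes is cheap to script even for a 500 GB file, since the hash can be streamed. A sketch (the function name is mine):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in 1 MiB chunks, so arbitrarily
    large files are hashed without loading them into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Record the digest when the copy is made and compare later; a single flipped bit anywhere in the file changes the digest completely, which is exactly how the 14 bit flips were caught.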
hayst4ck|3 years ago
DEFCON Talk: https://www.youtube.com/watch?v=aT7mnSstKGs
https://en.wikipedia.org/wiki/Bitsquatting
https://en.wikipedia.org/wiki/Row_hammer
rozab|3 years ago
https://blog.mozilla.org/data/2022/04/13/this-week-in-glean-...
1letterunixname|3 years ago
https://en.wikipedia.org/wiki/Soft_error
cassianoleal|3 years ago
[0] https://www.asrock.com/MB/AMD/X570M%20Pro4/index.asp
[1] https://www.kingston.com/en/memory/server-premier/ddr4-3200m...
yjftsjthsd-h|3 years ago
The same ones, or random new machines every time?
wa2flq|3 years ago
My iMac Pro has it as well.
undersuit|3 years ago
https://github.com/integralfx/MemTestHelper/blob/oc-guide/DD...
WirelessGigabit|3 years ago
If anyone has the link, it's missing from my collection...
moremetadata|3 years ago
https://en.wikipedia.org/wiki/DDR5_SDRAM#:~:text=Unlike%20DD....
https://en.wikipedia.org/wiki/Floating-point_error_mitigatio...
undersuit|3 years ago
https://www.anandtech.com/show/18732/asrock-industrial-nucs-...