robotsteve2|3 years ago
You really need some kind of dedicated cosmic ray detector nearby as a control. If the flux of cosmic rays into the detector is orders of magnitude lower than the rate of bit errors you ascribe to cosmic rays, it's probably some hardware/software issue and not the cosmic rays.
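The sanity check in the comment above is just a rate comparison. A minimal sketch, with entirely hypothetical numbers (nothing here is measured data):

```python
# Hedged sketch of the control logic suggested above; all rates are
# assumed for illustration, not real measurements.
detector_rate_per_hour = 0.5      # cosmic-ray hits seen by the control detector (assumed)
bit_error_rate_per_hour = 500.0   # memory bit errors ascribed to cosmic rays (assumed)

ratio = bit_error_rate_per_hour / detector_rate_per_hour
if ratio > 100:                    # orders of magnitude more errors than rays
    verdict = "likely a hardware/software issue, not cosmic rays"
else:
    verdict = "cosmic rays remain a plausible cause"

print(ratio, verdict)
```

With these made-up numbers the bit-error rate exceeds the measured cosmic-ray flux a thousandfold, which is the situation the comment says should make you suspicious of the cosmic-ray explanation.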
bqmjjx0kac|3 years ago
Couldn't it have something to do with the physical layout of memory? Perhaps those page-boundary-adjacent addresses present a larger physical target, e.g. on the bus.
Of course I am wildly speculating right now. I'd love to see the article if you have a link!
AshamedCaptain|3 years ago
Even at the processor level, every single transistor has a rated mean time between failures (MTBF). Sure, it may be astronomical, but you have a lot of transistors, so in practice a random bit flip is not such a rare event. Designers actually explore MTBF-versus-power trade-offs here, and there is even a fascinating research area of "fault-resilient computing".
Every clock domain crossing has its own MTBF (google "metastability"). Again, these are very high (billions of years if done properly), but you will have plenty of such crossings, and their number keeps growing with modern, more asynchronous designs.
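The "billions of years" figure comes from the textbook synchronizer model, MTBF = exp(t_r/tau) / (T_w * f_clk * f_data). A quick sketch with assumed parameters (none of these numbers are from the comment):

```python
import math

# Standard metastability MTBF model for a clock-domain-crossing
# synchronizer. All parameter values below are assumptions chosen
# to be roughly plausible for a modern process, not real device data.
tau = 20e-12        # metastability resolution time constant, seconds (assumed)
T_w = 100e-12       # metastability capture window, seconds (assumed)
f_clk = 500e6       # receiving clock frequency, Hz (assumed)
f_data = 100e6      # data toggle rate, Hz (assumed)
t_r = 1 / f_clk     # resolution time allowed: one full clock period

# MTBF grows exponentially with the resolution time available.
mtbf_seconds = math.exp(t_r / tau) / (T_w * f_clk * f_data)
print(mtbf_seconds / (3600 * 24 * 365))  # MTBF in years
```

The exponential term is why one extra synchronizer stage (doubling t_r) takes the MTBF from marginal to astronomical, and also why a single chip with thousands of crossings still accumulates a non-negligible total failure rate.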
Processors are quite unreliable things.
gnufx|3 years ago
If you look at the ambient gamma-ray spectrum in a semiconductor detector (which would be germanium rather than silicon), the main background you see is typically from concrete. I'm ashamed to say I've forgotten the energy of the K-40 line, but it's in the region of 1500 keV. (Ironically, the large concrete blocks used for shielding would be regarded as a significant radiation hazard if all the activity in them were concentrated.)