top | item 39438598

MertsA | 2 years ago

But if it's a consistent fault, like the silent data corruption covered in the linked paper, redoing the computation is still going to end up with no way to identify which core is faulty. If it's an intermittent fault, then even for hard realtime you can accomplish that with one core, just compute 3x and go with majority result.
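The "compute 3x on one core and take the majority" idea can be sketched in a few lines of Python (the function names and structure here are my own illustration, not anything from the paper). Note the built-in limitation the comment points out: this only catches intermittent faults, since a consistent fault would corrupt all three runs identically.

```python
from collections import Counter

def compute_with_tmr(f, *args):
    """Run f three times on the same core and return the majority result.

    Masks intermittent (transient) faults; a consistent hardware fault
    would produce the same wrong answer all three times and slip through.
    """
    results = [f(*args) for _ in range(3)]
    winner, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: all three runs disagree")
    return winner

# Example: a correct function trivially agrees with itself.
print(compute_with_tmr(lambda x: x * 2, 21))  # prints 42
```

The 3x time cost is the trade-off vlovich123 mentions below: triplicated hardware spends money instead of wall-clock time.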

vlovich123 | 2 years ago

Yup exactly. The only way independent hardware can help is if the fault is state-dependent on the hardware (e.g. differences in behavior due to thermal load, or some internal state corruption), in which case repeated computation may not help if the repeats aren't sufficiently decoupled in time to get rid of that state. The other thing with independent hardware is that you don't pay a 3x performance penalty (you pay a 3x cost penalty instead). That being said, none of these fault modes is really what the paper is discussing.

The other one that freaks me out is miscompilation by compilers and JITs in the data path of an application. We're using these machines to process hundreds of millions of transactions and trillions of dollars - how much are these silent mistakes costing us?

paganel | 2 years ago

I think that, strictly looking at money-related operations, things can still be managed/double-checked externally, i.e. by the real world. Whatever mistakes/inconsistencies show up, there's still a "hard reality" out there that will start screaming "hey! this money figure is not correct!", because people tend to notice big money discrepancies, and the "mistakes" are, generally speaking, reversible when it comes to money.

What's worrying is when systems like these get used in real-time life-and-death situations, where there's basically no reversibility because that would imply dead people returning to life. Take the code used for stuff like outer space exploration: sure, right now we can add lots and lots of redundancies and checks to the software used in that domain, because the money is there to be spent and we still don't have that many people out in space. But what happens when we think of hosting hundreds, even thousands of people inside a big orbital station? How will we make sure that all the safety-related code for that very big structure (certainly much bigger than anything we have in space now) doesn't cause the whole thing to go kaboom because of an unknown-unknown software error?

And leaving aside scenarios that aren't here yet: right now we've started using software more and more in warfare (for example battle simulations on which real-life decisions are based). What will happen to the lives of soldiers whose conduct in war has been led by faulty software?

jorticka | 2 years ago

If it's consistent and persistent, wouldn't that classify as broken hardware requiring a device change?

Even with 3 chips, if one is permanently wrong you're left with only 2 working ones, so no redundancy remains for further degradation.

> just compute 3x

That might be difficult if the CPU is broken. How can you be sure you actually computed 3 times if you can't trust the logic?

MertsA | 2 years ago

>wouldn't that classify as broken hardware requiring device change?

Yes but you need to catch it first to know what to take out of production.

>That might be difficult if the CPU is broken. How can you be sure you actually computed 3 times if you can't trust the logic?

That's kind of my point. Either it's a heisenbug and you never see those results again when you repeat the original program, or it's permanently broken and you need to swap out the sketchy CPU. If you only care about the first case, then you only need one core. If you care about the second case, then you need 3 if you want to come up with an accurate result instead of just determining that one of them is faulty. It's like that old adage about clocks on ships: either take one clock or take three, never two.
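The "three units, not two" point can be made concrete with a tiny voter sketch in Python (an illustration of my own, not from the paper): with three independent units, a majority both gives you the correct result *and* identifies which unit is faulty, whereas with two you only learn that they disagree.

```python
from collections import Counter

def vote(results):
    """Majority-vote over {unit_name: value} from independent units.

    Returns (majority_value, list_of_faulty_units). With only two units
    a disagreement gives no majority, so you can't tell which is wrong.
    """
    counts = Counter(results.values())
    majority, n = counts.most_common(1)[0]
    if n <= len(results) / 2:
        raise RuntimeError("no majority: cannot identify the faulty unit")
    faulty = [unit for unit, value in results.items() if value != majority]
    return majority, faulty

# Three units: one bad unit is outvoted and identified.
print(vote({"cpu0": 42, "cpu1": 42, "cpu2": 99}))  # prints (42, ['cpu2'])
```

With two units, `vote({"cpu0": 42, "cpu1": 99})` can only raise: each value appears once, so there's no quorum to say which clock - or CPU - to believe.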