FAA and NTSB have had to deal with this for decades. Aviation safety has steadily improved, because they were, generally, forward-looking. Not so much "who do we blame?" but "what do we do to avoid this in the future?"
That approach works for transportation, but not for nukes, and AI-enhanced robotics is closer to nukes in many ways, even though robotics in general runs the gamut from replicants and Skynet all the way down to toaster ovens and Juiceros.
It's okay if some toaster ovens burn down a few houses, and then get recalled due to a defective design, right?
But if you have a massive flotilla of self-driving cars that go apeshit, flipping over, slamming themselves into trees, and catching fire, and suddenly kill 10 million people overnight, around the world, due to an automatic hotfix maliciously kspliced into running kernels by an advanced actor, targeting in particular systems that are in motion, that's more than a little terrible. Note that the actor could be a synthetic entity or not.
So, if you think about the trouble with autonomous systems being Turing-complete, engaging in tasks of arbitrary complexity, there doesn't seem to be any bottom to the worst pitfall imaginable.
AI is unlike aviation in the sense that we can't trust a simple axiom like "what goes up, must come down."
We need to be pessimistic. Sometimes fear is healthy. Caveman logic sometimes am good. Fire hot!
collectivized|6 years ago