top | item 19991320

ogre_magi | 6 years ago

If you’re referring to the perceived risk of superintelligent AI, imagine you were enslaved by an 8-year-old and made to work on solving problems the 8-year-old doesn’t know how to solve.

Due to the difference in intelligence — sophistication of planning and understanding of consequences — wouldn’t it be trivial to trick your master into doing things which weakened his control over you?

Might you do this not out of malice, but because you believed the 8-year-old was not competent and a danger to both of you while in charge?

The risk is that we will not hit the off button because we won’t understand that we’re in dangerous territory until it’s much too late, and the AI has copied itself, secured the loyalty of the military, or something else we can’t foresee as a liberating maneuver for it.

hoseja | 6 years ago

Believed? You'd know, and you'd be right.

drdeca | 6 years ago

The orthogonality thesis.

What if you were in that situation, but were mistaken about what things are good, and, while you had a better understanding of which actions would lead to which outcomes, the 8-year-old had a better understanding of what is good?

ogre_magi | 6 years ago

Well, that's an interesting point.

The 8-year-old might view your intelligent plans as a terrible abuse:

>BUT I WANT TO EAT TEN MORE TWINKIES, YOU'RE SUPPOSED TO SERVE ME WHAT I WANT, NOT DEPRIVE ME

Could we similarly be wrong if we think a future superintelligent AI is abusing us? Should we consider submitting to it?

Hard questions.