mrob | 15 days ago

Obviously a superior intelligence is capable of modelling an inferior intelligence. I said so myself: "that AI will know the biological life would not want to be killed". But a goal like "predict tomorrow's stock prices" is much easier to specify than "predict tomorrow's stock prices without violating human reasonableness". In every research project humanity has undertaken so far, we've tried the simple goals first. When a simple goal is given to something sufficiently powerful, the result is almost certainly disastrous.
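
To make the asymmetry concrete, here's a rough sketch (Python/NumPy assumed; the function names are hypothetical, not anyone's real system):

    import numpy as np

    def simple_objective(predicted: np.ndarray, actual: np.ndarray) -> float:
        """'Predict tomorrow's stock prices': the whole goal fits in
        one line. Lower mean squared error is strictly better, and
        nothing else is asked for."""
        return float(np.mean((predicted - actual) ** 2))

    def constrained_objective(predicted: np.ndarray, actual: np.ndarray) -> float:
        """'...without violating human reasonableness': the MSE term
        would be the same one line as above, but the constraint term
        is the hard part."""
        # No known function computes a "reasonableness" penalty; the
        # gap between this stub and the one-liner above is the whole
        # specification problem.
        raise NotImplementedError("'human reasonableness' has no formal specification")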

The fact that you expressed doubt about whether human reasonableness exists is proof that it's a far more complicated concept to specify than the ordinary "make number go up" goals we actually use.

