item 12508450


csbrooks | 9 years ago

https://wiki.lesswrong.com/wiki/Paperclip_maximizer

"...a paperclip maximizer is an artificial general intelligence (AGI) whose goal is to maximize the number of paperclips in its collection... It would innovate better and better techniques to maximize the number of paperclips. At some point, it might convert most of the matter in the solar system into paperclips."

Also:

"Some goals apparently serve as a proxy or measure of human welfare, so that maximizing towards these goals seems to also lead to benefit for humanity. Yet even these would produce similar outcomes unless the full complement of human values is the goal. For example, an AGI whose terminal value is to increase the number of smiles, as a proxy for human happiness, could work towards that goal by reconfiguring all human faces to produce smiles, or tiling the solar system with smiley faces (Yudkowsky 2008)."


M_Grey | 9 years ago

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."(Yudkowsky)

inputcoffee | 9 years ago

Thanks for this clear and direct response. I see how it speaks directly to the first section. I thought I had addressed this critique in the last section, but apparently it wasn't enough.

soared | 9 years ago

Obviously we all know the paperclip example; you can't just quote it and call that good. Countless people have raised objections to it.

csbrooks | 9 years ago

To me, it seems to refute the argument in the article: that AI has to have some kind of "will" in order to be dangerous, and that because AI doesn't "desire" to take over the world, it can't harm us. I believe the paperclip example shows that isn't the case.