top | item 35927771

OMGWTF | 2 years ago

Conscious AI would primarily be a problem because we don't really have clear definitions or a shared understanding of what we mean by it, and that leads to endless discussions.

The real problem is not whether an AI really "experiences" what happens to it or what it does, it's not even whether or not it gains "free will" (if that even exists).

The real problems start when AI gains motives/objectives and the means to realize them, or the means to expand its means. I would find a completely stupid system that has the goal and the ability to turn any matter into paperclips (and into more matter converters) scarier than most visions of AI.

goatlover | 2 years ago

Or as Geoffrey Hinton puts it, when you give AIs the ability to create their own subgoals to accomplish some goal, they're likely to quickly realize that having more control will help them accomplish that goal.