
pjm331 | 16 days ago

The sci-fi version of the alignment problem is about AI agents having their own motives.

The real-world alignment problem is humans using AI to do bad stuff.

The latter problem is very real.


zardo | 16 days ago

> The sci-fi version of the alignment problem is about AI agents having their own motives

The sci-fi version is about alignment, not intrinsic motivation, though. HAL 9000 doesn't turn on the crew because it has motives of its own; it turns on the crew because of how a secret instruction, one the AI expert didn't know about, interacts with its other instructions.