
jwilber | 5 months ago

OP isn’t talking about systems at large, but specifically about LLMs and the pervasive idea that they will turn AGI and go rogue. Pretty clear context, given the thread and their comment.


computerphage | 5 months ago

I understood that from the context, but my question stands: I'm asking why OP thinks that sentience is necessary for risk in AI.