top | item 45190498

carom | 5 months ago

Catastrophic AI risk is such a larp. The systems are not sentient. The risk will always be around the human driving the LLM, not the LLM itself. We already have laws governing human and company behavior. If an entity violates a law using an LLM, that has nothing to do with the LLM.

computerphage | 5 months ago

Why do you think systems need to be sentient to be risky?

jwilber | 5 months ago

OP isn’t talking about systems at large, but specifically about LLMs and the pervasive idea that they will become AGI and go rogue. Pretty clear context given the thread and their comment.