(no title)
yousif_123123 | 1 month ago
I am calling for some care to go into your product to reduce the occurrence of these bad outcomes. I just don't think it would be hard for them to detect that a conversation has reached a point where it's becoming very likely that the user is delusional or may engage in dangerous behavior.
How will we handle AGI if we ever create it, if we can't protect our society from these basic LLM problems?
sendes | 1 month ago
Talking to AI might be the very thing that keeps those tendencies below the dangerous threshold. Simply flagging long conversations would not address these problems, but AI learning how to talk to such users might.
tinfoilhatter | 1 month ago
Do you really think Sam or any of the other sociopaths running these AI companies care whether their product is causing harm to people? I surely do not.
[1] https://siepr.stanford.edu/news/what-point-do-we-decide-ais-...