yousif_123123 | 1 month ago

> Calling for a shutdown of long conversations if they don't fit some pre-defined idea of problem solving is just not going to happen.

I am calling for some care to go into your product to try to reduce the occurrence of these bad outcomes. I just don't think it would be hard for them to detect that a conversation has reached a point where it's becoming very likely the user is becoming delusional or may engage in dangerous behavior.
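
For concreteness, a minimal sketch of the kind of check I mean is below. Everything in it is hypothetical: score_turn stands in for a real trained risk classifier, and the window size and threshold are made-up numbers, not anything any vendor actually uses.

    from collections import deque

    def score_turn(text: str) -> float:
        """Placeholder risk score in [0, 1].
        A real system would use a trained classifier here, not keywords."""
        risky = ("chosen one", "they are watching me", "no one else understands")
        return 1.0 if any(p in text.lower() for p in risky) else 0.0

    class ConversationMonitor:
        """Rolling mean of per-turn risk scores; flags when it crosses a threshold."""

        def __init__(self, window: int = 10, threshold: float = 0.6):
            self.scores = deque(maxlen=window)  # only the last `window` turns count
            self.threshold = threshold

        def add_turn(self, text: str) -> bool:
            """Score one user turn; True means the conversation should be escalated."""
            self.scores.append(score_turn(text))
            return sum(self.scores) / len(self.scores) >= self.threshold

And a flag would not have to mean a shutdown: the product could inject a grounding system message, surface crisis resources, or route the conversation for human review.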

How will we handle AGI if we ever create it, if we can't protect our society from these basic LLM problems?

sendes | 1 month ago

> it's becoming very likely the user is becoming delusional or may engage in dangerous behavior.

Talking to an AI might be the very thing that keeps those tendencies below the threshold of danger. Simply flagging long conversations would not address these problems, but an AI learning how to talk to such users might.

tinfoilhatter | 1 month ago

In June 2015, Sam Altman told a tech conference, “I think that AI will probably, most likely, sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning.” [1]

Do you really think Sam or any of the other sociopaths running these AI companies care whether their product is causing harm to people? I surely do not.

[1] https://siepr.stanford.edu/news/what-point-do-we-decide-ais-...