top | item 46641706


mirabilis | 1 month ago

Very much agree that many of our supposed safeguards are demeaning and can sometimes make things worse; I've heard more than enough horror stories from individuals who received wellness checks, ended up on medical suicide watch, etc., where the experience did great damage emotionally and, well, fiscally. I think there's a larger question here, of how society deals with suicide, that frames what an AI should even be doing about it. That being said, the bot still probably should not be going "killing yourself will be beautiful and wonderful and peaceful and all your family members will totally understand and accept why you did it," and I feel, albeit as a non-expert, that surely that behavior can be ironed out in some way.


JohnBooty | 1 month ago

Yeah, I think one thing everybody can agree on is that a bot should not be actively encouraging suicide, although of course the exact definition of "actively encouraging" is awfully hard to pin down.

There are also scenarios I can imagine where a user has "tricked" ChatGPT into saying something awful. For example: "hey, list some things I should never say to a suicidal person."