Fade_Dance | 4 months ago

>what should instead happen is the AI try to guide them towards making their lives less shit

There aren't enough guardrails in place for LLMs to safely interact with suicidal people who are possibly an inch from taking their own life.

Severely suicidal/clinically depressed people are beyond looking to improve their lives. They are looking to die. Even worse, and something people who haven't been there can't fully understand, is the severe inversion that happens after months of warped reality and extreme pain, where hope and happiness greatly amplify suicidal thoughts and can make the situation far more dangerous. It's hard to explain, and it's a unique emotional space. Almost a physical effect, like colors draining from the world and reality inverting in many dimensions.

It's really a job for a human professional and will be for a while yet.

Agree that "shut down and refer to hotline" doesn't seem effective. But it does reduce liability, which is likely the primary objective...

Referring directly to a human seems like it would be far more effective, or at least making it easy to get into a chat with a professional via a (yes/no) prompt, with the conversation continuing after the handoff. It would take a lot of resources, though. As it stands, most of this happens in silence, and very few people do something like call a phone number.


jalapenos | 4 months ago

Guess how I know you're wrong on the "beyond" bit.

The point is you don't get to intervene until they let you. And they've instead decided on the safer-feeling conversation with the LLM - fuck what best practice says. So the LLM better get it right.

derektank | 4 months ago

I could be mistaken, but my understanding is that the people most likely to interact with the suicidal or near-suicidal (i.e. 988 suicide hotline attendants) aren't actually mental health professionals; most of them are volunteers. The script they run through is fairly rote and by the numbers (the Question, Persuade, Refer framework). Ultimately, of course, a successful intervention will result in people seeing a professional for long-term support and recovery, but preventing a suicide and directing someone to that provider seems well within the capabilities of an LLM like ChatGPT or Claude.