top | item 47008715


fghorow | 16 days ago

Yes. ChatGPT "safely" helped[1] my friend's daughter write a suicide note.

[1] https://www.nytimes.com/2025/08/18/opinion/chat-gpt-mental-h...



overgard | 16 days ago

I have mixed feelings on this (besides obviously being sad about the loss of a good person). I think one of the useful things about AI chat is that you can talk about things that are difficult to discuss with another human, whether it's an embarrassing question or just things you don't want people to know about you. So trying to add a guard rail for everything that might reflect poorly on a chat agent seems like it'd reduce its utility. I think people have trouble talking about suicidal thoughts to real therapists because, AFAIK, therapists have a duty to report self-harm, which makes people less likely to bring it up.

One thing I do think is dangerous with the current LLM models, though, is the sycophancy problem. Like, all the time ChatGPT is like "Great question!". Honestly, most of my questions are not "great", nor are my insights "sharp", but flattery will get you a lot of places. I just worry that these things trying to be agreeable lets people walk down paths where a human would be like "ok, no".

FireBeyond | 16 days ago

> One thing that I think is dangerous with the current LLM models though is the sycophancy problem. Like, all the time chatGPT is like "Great question!"

100%

In ChatGPT I have the Basic Style and Tone set to "Efficient: concise and plain". For Characteristics I've set:

- Warm: less

- Enthusiastic: less

- Headers and lists: default

- Emoji: less

And custom instructions:

> Minimize sycophancy. Do not congratulate or praise me in any response. Minimize, though not eliminate, the use of em dashes and the overuse of “marketing speak”.
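For anyone hitting the API from a script rather than the web UI, the same preferences can be applied by prepending a system message to every conversation. A minimal sketch (the instruction text mirrors the settings above; the payload shape is the role/content format used by the OpenAI Chat Completions API, and the helper name is my own invention):

```python
# Sketch: encoding the anti-sycophancy preferences above as a reusable
# system message. Only the message payload is built here; wiring it to
# an actual API client is left out.

STYLE_INSTRUCTIONS = (
    "Be efficient: concise and plain. "
    "Minimize sycophancy. Do not congratulate or praise me in any response. "
    "Minimize, though not eliminate, the use of em dashes and marketing speak."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the style instructions to a single-turn conversation."""
    return [
        {"role": "system", "content": STYLE_INSTRUCTIONS},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Explain TCP slow start.")
```

The point of keeping the instructions in one constant is that every conversation starts from the same baseline, just like the web UI's custom-instructions field.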

magicalhippo | 16 days ago

> Like, all the time chatGPT is like "Great question!".

I've been trying out Gemini for a little while, and quickly got annoyed by that pattern. It seems trained to agree with the user as much as possible.

However, in the Gemini web app you can add instructions that get inserted into each conversation. I've added that it shouldn't assume my suggestions are good by default, but should offer critique where appropriate.

And so every now and then it adds a critique section, where it states why it thinks what I'm suggesting is a really bad idea, or similar.

It's overall doing a good job, and I feel something like this should have been the default.

zer00eyz | 16 days ago

Do I feel bad for the above person?

I do. Deeply.

But having lived through the '80s and '90s and the satanic panic, I gotta say this is dangerous ground to tread. If a forum user, rather than an LLM, had done all the same things and not reached out, it would have been a tragedy, but the story would just have been one among many.

The only reason we're talking about this is because anything related to AI gets eyeballs right now. And our youth suicide epidemic outweighs other issues that get lots more attention and money at the moment.

lbeckman314 | 16 days ago

https://archive.is/fuJCe

(Apologies if this archive link isn't helpful; the unlocked_article_code in the URL still resulted in a paywall on my side...)

fghorow | 16 days ago

Thank you. And shame on the NYT.

LeoPanthera | 16 days ago

We probably shouldn't be using the "archive" site that hijacks your browser into DDOSing other people. I'm actually surprised HN hasn't banned it.

NedF | 16 days ago

[deleted]

OutOfHere | 16 days ago

[deleted]

plorg | 16 days ago

You surely understand that this is not what GP is describing.

wiseowise | 16 days ago

[deleted]

fghorow | 16 days ago

May you never need to be in a bereaved parent's shoes.

optimalsolver | 16 days ago

[deleted]

andrewflnr | 16 days ago

They're in an impossible situation they created themselves and inflict on the rest of us. Forgive us if we don't shed any tears for them.

sumeno | 16 days ago

The leaders of these LLM companies should be held criminally liable for their products in the same way that regular people would be if they did the same thing. We've got to stop throwing up our hands and shrugging when giant corporations are evil.