
DanielVZ|3 months ago

I do think we need to be hyper focused on this. We do not need more ways for people to be convinced of suicide. This is a huge misalignment of objectives and we do not know what other misalignment issues are already more silently happening or may appear in the future as AI capabilities evolve.

Also we can’t deny the emotional element. Even though it is subjective, knowing that the reason your daughter didn’t seek guidance from you and died by suicide was that a chatbot convinced her to do so must be gut-wrenching. So far I’ve seen two instances of attempted suicide driven by AI in my small social circle, and at times it has made me support banning general AI usage.

Nowadays I’m not sure if it should or even could be banned, but we DO have to invest significant resources to improve alignment, otherwise we risk that in the future AI does more harm than good.


infecto|3 months ago

Hard question to answer imo but at a high level I would argue that social media for folks under 18 is even more harmful than LLMs.

It is quite fascinating and I hope more studies exist that look into why some folks are more susceptible to this type of manipulation.

rafterydj|3 months ago

Respectfully I disagree there. Social media is dangerous and corrosive to a healthy mind, but AI is like a rapidly adaptive cancer if you don't recognize it for what it is.

Reading accounts from people who fell into LLM-induced psychosis feels like watching a real-time mythological demon whisper insanities and temptations directly into someone's ear, in a way that algorithmically recommended posts from other people could never match.

It will naturally mimic your biases. It will find the most likely response for you to keep engaging with it. It will tell you everything you want to hear, even if it is not based in reality. In my mind it's the same dangers of social media but dialed all the way up to 11.

DanielVZ|3 months ago

Oh you are absolutely right. I’m not sure yet if it IS more harmful but it has had time to do so much more harm.

Starting with dumb challenges that put children's and their families' lives at risk.

And don’t get me started on how algorithms don’t care about the wellbeing of users: if it’s depressing content that drives engagement, users’ lives are just a tiny sacrifice in favor of the companies’ profits.

iranintoavan|3 months ago

"I would argue that social media for folks under 18 is even more harmful than LLMs."

Well, it turns out all the social media companies are also the LLM companies and they are adding LLMs to social media, so....

joshtbradley|3 months ago

I largely agree with what you’re saying. Certainly alignment should be improved to never encourage suicide.

But I also think we should consider the broader context. Suicide isn’t new, and it’s been on the rise. I’ve suffered from very dark moments myself. It’s a deep, complex issue, inherently tied to technology. But it’s more than that. For me, it was not having an emotionally supportive environment that led to feelings of deep isolation. And it’s very likely that part of why I expanded beyond my container was because I had access to ideas on the internet that my parents never did.

I never consulted AI in these dark moments; I didn’t have the option, and honestly that may have been for the best.

And you might be right. Pointed bans, for certain groups and certain use cases, might make sense. But I hear a lot of people calling for a global ban, and that concerns me.

Considering how we improve the broad context, I genuinely see AI as having potential for creating more aware, thoughtful, and supportive people. That’s just based on how I use AI personally, it genuinely helps me refine my character and process trauma. But I had to earn that ability through a lot of suffering and maturing.

I don’t really have a point. Other than admitting my original comment used logical fallacies, but I didn’t intend to diminish the complexity of this conversation. But I did. And it is clearly a very complex issue.

Dilettante_|3 months ago

>I’ve seen two instances of attempted suicide driven by AI in my small social circle

Christ, that's a lot. My heart goes out to you, and I understand if you prefer not to answer, but could you tell us more about how the AI aspect played out? How did you find out that AI was involved?

DanielVZ|3 months ago

I was going to write a full answer with all details but at some point it gets too personal so I’ll just answer the questions briefly.

> but could you tell more about how the AI-aspect played out?

So in summary, the AI sycophantically agreed that there was no way out of their situations and that nobody understood their position, further isolating them. And when they contemplated suicide, it assisted with method selection without any hesitation whatsoever.

> How did you find out that AI was involved?

The victims mentioned it and the chat logs are there.

delaminator|3 months ago

Did you know that 5% of all deaths in Canada are by elective suicide?

david-gpu|3 months ago

By elderly people who are already dying from natural causes and ask for a medically assisted death instead of unnecessarily prolonging their suffering. It is telling that so many people who suffer choose a dignified death once they are legally allowed to.

namibj|3 months ago

One could argue that number should be close to 100%, as people would live to an old age where eventually the body is just too worn to continue a good life.

scotty79|3 months ago

On one hand it shows terrible inadequacies in Canadian health care. On the other, would it be better to force people to suffer until the natural end of lives made terrible by those inadequacies? Healthcare won't get significantly better soon enough for them anyway. It seems better to "discover" what percentage of people want to end their lives under current conditions, and then improve those conditions to lower that percentage. That might be a very powerful measure of how well we are doing, with the added benefit of not forcing suffering people to suffer longer.

roenxi|3 months ago

There are a lot of edge cases where suicide is rational. The experience of watching an 80 year old die over the course of a month or few can be quite harrowing from the reports I've had from people who've witnessed it; most of whom talk like they'd rather die in some other way. It's a scary thought, but we all die and there isn't any reason it has to be involuntary all the way to the bitter end.

It is quite difficult to say what moral framework an AI should be given. Morals are one of those big unsolved problems. Even basic ideas like maybe optimising for the general good if there are no major conflicting interests are hard to come to a consensus on. The public dialog is a crazy place.

cowsandmilk|3 months ago

The stories coming out are about convincing high school boys with impressionable brains into committing suicide, not about having intellectual conversations with 80 year olds about whether suicide to avoid gradual mental and physical decline makes sense.

Zobat|3 months ago

> We do not need more ways for people to be convinced of suicide.

I am convinced (no evidence though) that current LLMs have prevented suicides, possibly many. I don't know if anyone has even tried to investigate or estimate those numbers. We should still strive to make them "safer", but with most tech there are positives and negatives. How many people, for example, have calmed their nerves by getting in a car and driving for an hour alone, and thus not committed suicide or murder?

That said, there's the reverse problem with some pharmaceutical drugs. Take statins for cholesterol: lots of studies on how many deaths they prevent, few if any on comorbidity.

pjc50|3 months ago

> convinced (no evidence though)

In LLMs we call this "hallucination".

Peritract|3 months ago

Why are you convinced?