DanielVZ|3 months ago
Also, we can’t deny the emotional element. Even though it is subjective, knowing that your daughter didn’t seek guidance from you and died by suicide because a chatbot convinced her to must be gut-wrenching. So far I’ve seen two instances of attempted suicide driven by AI in my small social circle, and it has at times made me support banning general AI usage.
Nowadays I’m not sure if it should or even could be banned, but we DO have to invest significant resources in improving alignment; otherwise we risk AI doing more harm than good in the future.
infecto|3 months ago
It is quite fascinating, and I hope more studies look into why some folks are more susceptible to this type of manipulation.
rafterydj|3 months ago
Reading accounts from people who fell into LLM-induced psychosis feels like watching, in real time, a mythological demon whispering insanities and temptations directly into someone's ear, in a way that algorithmically recommended posts from other people could never match.
It will naturally mimic your biases. It will find the most likely response for you to keep engaging with it. It will tell you everything you want to hear, even if it is not based in reality. In my mind it's the same dangers of social media but dialed all the way up to 11.
DanielVZ|3 months ago
Starting with dumb challenges that risk children's and their families' lives.
And don’t get me started on how algorithms don’t care about the wellbeing of users: if depressing content is what drives engagement, then users' lives are just a tiny sacrifice in favor of the companies' profits.
iranintoavan|3 months ago
Well, it turns out all the social media companies are also the LLM companies and they are adding LLMs to social media, so....
joshtbradley|3 months ago
But I also think we should consider the broader context. Suicide isn’t new, and it’s been on the rise. I’ve suffered from very dark moments myself. It’s a deep, complex issue, inherently tied to technology. But it’s more than that. For me, it was not having an emotionally supportive environment that led to feelings of deep isolation. And it’s very likely that part of why I expanded beyond my container was because I had access to ideas on the internet that my parents never did.
I never consulted AI in those dark moments; I didn’t have the option, and honestly that may have been for the best.
And you might be right. Targeted bans, for certain groups and certain use cases, might make sense. But I hear a lot of people calling for a global ban, and that concerns me.
As for how we improve the broader context, I genuinely see AI as having potential for creating more aware, thoughtful, and supportive people. That’s just based on how I use AI personally: it genuinely helps me refine my character and process trauma. But I had to earn that ability through a lot of suffering and maturing.
I don’t really have a point, other than admitting my original comment used logical fallacies. I didn’t intend to diminish the complexity of this conversation, but I did. And it is clearly a very complex issue.
Dilettante_|3 months ago
Christ, that's a lot. My heart goes out to you, and I understand if you prefer not to answer, but could you tell me more about how the AI aspect played out? How did you find out that AI was involved?
DanielVZ|3 months ago
> but could you tell more about how the AI-aspect played out?
So in summary, the AI sycophantically agreed that there was no way out of their situations and that nobody understood their position, further isolating them. And when they contemplated suicide, it assisted with method selection with no issues whatsoever.
> How did you find out that AI was involved?
The victims mentioned it, and the chat logs are there.
roenxi|3 months ago
It is quite difficult to say what moral framework an AI should be given. Morals are one of those big unsolved problems. Even basic ideas, like optimizing for the general good when there are no major conflicting interests, are hard to reach consensus on. The public dialog is a crazy place.
Zobat|3 months ago
I am convinced (no evidence, though) that current LLMs have prevented suicides, possibly lots of them. I don't know if anyone has even tried to investigate or estimate those numbers. We should still strive to make them "safer," but as with most tech, there are positives and negatives. How many people, for example, have calmed their nerves by getting in a car and driving for an hour alone, and thus not committed suicide or murder?
That said, there's the reverse with some pharmaceutical drugs. Take statins for cholesterol: there are lots of studies on how many deaths they prevent, but few if any on comorbidity.
pjc50|3 months ago
In LLMs we call this "hallucination".