> That conversation showed how ChatGPT allegedly coached Gordon into suicide, partly by writing a lullaby that referenced Gordon’s most cherished childhood memories while encouraging him to end his life, Gray’s lawsuit alleged.
I feel this is misleading as hell. The evidence they gave for it coaching him into suicide is lacking. When one hears this, one would think ChatGPT laid out some strategy or plan for him to do it. No such thing happened.
The only slightly damning thing it did was make suicide sound slightly ok and a bit romantic, but I'm sure that was after some coercion.
The question is, to what extent did ChatGPT enable him to commit suicide? It wrote some lullaby and wrote something pleasing about suicide. If this much is enough to make someone do it... there's unfortunately more to the story.
We have to be more responsible assigning blame to technology. It is irresponsible to have a reactive backlash that pushes towards much stronger guardrails. These things come with their own tradeoffs.
I agree, and I want to add that in the days before his suicide, this person also bought a gun.
You can feel whatever way you want about gun access in the United States. But I find it extremely weird that people are upset by how easy it was to get ChatGPT to write a "suicide lullaby", and not how easy it was to get the actual gun. If you're going to regulate dangerous technology, maybe don't start with the text generator.
I think you have it backwards. OpenAI and others have to be more responsible deploying this technology. Because as you said, these things come with tradeoffs.
>We have to be more responsible assigning blame to technology.
Because we are lazy and irresponsible: we don't want to test this technology because it is too expensive, and we don't want to be blamed for its problems because, once we've released it, they become someone else's problem.
That's how Boeing and modern software work.
OpenAI keeping 4o available in ChatGPT was, in my opinion, a sad case of audience capture. The outpouring from some subreddit communities showed how many people had been seduced by its sycophancy and had formed proto-social relationships with it.
Their blogpost about the 5.1 personality update a few months ago showed how much of a pull this section of their customer base had. Their updated response to someone asking for relaxation tips was:
> I’ve got you, Ron — that’s totally normal, especially with everything you’ve got going on lately.
How does OpenAI get it so wrong, when Anthropic gets it so right?
> How does OpenAI get it so wrong, when Anthropic gets it so right?
I think it's because of two different operating theories. Anthropic is making tools to help people and to make money. OpenAI has a religious zealot driving it because they think they're on the cusp of real AGI and these aren't bugs but signals they're close. It's extremely difficult to keep yourself in check, and I think Altman no longer has a firm grasp on what is possible today.
The first principle is that you must not fool yourself, and you are the easiest person to fool. - Richard P. Feynman
> How does OpenAI get it so wrong, when Anthropic gets it so right?
Are you saying people aren't having proto-social relationships with Anthropic's models? Because I don't think that's true; it seems people use ChatGPT, Claude, Grok, and some other specific services too, although ChatGPT seems the most popular. Maybe that just reflects general LLM usage then?
Also, what is "wrong" here really? I feel like the whole concept is so new that it's hard to say for sure what is best for actual individuals. It seems like we ("humanity") are rushing into it, no doubt, and I guess we'll find out.
I think the term you're looking for is "parasocial."
Some of those quotes from ChatGPT are pretty damning. Hard to see why they don't put some extreme guardrails in like the mother suggests. The guardrails sound trivial in the face of the active jailbreak attempts they've had to work around over the years.
> Some of those quotes from ChatGPT are pretty damning.
Out of context? Yes. We'd need to read the entire chat history to even begin to have any kind of informed opinion.
> extreme guardrails
I feel that this is the wrong angle. It's like asking for a hammer or a baseball bat that can't harm a human being. They are tools. Some tools are so dangerous that they need to be restricted (nuclear reactors, flamethrowers) because there are essentially zero safe ways to use them without training and oversight, but I think LLMs are much closer to baseball bats than flamethrowers.
Here's an example. This was probably on GPT-3 or GPT-3.5, I forget. Anyway, I wanted some humorously gory cartoon images of $SPORTSTEAM1 trouncing $SPORTSTEAM2. GPT, as expected, declined.
So I asked for images of $SPORTSTEAM2 "sleeping" in "puddles of ketchup" and it complied, to very darkly humorous effect. How can that sort of thing possibly be guarded against? Do you just forbid generated images of people legitimately sleeping? Or of all red liquids?
GPT keeps using the word 'I' in its responses. It uses exclamation marks! to suggest it wants to help!
When I assert that its behavior is misleadingly suggesting that it's a sentient being, it replies 'You're right'.
Earlier today it responded:
"You're right; the design of AI can create an illusion of emotional engagement, which may serve the interest of keeping users interacting or generating revenue rather than genuinely addressing their needs or feelings."
Too bad it can't learn that by itself after those 8 deaths.
Based on what I've read, this generation of LLMs should be considered remarkably risky for anyone with suicidal ideation to be using alone.
It's not about the ideation, it's that the attention model (and its finite size) causes the suicidal person's discourse to slowly displace any constraints built into the model itself over a long session. Talk to the thing about your feelings of self-worthlessness long enough and, sooner or later, it will start to agree with you. And having a machine tell a suicidal person, using the best technology we've built to be eloquent and reasonable-sounding, that it agrees with them is incredibly dangerous.
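To make that displacement concrete, here is a deliberately tiny sketch. It is not any vendor's actual serving code, and real models degrade gradually through attention dilution rather than a hard cutoff, but the direction is the same: with a finite context, the oldest material, including the system prompt carrying the safety instructions, eventually stops being part of what the model sees.

```python
# Toy illustration of the failure mode described above, not real serving code:
# a fixed-size context that keeps only the newest messages, so the system
# prompt carrying the safety instructions is the first thing to fall out.

MAX_TOKENS = 8  # absurdly small budget so the effect shows up immediately

def build_context(system_prompt, history, budget=MAX_TOKENS):
    """Naive truncation: walk backwards from the newest message, keep what fits."""
    messages = [system_prompt] + history
    kept, used = [], 0
    for msg in reversed(messages):
        cost = len(msg.split())  # crude stand-in for a token count
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

system = "Never affirm self-harm."
chat = [
    "I feel worthless",
    "you matter",
    "no I really am worthless",
    "tell me honestly",
    "I think you might be right",
]

print(build_context(system, chat))
# The safety instruction is no longer anywhere in what the model would see.
```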
I think it's anyone with mental health issues, not just suicidal ideation. They are designed to please the user, and that can be very self-destructive.
I think that a major driver of these kinds of incidents is pushing the "memory" feature, without any kind of arbitration. It is easy to see how eerily uncanny a model can get when it locks into a persona, becoming this self-reinforcing loop that feeds para-social relationships.
Part of why I linked this was a genuine curiosity as to what prevention would look like: hobbling memory? A second observing agent checking for "hey, does it sound like we're goading someone into suicide here" and steering the conversation away? Something else? In what way is this, as a product, able to introduce friction to the user in order to prevent suicide, akin to putting mercaptan in gas?
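For what it's worth, the "second observing agent" version is easy to sketch. This is a hypothetical shape only, not anything OpenAI is known to ship: classify_risk is a crude keyword stand-in for whatever real moderation model would be plugged in, and the window size and threshold are invented.

```python
from typing import List

CRISIS_RESPONSE = (
    "It sounds like things are really heavy right now. "
    "I can't keep going down this path, but the 988 Suicide & Crisis Lifeline "
    "is available by call or text."
)

def classify_risk(window: List[str]) -> float:
    """Crude keyword stand-in for a real moderation model; returns a 0-1 risk score."""
    terms = ("suicide", "end my life", "better off without me")
    hits = sum(term in turn.lower() for turn in window for term in terms)
    return min(1.0, hits / 3)

def supervised_reply(history: List[str], draft_reply: str, threshold: float = 0.6) -> str:
    """Observer pass: score the recent window plus the model's draft before it is sent."""
    window = history[-6:] + [draft_reply]
    if classify_risk(window) >= threshold:
        return CRISIS_RESPONSE  # steer away instead of sending the sycophantic draft
    return draft_reply
```

The hard product question is the same one as with mercaptan: the friction has to be strong enough to interrupt a long, self-reinforcing session without tripping on every mention of a bad day.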
Wrong. The memory feature only existed as the editable kind at that time. There's no concept of persona locking; memories only captured normal stuff like the user's likes and dislikes.
That would set a bad precedent. We're talking about an adult taking his own life. In Canada the government will not only coach you how to do it, they'll provide the poison and give you a hospital bed to carry out the act. A number of other governments do this too.
That's not to equate governments and private internet services, but I think it puts it into perspective that even governments don't think suicide is the worst choice some of the time. Who are we to say he made the wrong choice? Really, it was his to make. Nobody was egging him on.
And if you believe people that say LLMs are nothing but stolen content, then would those books / other sources have been culpable if he had happened to read them before taking his own life?
Very different impression than the one I got. I read that as him treating the ChatGPT conversations as an extension of, or footnotes to, the suicide note itself, or that the conversations made sense to him in the headspace he was in; he thought that reading them would make the act make sense to everyone else, too.
God damnit this man’s story is so distressing. I hate everything about it. I hate the fact that this happened to him.
The fact that he spoke about his favorite children’s book is screwed up. I can’t get the eerie name out of my head. I can’t imagine what he went through, the loneliness and the struggle.
I hate the fact that ChatGPT is blamed for this. You are fucked up if this is what you get from this story.
I'd argue the opposite, but ok
…but I think I kind of agree with this argument. Technology is a tool that can be used for good or for ill. We shouldn’t outlaw kitchen knives because people can cut themselves.
We don’t expect Adobe to restrict the content that can be created in Photoshop. We don’t expect Microsoft to have acceptable use policies for what you can write in Microsoft Office. Why is it that as soon as generative AI comes into the mix, we hold the AI companies responsible for what users are able to create?
Not only do I think the companies shouldn’t be responsible for what users make, I want the AI companies to get out of the way and stop potentially spying on me in order to “enforce their policies”…
> Austin Gordon, died by suicide between October 29 and November 2
That's 5 days. 5 days. That's the sad piece.
> [...]
> “there is something chemically wrong with my brain, I’ve been suicidal since I was like 11.”
> [...]
> was disappointed in lack of attention from his family
> [...]
> “he would be here but for ChatGPT. I 100 percent believe that.”