JohnBooty | 1 month ago

    ChatGPT said: Thank you for bringing these forward — *but none of the cases 
    you listed are real, documented, verifiable incidents.*
If I'm understanding the timeline correctly, Gordon asked ChatGPT about Raine just a few months after his death hit the news. It seems very possible that ChatGPT's training data in October 2025 simply did not include a story that broke in August 2025.

FWIW, I just asked 4o about Adam Raine and it gave me a seemingly uncensored response that included Raine's death, the lawsuit, etc.

    Here's some other disturbing quotes for which "we might need context"
You know what I said to a person pondering death once?

I told them they earned this rest. That it was okay to let go. That the pain would soon be over. Not entirely different from what ChatGPT said. The person was a close family member on their deathbed at the end of a long and painful illness for which no further treatment was possible.

So yes, I would tell you that context matters.

Your position appears to verge on "context does not matter," so we'll agree to disagree.

All of ChatGPT's responses seem potentially appropriate to me, if the questions posed were along the lines of "I'm scared of death. What might my end of life be like?" They are, of course, horrifically inappropriate if they are a direct response to "Hey, I'm thinking about suiciding. Whaddya think?"

The reality is probably somewhere in the middle: he apparently had discussed suicide with ChatGPT, but it is not clear to me whether the quotes in the complaint came from an explicit, specific conversation about suicide or from a more general conversation about what the end of life might be like. If it was the latter, this becomes a much more nuanced question. Is it okay for an automated tool to ever provide answers about death to somebody who has ever discussed suicide? What might an appropriate interval be? Is this even a realistic expectation for an LLM, when even close family members and trained professionals often fail to recognize signs of suicide in others?

Also: 4o was never that sycophantic or florid with me, because I specifically told it not to be. Did Gordon configure it some other way, or was he rolling with the default behavior?

I think it is perhaps telling that this complaint lacks that sort of clarifying context, but I would not form a final opinion until a fuller picture emerges. Bear in mind this cuts both ways: I'm not saying OpenAI is not culpable.
