yousif_123123|1 month ago
I don't know how this doesn't give pause to the ChatGPT team. Especially given their supposed mission to be helpful to the world, etc.
Aurornis|1 month ago
I think ChatGPT's sheer scale and rapid growth are breaking a lot of mental models about how common these occurrences are.
ChatGPT's weekly active user count is twice as large as the population of the United States. More people use ChatGPT than Reddit. The number of people using ChatGPT on a weekly basis is so massive that it's hard to even begin to understand how common these occurrences are. When they happen, they get amplified and spread far and wide.
The uses of ChatGPT and LLMs are very diverse. Shutting down long conversations because they don't fit some pre-defined idea of problem solving is just not going to happen.
michaelmrose|1 month ago
That doesn't mean nothing more should be done, but we should retain perspective.
Maybe they should try to detect not long conversations but dangerous ones, by spot-checking with an LLM that flags problems for human review, plus a family notification program.
E.g. Bob is a nut. We can find this out by having an LLM that hasn't been pre-prompted by Bob's crazy examine some of the chats of the top users by tokens consumed in chat (not API), and flag them to a human who cuts Bob off or, better, shunts him to a version designed to shut down his particular brand of crazy, e.g. pre-prompted to tell him it's unhealthy. (Rough sketch below.)
This initial flag for review could also come from family or friends; if OpenAI concurs, handle as above.
Likewise we could target posters of conspiracy theories for review and containment.
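A minimal sketch of that spot-check pass, assuming OpenAI's Python SDK. The flagging prompt, the sampling strategy, and the three helpers at the bottom are all hypothetical, not anything OpenAI actually runs:

    # Hypothetical triage pass: a fresh LLM (not primed by the user's own
    # conversation) reads sampled transcripts and flags them for human review.
    from openai import OpenAI

    client = OpenAI()

    FLAG_PROMPT = (
        "You are reviewing a chat transcript for safety. Does the user show "
        "signs of delusional thinking, self-harm risk, or escalation toward "
        "dangerous behavior? Answer YES or NO, then one sentence of reasoning."
    )

    def spot_check(transcript: str) -> bool:
        """Return True if this conversation should go to a human reviewer."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # cheap model: this is triage, not a verdict
            messages=[
                {"role": "system", "content": FLAG_PROMPT},
                {"role": "user", "content": transcript},
            ],
        )
        return resp.choices[0].message.content.strip().upper().startswith("YES")

    # Sample the top chat (not API) users by tokens consumed; a flag routes
    # the account to a human, who can cut it off or shunt it to a
    # de-escalation-prompted variant. All three helpers below are made up.
    for user in top_chat_users_by_tokens(n=1000):
        for transcript in sample_conversations(user, k=3):
            if spot_check(transcript):
                queue_for_human_review(user, transcript)
                break

The expensive part here is the human review queue, not the detection pass itself.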
yousif_123123|1 month ago
I am calling for some care to go into the product to try to reduce the occurrence of these bad outcomes. I just don't think it would be hard for them to detect that a conversation has reached a point where it's very likely the user is becoming delusional or may engage in dangerous behavior (a rough check is sketched below).
How will we handle AGI, if we ever create it, when we can't protect our society from these basic LLM problems?
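For a sense of how little machinery a first pass would take: OpenAI's own moderation endpoint already scores self-harm signals per message. A minimal sketch using the Python SDK; the 0.5 cutoff and the escalation step are my assumptions, not product behavior:

    # Score one chat turn with OpenAI's moderation endpoint and escalate
    # when any self-harm category crosses an (assumed) cutoff.
    from openai import OpenAI

    client = OpenAI()

    def self_harm_risk(message: str, cutoff: float = 0.5) -> bool:
        scores = client.moderations.create(
            model="omni-moderation-latest",
            input=message,
        ).results[0].category_scores
        return max(scores.self_harm,
                   scores.self_harm_intent,
                   scores.self_harm_instructions) > cutoff

    # Run on every user turn of a long conversation; what happens on a flag
    # (human review, a safety-tuned flow) is the actual product decision.
    if self_harm_risk("example user message"):
        ...  # escalate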
shwaj|1 month ago
In this case, it would have been easily detected. Depending on the prompt used, there would be more or fewer false positives/negatives, but low-hanging fruit like this tragic incident should be avoidable.
j2kun|1 month ago
Because the mission is a lie and the goal is profit. alwayshasbeen.jpg
wahnfrieden|1 month ago
The way to get the team organized against something is to threaten their stock valuation (like when the workers organized against Altman's ousting). I don't see how cutting off users is going to do anything but drive the workers toward the opposite of the reaction you want.
gruez|1 month ago
That might make sense if OpenAI were getting paid per token for these chats, but people who are using ChatGPT as their therapist probably aren't using the consumption-based API. They might have a premium account, but what percentage of premium users do you think are using ChatGPT as their therapist and getting into long-winded chats?
zemo|1 month ago
a large pile of money
> What would be the cost for OpenAI to just stop these kinds of very long conversations
the aforementioned large pile of money
DocTomoe|1 month ago
By the way, I would wager that 'long-form' users are actually the ones who pay for the service.
yousif_123123|1 month ago
I think it may be that many of the people who commit suicide or do other dangerous things after being egged on by AI are actually using the weaker models available in the free versions. Whatever ability AI has to protect the user, it must be lower in the cheaper, freely available models.
jacquesm|1 month ago
There are a lot of lonely people out there.
rhdunn|1 month ago