top | item 46502571

yousif_123123 | 1 month ago

What would be the cost for OpenAI to just stop these kinds of very long conversations that aren't about debugging or some actual long problem solving? It seems from the reports many people are being affected, some very very negatively, and many likely unreported. I don't understand why they don't show a warning or just open a new chat thread when a discussion gets too long or it can be detected that it's not fiction and likely veering into dangerous territory?

I don't know how this doesn't give pause to the ChatGPT team. Especially with their supposed mission to be helpful to the world etc.

Aurornis|1 month ago

> It seems from the reports many people are being affected

I think the rapid scale and growth of ChatGPT are breaking a lot of mental models about how common these occurrences are.

ChatGPT's weekly active user count is twice as large as the population of the United States. More people use ChatGPT than Reddit. The number of people using ChatGPT on a weekly basis is so massive that it's hard to even begin to understand how common these occurrences are. When they happen, they get amplified and spread far and wide.

The uses of ChatGPT and LLMs are very diverse. Calling for a shutdown of long conversations if they don't fit some pre-defined idea of problem solving is just not going to happen.

miltonlost|1 month ago

Ah, the old "we're too big to be able to not do evil things! we've scaled too much so now we can't moderate! Oh well, sucks to not be rich."

fragmede|1 month ago

Anthropic at least used to stop conversations cold when they reached the end of the context window, so it's entirely possible from a technical standpoint. That OpenAI chooses not to, and prefers to let the user continue on, increasing engagement, puts it on them.

michaelmrose|1 month ago

Incidence of harm is harms divided by population. It is likely that Facebook is orders of magnitude more harmful than ChatGPT, and that bathtubs and bikes are more dangerous than long LLM conversations.

That doesn't mean nothing more should be done, but we should retain perspective.

Maybe they should try to detect not long conversations but dangerous ones, spot-checking with an LLM to flag problems for human review, plus a family notification program.

E.g., Bob is a nut. We can find this out by having an LLM that hasn't been pre-prompted with Bob's crazy examine some of the chats of the top users by tokens consumed in chat (not API), flagging them to a human who cuts Bob off or, better, shunts him to a version designed to shut down his particular brand of crazy, e.g. one pre-prompted to tell him it's unhealthy.

This initial flag for review could also come from family or friends, and if OpenAI concurs it could be handled as above.

Likewise we could target posters of conspiracy theories for review and containment.
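
The spot-check idea above could be sketched roughly like this. Everything here is hypothetical: the names, the token threshold, and especially the keyword heuristic, which stands in for a real reviewer model that has seen none of the user's own prompting.

```python
# Hypothetical sketch: sample the heaviest chat (not API) users, run each
# transcript past a fresh reviewer with no exposure to the user's prompts,
# and queue hits for a human to look at.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    chat_tokens: int                      # tokens consumed in chat, not via the API
    transcripts: list = field(default_factory=list)

# Stand-in for the reviewer model; a real system would call an LLM
# classifier here, not match keywords.
RED_FLAGS = ("chosen one", "secret message", "they are watching")

def looks_dangerous(transcript: str) -> bool:
    text = transcript.lower()
    return any(flag in text for flag in RED_FLAGS)

def spot_check(users, top_n=2):
    """Return names of heavy users whose transcripts a human should review."""
    heaviest = sorted(users, key=lambda u: u.chat_tokens, reverse=True)[:top_n]
    return [u.name for u in heaviest
            if any(looks_dangerous(t) for t in u.transcripts)]

users = [
    User("bob", 900_000, ["You are the chosen one, Bob."]),
    User("alice", 500_000, ["Here is how to fix your Java code."]),
    User("carol", 100, ["They are watching"]),  # light user, never sampled
]
print(spot_check(users))  # only heavy users with flagged chats
```

Note that carol's chat would trip the heuristic but is never examined, because the sketch only samples the top users by token consumption, as the comment proposes.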

yousif_123123|1 month ago

> Calling for a shutdown of long conversations if they don't fit some pre-defined idea of problem solving is just not going to happen.

I am calling for some care to go into the product to try to reduce the occurrence of these bad outcomes. I just don't think it would be hard for them to detect that a conversation has reached a point where it's becoming very likely the user is delusional or may engage in dangerous behavior.

How will we handle AGI if we ever create it, if we can't protect our society from these basic LLM problems?

shwaj|1 month ago

It seems like a cheaper model could be asked to review transcripts with something like: "does this transcript seem at all like a wacky conspiracy theory being encouraged in the user by the LLM?"

In this case, it would have been easily detected. Depending on the prompt used, there would be more or fewer false positives and negatives, but low-hanging fruit such as this tragic incident should be avoidable.
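
The false positive/negative tradeoff matters more than it looks at this scale. A quick base-rate sketch, with every number made up for illustration, shows how even a seemingly low false-positive rate swamps human reviewers when genuinely dangerous chats are rare:

```python
# Illustrative base-rate arithmetic (all numbers assumed, not real data):
# a rare condition plus a small false-positive rate still floods review.
weekly_chats = 100_000_000      # assumed volume of transcripts sampled
prevalence = 1e-5               # assumed share of genuinely dangerous chats
sensitivity = 0.95              # assumed true-positive rate of the prompt
false_positive_rate = 0.01      # assumed, seemingly low, FPR

true_pos = weekly_chats * prevalence * sensitivity
false_pos = weekly_chats * (1 - prevalence) * false_positive_rate
precision = true_pos / (true_pos + false_pos)

print(f"flags for human review: {true_pos + false_pos:,.0f}")
print(f"precision: {precision:.2%}")  # the vast majority are false alarms
```

Under these assumptions, nearly a million chats get flagged each week and well under 1% of flags are real, which is why the prompt's threshold tuning is the hard part.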

jacquesm|1 month ago

I've had OpenAI's models do the weirdest things in conversations about aerodynamics and very low-level device drivers; I don't think you will be able to reach a solution just by limiting the subjects. It is incredible how strongly it tries to position itself as a thinking entity above its users, in the sense that it is handing out compliments all the time. Some people are more susceptible than others.

j2kun|1 month ago

> I don't know how this doesn't give pause to the ChatGPT team. Especially with their supposed mission to be helpful to the world etc.

Because the mission is a lie and the goal is profit. alwayshasbeen.jpg

wahnfrieden|1 month ago

Those remediations would pretty clearly negatively impact revenue. And the team gets paid a lot to do their current work as-is.

The way to get the team organized against something is to threaten their stock valuation (like when the workers organized against Altman's ousting). I don't see how cutting off users is going to do anything but drive the opposite reaction from the workers to the one you want.

gruez|1 month ago

>Those remediations would pretty clearly negatively impact revenue

That might make sense if OpenAI were getting paid per token for these chats, but people who are using ChatGPT as their therapist probably aren't using the consumption-based API. They might have a premium account, but what percentage of premium users do you think are using ChatGPT as their therapist and getting into long-winded chats?

paxys|1 month ago

The cost would be a very large chunk of OpenAI's business. People aren't using ChatGPT just to solve problems. It is a very popular tool for idle chatter, role playing, entertainment, friendship, therapy, and lots more. And OpenAI isn't financially incentivized to discourage this kind of use.

supermdguy|1 month ago

Looks like this would affect around 4.3% of chats (the "Self-Expression" category from this report[0]). Considering ChatGPT's userbase, that's an extremely large number of people, but less significant than I expected given all the talk about AI companionship. That said, a similar crowd was pretty upset when OpenAI removed 4o, and the backlash was enough to bring it back.

[0]: https://www.nber.org/system/files/working_papers/w34255/w342...
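
Rough arithmetic on the thread's own figures, treating the 4.3% chat share as a crude proxy for user share (an assumption, since share of chats and share of users need not match), and taking the earlier comment's "twice the US population" as roughly 2 × 340 million weekly users:

```python
# Back-of-envelope scale estimate using numbers claimed in this thread.
weekly_users = 2 * 340_000_000       # "twice the population of the United States"
self_expression_share = 0.043        # "Self-Expression" share from the NBER report

affected = weekly_users * self_expression_share
print(f"{affected:,.0f}")            # on the order of 29 million people
```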

zemo|1 month ago

> I don't know how this doesn't give pause to the ChatGPT team

a large pile of money

> What would be the cost for OpenAI to just stop these kinds of very long conversations

the aforementioned large pile of money

DocTomoe|1 month ago

Just because you do not use a piece of technology or see no use in a particular use-case does not make it useless. If you want your Java code repaired, more power to you, but do not cripple the tool for people like me who use ChatGPT for more introspective work which cannot be expressed in a tweet.

By the way, I would wager that 'long-form'-users are actually the users that pay for the service.

yousif_123123|1 month ago

> By the way, I would wager that 'long-form'-users are actually the users that pay for the service.

I think it may be the case that many of the people who commit suicide or do other dangerous things after encouragement from AI are actually using the weaker models available in the free versions. Whatever ability AI has to protect the user, it must be lower for the cheaper models that are freely available.

dr-detroit|1 month ago

I would bet that AI girlfriend is a top ten use case for LLMs

jacquesm|1 month ago

It is probably the top use case if you add the AI boyfriend option.

There are a lot of lonely people out there.

rhdunn|1 month ago

And role-playing in general.