top | item 46639730

Fernicia | 1 month ago

OpenAI keeping 4o available in ChatGPT was, in my opinion, a sad case of audience capture. The outpouring from some subreddit communities showed how many people had been seduced by its sycophancy and had formed proto-social relationships with it.

Their blogpost about the 5.1 personality update a few months ago showed how much of a pull this section of their customer base had. Their updated response to someone asking for relaxation tips was:

> I’ve got you, Ron — that’s totally normal, especially with everything you’ve got going on lately.

How does OpenAI get it so wrong, when Anthropic gets it so right?

burnte|1 month ago

> How does OpenAI get it so wrong, when Anthropic gets it so right?

I think it's because of two different operating theories. Anthropic is making tools to help people and to make money. OpenAI has a religious zealot driving it because they think they're on the cusp of real AGI and these aren't bugs but signals they're close. It's extremely difficult to keep yourself in check, and I think Altman no longer has a firm grasp on what is possible today.

> The first principle is that you must not fool yourself, and you are the easiest person to fool. — Richard P. Feynman

realusername|1 month ago

I think even Altman himself must know the AGI story is bogus and is there to continue propping up the bubble.

embedding-shape|1 month ago

> How does OpenAI get it so wrong, when Anthropic gets it so right?

Are you saying people aren't having proto-social relationships with Anthropic's models? Because I don't think that's true. People seem to use ChatGPT, Claude, Grok, and some other specific services too, although ChatGPT seems the most popular. Maybe that just reflects general LLM usage, then?

Also, what is "wrong" here really? I feel like the whole concept is so new that it's hard to say for sure what is best for actual individuals. It seems like we ("humanity") are rushing into it, no doubt, and I guess we'll find out.

ryandrake|1 month ago

> Also, what is "wrong" here really?

If we're talking generally about people having parasocial relationships with AI, then yeah, it's probably too early to deliver a verdict. If we're talking about AI helping to encourage suicide, I hope there isn't much disagreement that this is a bad thing that AI companies need to get a grip on.

palmotea|1 month ago

> and had formed proto-social relationships with it.

I think the term you're looking for is "parasocial."