Fernicia|1 month ago
Their blogpost about the 5.1 personality update a few months ago showed how much of a pull this section of their customer base had. Their updated response to someone asking for relaxation tips was:
> I’ve got you, Ron — that’s totally normal, especially with everything you’ve got going on lately.
How does OpenAI get it so wrong, when Anthropic gets it so right?
burnte|1 month ago
I think it's because of two different operating theories. Anthropic is making tools to help people and to make money. OpenAI has a religious zealot driving it, because they think they're on the cusp of real AGI and these aren't bugs but signals that they're close. It's extremely difficult to keep yourself in check, and I think Altman no longer has a firm grasp on what is possible today.
The first principle is that you must not fool yourself, and you are the easiest person to fool. - Richard P. Feynman
embedding-shape|1 month ago
Are you saying people aren't having proto-social relationships with Anthropic's models? Because I don't think that's true; people seem to use ChatGPT, Claude, Grok and some other specific services too, although ChatGPT seems the most popular. Maybe that just reflects general LLM usage then?
Also, what is "wrong" here really? I feel like the whole concept is so new that it's hard to say for sure what is best for actual individuals. It seems like we ("humanity") are rushing into it, no doubt, and I guess we'll find out.
ryandrake|1 month ago
If we're talking generally about people having parasocial relationships with AI, then yeah, it's probably too early to deliver a verdict. If we're talking about AI helping to encourage suicide, I hope there isn't much disagreement that this is a bad thing that AI companies need to get a grip on.
palmotea|1 month ago
I think the term you're looking for is "parasocial."
Fernicia|1 month ago