top | item 44510571

hadao | 7 months ago

@SAI_Peregrinus Your comment perfectly illustrates the problem.

You're saying we should accept:

- 30% accuracy for $200/month
- Zero customer support as "not an advertised feature"
- Being treated like we're dealing with a "gullible teenage intern on unlimited magic mushrooms"

This is exactly the predatory mindset I'm calling out. You want customers to voluntarily surrender their rights and lower their expectations to the floor.

When I pay $200/month, I'm not paying for a "magic mushroom teenager." I'm paying for a service that claims to be building "Constitutional AI" and "human values alignment."

If Anthropic wants to charge premium prices while delivering:

- Hallucinations that cost real money
- AI that calls customers "증명충" (a Korean insult, roughly "proof-demanding pest")
- 25 days of complete silence

Then they should advertise honestly: "We're selling an unreliable teenage intern for $200/month. No support included. You'll be mocked if you complain."

The fact that you think this is acceptable shows how normalized this exploitation has become.

We deserve better. And we should demand better.

SAI_Peregrinus | 7 months ago

It's an LLM. It's as in touch with reality as a teenage intern on magic mushrooms, by design. LLMs have no senses and no contact with the outside world except their chat-box context window and the occasional training of a new model. They hallucinate because that's all they can do, just as a human locked in a sensory deprivation tank with nothing but a chat box would hallucinate. All output of an LLM is a hallucination; some of it just happens to align with reality.

I want people to stop paying these asshats $200/month, not to accept it blindly. I want people to understand that if support isn't advertised (no support link on the home page), that means there is no support. I want people not to trust LLMs blindly, and not to fall for scams. I don't expect to get what I want.