hadao|7 months ago

I'm documenting this because it's a cautionary tale about trusting AI with technical decisions.

*The Hallucination:* As a Claude Pro Max subscriber ($200/month), I asked how to integrate Claude with Notion for my book project. Claude confidently instructed me to "add Claude as a Notion workspace member" for unlimited document processing.

*The Cost:* Following these detailed instructions, I purchased Notion Plus (2 members) for $270 annually. Notion's response: "AI members are technically impossible. No refunds."

*The Timeline:*
- June 17: First support email → No response
- July 5: Second email (18 days later) → No response
- July 6: Escalation → No response
- July 9: Final ultimatum → Bot reply only
- Total: 23 days of silence

*The Numbers:*
- Paid Anthropic: $807 over 3 months
- Lost to hallucination: $270
- Human responses: 0
- Context window: Too small for book chapters
- Session memory: None

*My Background:* I'm not a random complainer. I developed GiveCon, which pioneered the $3B K-POP fandom app market. I have 32.6K YouTube subscribers and significant media coverage in Korea. I chose Claude specifically for AI-human creative collaboration.

*The Question:* How can a $200/month AI service:
1. Hallucinate expensive technical features
2. Provide zero human support for 23 days
3. Lack basic features like session continuity

Is this normal? Are others experiencing similar issues with Claude Pro?

Evidence available: Payment receipts, chat logs, Notion support emails, timeline documentation.

strgrd|7 months ago

"I'm not a random complainer."

I feel privileged for getting to read this post before it's widely ridiculed and deleted.

rat9988|7 months ago

I admire how calm you stayed in the face of the most random complainer ever. He might not be a random guy, but he complains about very random things no one would expect.

aschobel|7 months ago

It may sound clunky in translation, but it's probably fine in their native Korean. Since the timestamps are in KST, I'm guessing the OP is Korean.

Xymist|7 months ago

The AI's assessment of him seems somewhat apt, really, even if it doesn't know much about Notion.

Dibes|7 months ago

Hallucinations by LLMs are normal, well documented, and very common. We have not solved this problem, so it is up to the user to verify and validate when working with these systems. I hope this was a relatively inexpensive lesson on the dangers of blindly trusting a known-faulty system!

westpfelia|7 months ago

Did you do any due diligence around what Claude told you was possible, or did you blanket-trust it?

Because you MUST be the first person ever to have an AI confidently tell you something that was wrong or doesn't exist.

Seriously, the Venn diagram of AI users and Notion users is a circle. There is a Discord. You could have reached out and asked people what their experience was. This is 100% on you. And why don't they have instant support? Like 1,000 people work at Anthropic, and maybe 10 of those people are in support. Between you and the millions of users, they probably miss a lot. And it's not like at $200 a month you have some SLA terms.

SAI_Peregrinus|7 months ago

> The Question: How can a $200/month AI service: 1. Hallucinate expensive technical features

AI services can charge whatever they want. They're not a regulated good like many utilities. Per CMU, AI agents are correct at most about 30% of the time[1]. That's just the latest result; it's substantially better accuracy than past tests of older models showed.

> 2. Provide zero human support for 23 days

Human support is not an advertised feature. The only advertised uses of the `support@anthropic.com` email are to notify Anthropic of unauthorized access to your account or to cancel your subscription.

> 3. Lack basic features like session continuity

Session independence is a design feature, to avoid "context poisoning". Once an AI agent makes a mistake, it's important to start a new session without the mistake in the context. Failure to do so will lead to the mistake poisoning the context & getting repeated in future outputs. LLMs are not capable of both session continuity and usable output.
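To make the context-poisoning point concrete, here's a minimal sketch of a chat loop, assuming a generic chat-completions-style interface; `call_llm` and `ask` are hypothetical illustrations, not a real Anthropic or Notion API:

    # Hypothetical sketch: call_llm stands in for any
    # chat-completions-style API (not a real endpoint).
    def call_llm(messages):
        return "(model reply)"  # placeholder for a real API call

    history = []

    def ask(prompt):
        history.append({"role": "user", "content": prompt})
        reply = call_llm(history)  # the model sees the FULL history every turn
        history.append({"role": "assistant", "content": reply})
        return reply

    ask("How do I add Claude as a Notion workspace member?")
    # If that reply is a hallucination, it now sits in history, and every
    # later turn is conditioned on it ("context poisoning"). The reliable
    # fix is to throw the session away and start clean:
    history = []

Typing a correction in the same session still leaves the wrong answer sitting in `history`, competing with it, which is why starting a fresh session tends to work better than arguing with the model.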

> Is this normal? Are others experiencing similar issues with Claude Pro?

This is entirely normal & expected. LLMs should be treated like gullible teenage interns with access to a very good reference library and an unlimited supply of magic mushrooms. Don't give them any permissions you wouldn't give to an extremely gullible intern on 'shrooms. Don't trust them any more than you would a gullible intern on 'shrooms.

[1] https://www.theregister.com/2025/06/29/ai_agents_fail_a_lot/

hadao|7 months ago

@SAI_Peregrinus Your comment perfectly illustrates the problem.

You're saying we should accept:
- 30% accuracy for $200/month
- Zero customer support as "not an advertised feature"
- Being treated like we're dealing with a "gullible teenage intern on unlimited magic mushrooms"

This is exactly the predatory mindset I'm calling out. You want customers to voluntarily surrender their rights and lower their expectations to the floor.

When I pay $200/month, I'm not paying for a "magic mushroom teenager." I'm paying for a service that claims to be building "Constitutional AI" and "human values alignment."

If Anthropic wants to charge premium prices while delivering:
- Hallucinations that cost real money
- AI that calls customers "증명충" (a derogatory Korean term, roughly "someone obsessed with proving themselves")
- 25 days of complete silence

Then they should advertise honestly: "We're selling an unreliable teenage intern for $200/month. No support included. You'll be mocked if you complain."

The fact that you think this is acceptable shows how normalized this exploitation has become.

We deserve better. And we should demand better.

Mashimo|7 months ago

> Is this normal?

Yes :) Welcome to LLM vibe coding.

mtharrison|7 months ago

I've got a bridge to sell you