item 43700931

mntruell | 10 months ago

(Cursor cofounder)

Apologies - something very clearly went wrong here. We’ve already begun investigating, and here are some very early results:

* Any AI responses used for email support are now clearly labeled as such. We use AI-assisted responses as the first filter for email support.

* We’ve made sure this user is completely refunded - the least we can do for the trouble.

For context, this user’s complaint was the result of a race condition that appears on very slow internet connections. The race leads to a bunch of unneeded sessions being created which crowds out the real sessions. We’ve rolled out a fix.
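A minimal sketch of how a race like this could crowd out real sessions (the session cap, store, and names here are all hypothetical illustrations, not Cursor's actual implementation): on a slow connection the client times out and retries, each retry creates a redundant session server-side, and a per-account cap then evicts the oldest session, logging the real device out.

```python
from collections import OrderedDict

MAX_SESSIONS = 3  # hypothetical per-account session cap

class SessionStore:
    """Toy per-account session store: once the cap is exceeded,
    the oldest session is evicted (i.e. that device is logged out)."""

    def __init__(self):
        self.sessions = OrderedDict()

    def create(self, session_id):
        self.sessions[session_id] = True
        while len(self.sessions) > MAX_SESSIONS:
            self.sessions.popitem(last=False)  # evict oldest first

store = SessionStore()
store.create("real-device")  # the user's legitimate session

# On a very slow connection, the client times out and retries;
# each retry still lands server-side and creates a redundant session.
for attempt in range(3):
    store.create(f"retry-{attempt}")

print("real-device" in store.sessions)  # → False: the real session was crowded out
```

The fix described above would presumably deduplicate the retries (or make session creation idempotent) so redundant sessions never enter the store.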

Appreciate all the feedback. Will help improve the experience for future users.


nextaccountic|10 months ago

Why did you remove this thread?

https://old.reddit.com/r/cursor/comments/1jyy5am/psa_cursor_...

(For reference, here it is in reveddit https://www.reveddit.com/v/cursor/comments/1jyy5am/psa_curso... - text from post was unfortunately not saved)

It was already locked, with a stickied comment from a dev clarifying what happened.

Did you remove it so people can't find out about this screwup when searching Google?

Anyway, if you acknowledge it was a mistake to remove the thread, could you please un-remove it?

PrayagS|10 months ago

The whole subreddit is moderated poorly. I’ve seen plenty of users post on r/LocalLlama about how something negative or constructive they said on the Cursor sub was just removed.

AyyEye|10 months ago

Why would anyone trust you?

The best case scenario is that you lied about having people answer support. LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive. Then you tried to control the narrative on reddit. So forgive me if I hit that big red DOUBT button.

Even in your post you call it "AI-assisted responses" which is as weaselly as it gets. Was it a chatbot response or was a human involved?

But 'a chatbot messed up' doesn't explain how users got locked out in the first place. EDIT: I see your comment about the race condition now. Plausible but questionable.

So the other possible scenario is that you tried to hose your paying customers, then blamed it on a bot when you saw the blowback.

'We missed the mark' is such a trope non-apology. Write a better one.

I had originally ended this post with "get real" but your company's entire goal is to replace the real with the simulated so I guess "you get what you had coming". Maybe let your chatbots write more crap code that your fake software engineers push to paying customers that then get ignored and/or lied to when they ask your chatbots for help. Or just lie to everyone when you see blowback. Whatever. Not my problem yet because I can write code well enough that I'm embarrassed for my entire industry whenever I see the output from tools like yours.

This whole "AI" psyop is morally bankrupt and the world would be better off without it.

PoignardAzur|10 months ago

> The best case scenario is that you lied about having people answer support. LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive.

Also, illegal in the EU.

jackaroe420|10 months ago

I don't know who you are but you said this so well!

Azeralthefallen|10 months ago

Hi, since I know you will never respond to this or hear this:

We spent almost 2 months fighting with you guys about basic questions any B2B SaaS should be able to answer us. Things such as invoicing, contracts, and security policies. This was for a low 6 figure MRR deal.

When your sales rep responded "I don't know" or "I will need to get back to you" for weeks to basic questions, it left us massively disappointed. Please do better. We have since moved to Copilot.

dspillett|10 months ago

> Any AI responses used for email support are now clearly labeled as such.

Because we all know how well people pay attention to such clear labels, even seasoned devs not just “end users”⁰.

Also, deleting public view of the issue (locking & hiding the reddit thread) tells me a lot about how much I should trust the company and its products, and as such I will continue to not use them.

--------

[0] though here the end users are devs

Snakes3727|10 months ago

I do truly love how you guys even went so far as to hide and lock the post on Reddit.

This person is not the only one experiencing this bug, as this thread has pointed out.

KennyBlanken|10 months ago

I wish more people realized that virtually any subreddit for a company or product is run by the company - either directly or via a firm that specializes in 'sentiment analysis and management' or whatever the marketdroids call it these days. Even if they don't remove posts via moderation, they'll just hammer it with downvotes from sockpuppet accounts.

HN goes a step further. It has a function that allows moderators to kill or boost a post by subtracting or adding a large amount to the post's score. HN is primarily a place for Y Combinator to hype their latest venture, and a "safe" place for other startups and tech companies.

patcon|10 months ago

Agreed, this is what's infuriating: insistence on control.

They will utterly fail to build for a community of users if they don't have anyone on hand who can tell them what a terrible idea that was.

To the cofounder: hire someone (ideally with some thoughtful reluctance around AI, who understands what's potentially lost in using it) who will tell you your ideas around this are terrible. Hire this person before you fuck up your position of benevolent leadership in this new field.

petesergeant|10 months ago

I dunno, that seems pretty reasonable to me simply for stopping the spread of misinformation. The main story will absolutely get written up by some smaller news sources, but is it really a benefit for someone facing a similar issue in the future to find an outdated and probably confusing Reddit post about it?

slotrans|10 months ago

> We use AI-assisted responses as the first filter for email support.

Literally no one wants this. The entire purpose of contacting support is to get help from a human.

fragmede|10 months ago

Sorta? I mean, I want my problem fixed, regardless of whether it's a person or not. Having a person listen to me complain about my problems might soothe my conscience, but if I can't pay my bill, or want to know why it was so high, having those questions answered by a system that has the context of my problem and is empowered to fix it, rather than talking to a brick wall? I wouldn't say totally fine, but at the end of the day, if my problem or my query is solved, even if it's weird, I can't say I really needed the voice on the other end of the phone to come from a human. If a company's business model isn't sustainable without AI agents, that's not really my problem, but if I'm using their product, presumably I don't want it to go away.

hartator|10 months ago

> For context, this user’s complaint was the result of a race condition that appears on very slow internet connections.

Seems like you are still blaming the user for his “very slow internet”.

How do you know the user's internet was slow? Couldn't a race condition like this also occur with two regular, fast internet connections competing for the same sessions?

Something doesn’t add up.

mritchie712|10 months ago

huh?

this is a completely reasonable and seemingly quite transparent explanation.

if you want a conspiracy, there are better places to look.

eranation|10 months ago

Side note... I'm a paying enterprise customer who moved my whole team to Cursor, and I have to say I'm considering canceling due to the nonexistent support. For example, Cursor will create new files instead of editing an existing one when you have a workspace with multiple folders in a monorepo...

geuis|10 months ago

Why in all of Hades would you force your entire eng org to use only one LLM provider? It's incredibly easy to run this stuff locally on 4+ year old hardware. Why is this even something you're spending company money on? Investor funds?

hakaneskici|10 months ago

Hi Michael,

Slightly related to this; I just wanted to ask whether all Cursor email inboxes are gated by AI agents? I've tried to contact Cursor via email a few times in the past, but haven't even received an AI response :)

Cheers!

mntruell|10 months ago

Not all of them (e.g. security@)! But our support system currently is. We are standing up a much bigger team here but are behind where we should be.

adenta|10 months ago

You’ve promised a ton of people refunds they never got. Others in this thread, myself included.

Edit: he did refund 22 mins after seeing this

krzat|10 months ago

You didn't get a refund because the promise of a refund was also hallucinated.

PoignardAzur|10 months ago

Maybe wait more than an hour before implying the refunds were a lie all along.

makingstuffs|10 months ago

Yeah I got asked for feedback and offered a refund when I cancelled. Never got any reply after. Guess it was AI slop

PUSH_AX|10 months ago

It's a real shame that your team deletes threads like this in instances where they have control (eg they are mods on the subreddit). Part of me wonders if you had a magic wand would you have just deleted this too, but you're forced to chime in now because you don't.

ach9l|10 months ago

So the actual implementation of the code to log people off was also a hallucination? The enforcement too? All the way to a production environment? Is this safe, or just a virtual scapegoat?

Ukv|10 months ago

To my understanding there weren't really distinct "implementation of the code to log people off" and "enforcement" - just a bug where previous sessions were being expired when a new one was created.

That an LLM then invented a reason when users asked why they were being logged out isn't that surprising. While not impossible, I don't think there's currently any indication that they intended to change policy and are just blaming it on a hallucination as a scapegoat.

ph4evers|10 months ago

Keep going! I love Cursor. Don’t let the haters get to you

redbell|10 months ago

> Any AI responses used for email support are now clearly labeled as such

Also, from the first comment in the post:

> Unfortunately, this is an incorrect response from a front-line AI support bot.

Well, this actually hurts... a lot! I believe one of the key pillars of making a great company is customer support, which represents the soul, the human part, of the company.

homefree|10 months ago

Thanks for the details and for replying here!

Don’t let the dickish replies get to you.

make3|10 months ago

Support emails shouldn't be AI. It's just so annoying. Put a human in the loop at least. This is a paying service, not a massive ad supported thing.

SCdF|10 months ago

> * Any AI responses used for email support are now clearly labeled as such. We use AI-assisted responses as the first filter for email support.

Don't use AI. Actually care. Like, take a step back, and realise you should give a shit about support for a paid product.

Don't get me wrong: AI is a very effective tool, *for doing things you don't care about*. I had to make a random docker compose change the other day. It's not production code, it's very obvious whether or not the AI output works, and I very rarely touch docker and don't care to become a super expert in it. So I prompted the change, it was good enough, and I ran with it.

You using AI for support tells me that you don't care about support. Which tells me whether or not I should be your customer.

petesergeant|10 months ago

There’s AI and there’s “AI”. This whole drama would have been avoided by returning links to FAQ entries found via embedding search, rather than trying to turn them into a textual answer, which — working with these systems all day — is madness.
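A toy sketch of the approach this comment suggests: retrieve and return the closest FAQ *link* instead of generating prose. The FAQ entries and links below are made up, and a trivial word-overlap cosine similarity stands in for real embedding vectors.

```python
import math
from collections import Counter

# Hypothetical FAQ index: question text -> canonical help-page link.
FAQ = {
    "Why was I logged out of my account?": "https://example.com/faq/logouts",
    "How do I update my billing details?": "https://example.com/faq/billing",
    "How do I cancel my subscription?": "https://example.com/faq/cancel",
}

def vectorize(text):
    """Trivial bag-of-words vector; a real system would use embeddings."""
    return Counter(text.lower().replace("?", "").split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_faq_link(query):
    """Return the most similar FAQ link, never a generated answer."""
    scored = [(cosine(vectorize(query), vectorize(q)), link)
              for q, link in FAQ.items()]
    return max(scored)[1]

print(best_faq_link("I keep getting logged out on every device"))
# → https://example.com/faq/logouts
```

Because the system can only hand back links that actually exist in the index, it cannot hallucinate a policy the way a generative answer can.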

thih9|10 months ago

> Don't use AI. Actually care.

I agree with this. Also, whenever I care about code, I don’t use AI. So I very rarely use AI assistants for coding.

I guess this is why Cursor is interested in making AI assistants popular everywhere, they don’t want the association that “AI assisted” means careless. Even when it does, at least with today’s level of AI.

throwawaysleep|10 months ago

The amount paid is still pretty trivial. I wouldn’t expect much human support for most SaaS products costing $20 a month.

charlietango592|10 months ago

Not trying to defend them, but I think it’s a problem of scaling up. The user base grew very quickly, and keeping up with the support inquiries must be a tough job. Hence the first line of defense is AI support replies.

I agree with you, they should care.

mindwok|10 months ago

They’re like a team of 10 people with thousands, if not hundreds of thousands of users. “Actually care” is not a viable path to success here.

nkrisc|10 months ago

> Any AI responses used for email support are now clearly labeled as such. We use AI-assisted responses as the first filter for email support.

And what’s a customer supposed to do with that information? Know that they can’t trust it? What’s the point then?

mrheosuper|10 months ago

Does your codebase use LLM ?

SpanishBrowne|10 months ago

cofounder or another bot stringing letters together?

kklt92|10 months ago

[deleted]

geuis|10 months ago

Or you could hire real people to actually answer real customer issues. Just an idea.