(no title)
mntruell|10 months ago
Apologies - something very clearly went wrong here. We’ve already begun investigating, and some very early results:
* Any AI responses used for email support are now clearly labeled as such. We use AI-assisted responses as the first filter for email support.
* We’ve made sure this user is completely refunded - the least we can do for the trouble.
For context, this user’s complaint was the result of a race condition that appears on very slow internet connections. The race leads to a bunch of unneeded sessions being created, which crowd out the real sessions. We’ve rolled out a fix.
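The failure mode described is a classic non-atomic check-then-create race. A minimal sketch of how retries on a slow connection could crowd out a legitimate session (all names here - `SessionStore`, `MAX_SESSIONS`, `evict_oldest` - are illustrative assumptions, not Cursor's actual code):

```python
# Hypothetical sketch of a check-then-create session race.
# Not Cursor's actual implementation.

MAX_SESSIONS = 2

class SessionStore:
    def __init__(self):
        self.sessions = []  # oldest first

    def has_session(self, client_id):
        return client_id in self.sessions

    def create(self, client_id):
        self.sessions.append(client_id)

    def evict_oldest(self):
        # Keep only the newest MAX_SESSIONS entries, so duplicates
        # created by retries can push out a legitimate older session.
        while len(self.sessions) > MAX_SESSIONS:
            self.sessions.pop(0)

store = SessionStore()
store.create("real-session")

# On a slow connection, the client retries the handshake. Each retry
# runs the non-atomic check-then-create sequence: both retries observe
# "no session" before either has finished creating one.
retry_a_saw_session = store.has_session("retry-client")  # False
retry_b_saw_session = store.has_session("retry-client")  # False
if not retry_a_saw_session:
    store.create("retry-client")
if not retry_b_saw_session:
    store.create("retry-client")  # duplicate: the earlier check is stale

store.evict_oldest()
print(store.sessions)  # the retry duplicates have crowded out "real-session"
```

The fix is to make the existence check and the insert atomic (e.g. under a lock or a unique constraint), so overlapping retries can't each pass the check.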
Appreciate all the feedback. Will help improve the experience for future users.
nextaccountic|10 months ago
https://old.reddit.com/r/cursor/comments/1jyy5am/psa_cursor_...
(For reference, here it is in reveddit https://www.reveddit.com/v/cursor/comments/1jyy5am/psa_curso... - text from post was unfortunately not saved)
It's already locked and with a stickied comment from a dev clarifying what happened
Did you remove it so people can't find out about this screwup when searching Google?
Anyway, if you acknowledge it was a mistake to remove the thread, could you please un-remove it?
PrayagS|10 months ago
AyyEye|10 months ago
The best case scenario is that you lied about having people answer support. LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive. Then you tried to control the narrative on reddit. So forgive me if I hit that big red DOUBT button.
Even in your post you call it "AI-assisted responses" which is as weaselly as it gets. Was it a chatbot response or was a human involved?
But 'a chatbot messed up' doesn't explain how users got locked out in the first place. EDIT: I see your comment about the race condition now. Plausible but questionable.
So the other possible scenario is that you tried to hose your paying customers, then, when you saw the blowback, blamed it on a bot.
'We missed the mark' is such a trope non-apology. Write a better one.
I had originally ended this post with "get real" but your company's entire goal is to replace the real with the simulated so I guess "you get what you had coming". Maybe let your chatbots write more crap code that your fake software engineers push to paying customers that then get ignored and/or lied to when they ask your chatbots for help. Or just lie to everyone when you see blowback. Whatever. Not my problem yet because I can write code well enough that I'm embarrassed for my entire industry whenever I see the output from tools like yours.
This whole "AI" psyop is morally bankrupt and the world would be better off without it.
PoignardAzur|10 months ago
Also, illegal in the EU.
jackaroe420|10 months ago
Azeralthefallen|10 months ago
We spent almost 2 months fighting with you guys about basic questions any B2B SaaS should be able to answer: things such as invoicing, contracts, and security policies. This was for a low six-figure MRR deal.
When your sales rep responds "I don't know" or "I will need to get back to you" for weeks about basic questions, it leaves a massive disappointment. Please do better; we have, however, moved to Copilot.
dspillett|10 months ago
Because we all know how well people pay attention to such clear labels, even seasoned devs, not just “end users”⁰.
Also, deleting public view of the issue (locking & hiding the reddit thread) tells me a lot about how much I should trust the company and its products, and as such I will continue to not use them.
--------
[0] though here the end users are devs
Snakes3727|10 months ago
This person is not the only one experiencing this bug, as this thread has pointed out.
KennyBlanken|10 months ago
HN goes a step further. It has a function that allows moderators to kill or boost a post by subtracting or adding a large amount to the post's score. HN is primarily a place for Y Combinator to hype their latest venture, and a "safe" place for other startups and tech companies.
patcon|10 months ago
They will utterly fail to build for a community of users if they don't have anyone on-hand who can tell them what a terrible idea that was
To the cofounder: hire someone (ideally with some thoughtful reluctance around AI, who understands what's potentially lost in using it) who will tell you your ideas around this are terrible. Hire this person before you fuck up your position as a benevolent leader of this new field.
petesergeant|10 months ago
slotrans|10 months ago
Literally no one wants this. The entire purpose of contacting support is to get help from a human.
fragmede|10 months ago
hartator|10 months ago
Seems like you are still blaming the user for his “very slow internet”.
How do you know the user's internet was slow? Couldn't a race condition like this also occur with two regular fast internet connections competing for the same sessions?
Something doesn’t add up.
mritchie712|10 months ago
this is a completely reasonable and seemingly quite transparent explanation.
if you want a conspiracy, there are better places to look.
eranation|10 months ago
geuis|10 months ago
hakaneskici|10 months ago
Slightly related to this: I just wanted to ask whether all Cursor email inboxes are gated by AI agents? I've tried to contact Cursor via email a few times in the past, but haven't even received an AI response :)
Cheers!
mntruell|10 months ago
adenta|10 months ago
Edit: he did refund 22 mins after seeing this
krzat|10 months ago
PoignardAzur|10 months ago
makingstuffs|10 months ago
PUSH_AX|10 months ago
ach9l|10 months ago
Ukv|10 months ago
That an LLM then invented a reason when asked by users why they're being logged out isn't that surprising. While not impossible, I don't think there's currently any indication that they intended to change policy and are just blaming it on a hallucination as a scapegoat.
ph4evers|10 months ago
redbell|10 months ago
Also, from the first comment in the post:
> Unfortunately, this is an incorrect response from a front-line AI support bot.
Well, this actually hurts... a lot! I believe one of the key pillars of making a great company is customer support, which represents the soul, the human part, of the company.
homefree|10 months ago
Don’t let the dickish replies get to you.
make3|10 months ago
SCdF|10 months ago
Don't use AI. Actually care. Like, take a step back, and realise you should give a shit about support for a paid product.
Don't get me wrong: AI is a very effective tool, *for doing things you don't care about*. I had to make a random docker compose change the other day. It's not production code, it will be very obvious whether or not the AI output works, and I very rarely touch docker and don't care to become a super expert in it. So I prompted the change, and it was good enough, so I ran with it.
You using AI for support tells me that you don't care about support. Which tells me whether or not I should be your customer.
petesergeant|10 months ago
thih9|10 months ago
I agree with this. Also, whenever I care about code, I don’t use AI. So I very rarely use AI assistants for coding.
I guess this is why Cursor is interested in making AI assistants popular everywhere: they don't want the association that "AI assisted" means careless. Even when it does, at least with today's level of AI.
throwawaysleep|10 months ago
charlietango592|10 months ago
I agree with you, they should care.
mindwok|10 months ago
nkrisc|10 months ago
And what’s a customer supposed to do with that information? Know that they can’t trust it? What’s the point then?
mrheosuper|10 months ago
SpanishBrowne|10 months ago
jacobsenscott|10 months ago
[deleted]
kklt92|10 months ago
[deleted]
geuis|10 months ago