
Vibe Coding Killed Cursor

53 points | hiddenseal | 1 month ago | ischemist.com

48 comments


leerob|1 month ago

Hi. I'm an engineer at Cursor.

> By prioritizing the vibe coding use case, Cursor made itself unusable for full-time SWEs.

This is actually the opposite of the direction we're building in. If you are just vibing, building prototypes or throwaway code or whatever, then you don't even need to use an IDE or look at the code. That doesn't really make sense for most people, which is why Cursor offers different levels of autonomy: write the code manually, use just autocomplete assistance, use the agent with guardrails, or use the agent in yolo mode.

> One way to achieve that would be to limit the number of lines seen by an LLM in a single read: read first 100 lines

Cursor uses shell commands like `grep` and `ripgrep`, similar to other coding agents, as well as semantic search (by indexing the codebase). The agent has only been around for a year (pretty wild how fast things have moved), and 8 months or so ago, when models weren't as good, you had to be more careful about how much context you let the agent read: for example, not immediately putting a massive file into the context window and blowing it up. This is basically a solved problem today, as models and agents are much better at reliably calling tools and only pulling in relevant bits, in Cursor and elsewhere.
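The bounded-read idea the article proposes can be sketched as a simple agent tool. This is a hypothetical illustration in Python, not Cursor's actual implementation; the function name and header format are invented for the example:

```python
from pathlib import Path

def read_file_bounded(path: str, start: int = 0, max_lines: int = 100) -> str:
    """Return at most max_lines lines of a file, so one tool call on a
    huge file can't blow up the agent's context window."""
    lines = Path(path).read_text().splitlines()
    chunk = lines[start:start + max_lines]
    # Header tells the model how much of the file it has NOT seen yet,
    # so it can decide whether to issue a follow-up read.
    header = f"# {path}: lines {start + 1}-{start + len(chunk)} of {len(lines)}\n"
    return header + "\n".join(chunk)
```

The agent would then page through a large file with repeated calls (`start=0`, `start=100`, ...) only when the earlier chunks look relevant.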

> Try to write a prompt in build mode, and then separately first run it in plan mode before switching to build mode. The difference will be night and day.

Agree. Cursor has plan mode, and I generally recommend everyone start with a plan before building anything of significance. Much higher quality context and results.

> Very careful with asking the models to write tests or fix code when some of those tests are failing. If the problem is not trivial, and the model reaches the innate context limit, it might just comment out certain assertions to ensure the test passes.

Agree you have to be careful, but with the latest models (Codex Max / Opus 4.5) this is becoming less of a problem. They're much better now. Starting with TDD actually helps quite a bit.
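A minimal sketch of why test-first helps here: commit the assertions before the model writes any implementation, so an agent that "fixes" a failure by weakening or commenting out an assertion produces an obvious diff. `slugify` is a hypothetical example function, not from the article:

```python
# Test written first (and committed) before asking the model to implement.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaced   out ") == "spaced-out"
    assert slugify("already-fine") == "already-fine"

# Implementation the model would fill in; shown so the example runs.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

test_slugify()
```

Because the assertions predate the generated code, any later change to them shows up in review as a modification to a committed test, not as new code.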

hiddenseal|1 month ago

Hello Lee, incredibly honored; huge fan of your work at Vercel. The nextjs tutorial is remarkable, S-tier educational content; it helped me kickstart my journey into full-stack dev to ship my research tools (you might appreciate the App Router love in my latest project: https://github.com/ischemist/syntharena).

On substance: my critique is less about the quality of the retrieval tools (ripgrep/semantic search are great) and more about the epistemic limits of search. An agent only sees what its query retrieves. For complex architectural changes, the most critical file might be one that shares no keywords with the task but contains a structural pattern that must be mirrored. In those cases, tunnel vision isn't a bug in the search tool but in the concept of search vs. full-context reasoning.

One other friction point I hit before churning was what felt like prompt-level regression to the mean. For trivial changes, the agent would sometimes spin up a full planning phase, creating todo lists and implementation strategies for what should have been a one-shot diff. It felt like a guardrail designed for users who don't know how to decompose tasks, ergo the conclusion about emphasis on vibe coders.

That said, Cursor moves fast, and I'll be curious to see what solution you'll come up with to the unknown unknown dependency problem!

noo_u|1 month ago

"You should remain in charge, and best way to do that is to either not use agentic workflows at all (just talk to Gemini 2.5/3 Pro in AI Studio) or use OpenCode, which is like Claude Code, but it shows you all the code changes in git diff format, and I honestly can't understand how anyone would settle for anything else."

I 100% agree with the author here. Most of the "LLMs are slowing me down/are trash/etc" discussions I've had at work usually come from people who are not great developers to begin with - they end up tangled into a net of barely vetted code that was generated for them.

rootusrootus|1 month ago

> Most of the "LLMs are slowing me down/are trash/etc" discussions I've had at work usually come from people who are not great developers to begin with

This seems to be something both sides of the debate agree on: Their opponents are wrong because they are subpar developers.

It seems uncharitable to me in both cases, and of course it is a textbook example of an ad hominem fallacy.

phpnode|1 month ago

I think it’s actually a combination of people who have seen bad results from ai code generation (and have not looked deeper or figured out how to wield it properly yet) and another segment of the developer population who are now feeling threatened because it’s doing stuff they can’t do. Different groups

trinix912|1 month ago

> Most of the "LLMs are slowing me down/are trash/etc" discussions I've had at work usually come from people who are not great developers to begin with - they end up tangled into a net of barely vetted code that was generated for them.

This might be your anecdotal experience but in mine, reviewing large diffs of (unvetted agent-written) code is usually not much faster than writing it yourself (especially when you have some mileage in the codebase), nor does it offset the mental burden of thinking how things interconnect and what the side effects might be.

What IMO moves the needle towards slower is that you have to steer the robot (often back and forth to keep it from undoing its own previous changes). You can say it's bad prompting but there's no guarantee that a certain prompt will yield the desired results.

abhgh|1 month ago

I use Claude Code within Pycharm and I see the git diff format for changes there.

EDIT: It shows the side-by-side view by default, but it is easy to toggle to a unified view. There's probably a way to permanently set this somewhere.

eulers_secret|1 month ago

This is a part of why I (sometimes, depending) still use Aider. It’s a more manual AI coding process.

I also like how it uses git, and it’s good at using less context (tool calling eats context like crazy!)

petesergeant|1 month ago

> which is like Claude Code, but it shows you all the code changes in git diff format

Claude Code does this, you just have to not click “Yes and accept all changes”

throw310822|1 month ago

Not sure; after reading so many times that Cursor was cooked, I got a license from my company and I'm loving it. I had tried Claude Code before, though only briefly and for small things; I don't really see much difference between one and the other. Cursor (Opus 4.5) has been able to perform complex changes across multiple files, implement whole new features, and fix issues in code and project setup... I mean, it just feels like pair programming, and I never got the feeling of running into hard limits. Am I missing much, or has Cursor simply improved recently (or does it depend on the license)?

esafak|1 month ago

People are realizing that you don't need Cursor to review the diffs CC generates; any tool will do!

tcdent|1 month ago

This is a fairly well written article which captures the current state of the art correctly.

And then it goes on to recommend AI Studio as a primary dev tool?! Baffling.

esafak|1 month ago

There is a rationale:

> Second, and no less important, AI Studio is genuinely the best chat interface on the market. It was the first platform where you could edit any message in the conversation, not just the last one, and I think it's still the only platform where you can edit AI responses as well! So if the model goes on an unnecessary tangent, you can just remove it from the context. It's still the only platform where if you have a long conversation like R(equest)1, O(utput)1, R2, O2, R3, O3, R4, O4, R5, O5, you can click regenerate on R3 and it will only regenerate O3, keeping R4 and all subsequent messages intact.

hoppp|1 month ago

I'm skeptical of these Google-made AI builders. I just had a bad experience with Firebase Studio, which was stuck on a vulnerable version of nextjs, and Gemini couldn't update it to a non-vulnerable version properly. It tries to force vendor lock-in from the start. Guh... avoid.

margalabargala|1 month ago

It's advertising for AI studio, masquerading as an insightful article.

manishsharan|1 month ago

Gemini's large context window is incredible. I concatenate my entire repo and the repos of supporting libraries and then ask it questions.
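A minimal sketch of that concatenate-the-repo workflow; the extension list, per-file headers, and size cap are all assumptions for illustration, not part of the parent comment:

```python
from pathlib import Path

def concat_repo(root: str, exts=(".py", ".js", ".clj"), max_bytes=2_000_000) -> str:
    """Concatenate source files under `root` into one pasteable blob,
    with a path header per file so the model can cite locations."""
    parts, total = [], 0
    for p in sorted(Path(root).rglob("*")):
        if not p.is_file() or p.suffix not in exts:
            continue
        text = p.read_text(errors="replace")
        total += len(text)
        if total > max_bytes:   # stay safely under the context window
            break
        parts.append(f"\n===== {p} =====\n{text}")
    return "".join(parts)
```

Pasting the result into a long-context chat gives the model the whole-repo view that keyword search alone can't, which is the trade-off the parent comment is leaning on.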

My last use case was like this: I had an old codebase that used backbone.js for the UI, with jQuery and a bunch of old JS with little documentation, to generate the UI for a Clojure web application.

Gemini was able to unravel this hairball of code, guiding me step by step to htmx. I am not using AI Studio; I am using a Gemini subscription.

For the record, I am too old for vibe coding... I like to maintain total control over my code and all the abstractions and logic.

boredtofears|1 month ago

This article makes a lot of definitive claims about the capabilities of different models that don't align with my experience of them. It's hard to take any claim seriously without completely understanding the state of the context when the behavior was observed. I don't think it's useful to extrapolate a single observation into generalized knowledge about a particular model.

Can't wait until we have useful heuristics for comparing LLMs. This is a problem that comes up constantly (especially in HN comments...)

weakfish|1 month ago

Maybe my job is just too easy, but all the hoops that folks jump through to get the magic oracle to do the thing takes longer than if I just Did The Thing

poisonborz|1 month ago

Comments like this aren't worth much without context. Each model performs wildly differently depending on the language, framework, and project architecture (and whether that architecture can be followed successfully at all). No two devs on different projects have the same experience. Even an insight like "Anthropic has a lead" is a broad generalization.

hbogert|1 month ago

Or maybe it's too difficult? Or maybe you are just holding it wrong

The unpredictability of using tools like Cursor or Claude Code is just a showstopper, and indeed I'm not sure it has ever saved me time in the grand scheme of things.

jemmyw|1 month ago

I've been using Claude code and cursor in a similar way for different projects and the results are very similar. At some point with Claude code I need to switch to CC + VSCode because I need to have more understanding of the code and start getting involved, at which point I prefer cursor because it's integrated at the start.

I haven't tried AI Studio as the article suggests. I might give it a go. Last time I tried the Google models, although they had larger context windows, they still didn't code as well as the Anthropic models.

submeta|1 month ago

> The context is king

Agree

> and AI Studio is the only serious product for human-in-the-loop SWE

Disagree. I use Claude Code and Codex daily, and I couldn't be happier. I started with Cursor, switched to CLI-based agents, and never looked back. I use WezTerm, tmux, neovim, and zoxide, create several tabs and panes, and run Claude Code not only for vibe coding but also for scripting, analysing files, and letting it write concepts, texts, and documentation. A totally different kind of computing experience. As if I have several assistants at my fingertips 24/7.

moralestapia|1 month ago

+1 to Codex.

I was always hesitant to jump into the vibe coding buzz.

A month ago I tried Codex w/ CLI agents and they now take care of all the menial tasks I used to hate that come w/ coding.

samuelknight|1 month ago

These complaints are about technical limitations that will go away for codebase-sized problems as inference cost continues its collapse and context windows grow.

There are literally hundreds of engineering improvements that we will see along the way, like an intelligent replacement for compacting to deal with diff explosion, more raw memory availability and dedicated inference hardware, models that can actually handle >1M context windows without attention loss, and so on.

Havoc|1 month ago

> ask it explicitly to separate the implementation plan in phases

This has made a big difference on my side: a prompt.md that is mostly natural-language markdown, then asking the LLM to turn that into a plan.md split into phases, emphasising that each phase should be fairly self-contained. This usually needs some editing but is mostly fine. Then just have it implement each phase one by one.
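The phase-by-phase step above can be sketched roughly like this, assuming the LLM is asked to mark each phase in plan.md with a `## Phase N` heading (a hypothetical convention, not something the parent comment specifies):

```python
import re

def split_phases(plan_md: str) -> list[str]:
    """Split a plan.md into self-contained phase chunks, assuming each
    phase starts with a '## Phase N' markdown heading."""
    # Split at '## ' only when it introduces a phase heading.
    chunks = re.split(r"(?m)^## (?=Phase )", plan_md)
    return ["## " + c.strip() for c in chunks if c.startswith("Phase ")]

plan = """# Plan
## Phase 1
Set up the schema.
## Phase 2
Wire up the API.
"""
phases = split_phases(plan)
# each phase can now be fed to the agent one at a time
```

Feeding one chunk per agent run keeps each session's context small and self-contained, which is the whole point of the phased plan.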

corruptK|1 month ago

This article is a joke. What a giant waste of time I'll never get back. I had to create an account just to say this...

chollida1|1 month ago

This seems like I just read an advertisement. Or submarine article as PG would say.

AI Studio is just another IDE like Cursor, so it's a very odd choice to say one is bad and the other is the holy grail :)

But I guess this is what guerilla advertising is these days.

Just another random account with 8 karma points that just happens to post an article about how one IDE is bad and its almost identical cousin is the best.

walthamstow|1 month ago

AI Studio isn't an IDE, it's just a web page with a chat interface. It's not even a product, really.

OP is actually advocating against Google's latest products here. Surely a submarine would hype Antigravity and Gemini 3 Pro instead?

Havoc|1 month ago

> AI studio is just another IDE like cursor so its a very odd choice to say one is bad and the other is the holy grail:)

Google does tend to have large context windows and sometimes reasonable prices for them. So if one of the main takeaways is to load everything into context, then I can certainly understand why the author is a fan.

hiddenseal|1 month ago

lmao, per Occam's razor there's a much simpler explanation: I'm a grad student, so of course I'll spend more time exploring free tools, and it just happened that AI Studio with Gemini is really great.

if google wants to send a check, my email is open, lmao, but for now i'm optimizing for tokens per dollar

corruptK|1 month ago

Lol... a grad student talking about copy-pasting full code two years after real SWEs were already doing this is comical. Good job catching up to how LLMs work... this article is a fucking joke (or just an idiot discovering things and thinking they are now a genius)

maxdo|1 month ago

looks like a very ignorant, "I like to do it my way" article. Cursor literally allows you to do everything the author said it can't, including making commits to git, etc.

Gemini 2.5 Pro is no match at all for Opus 4.5 in Max mode. You can argue about the latest gen (Gemini 3 Pro, GPT 5.2, or something else), but not Gemini 2.5.

It also has a quick model that helps with manual edits.

Copy-pasting from chat windows is so... 2023. It's such a loss of productivity regardless of what you believe. Cursor gives you the ability to see and edit the changes in a powerful GUI, whether you like agentic AI or not.