top | item 43959710

Ask HN: Cursor or Windsurf?

316 points | skarat | 9 months ago

Things are changing so fast with these vscode forks I'm barely able to keep up. Which one are you guys using currently? How does the autocomplete etc. compare between the two?

398 comments



danpalmer|9 months ago

Zed. They've upped their game in the AI integration and so far it's the best one I've seen (outside of work). Cursor and VSCode+Copilot always felt slow and janky; Zed is much less janky and feels like pretty mature software, and I can just plug in my Gemini API key and use that for free/cheap instead of paying for the editor's own integration.

vimota|9 months ago

I gave Zed an in-depth trial this week and wrote about it here: https://x.com/vimota/status/1921270079054049476

Overall Zed is super nice and the opposite of janky, but I still found a few defaults that were off, and Python support was still missing in a few key ways for my daily workflow.

submeta|9 months ago

Consumes lots of resources on an M4 MacBook. I'd love to test it though, if it didn't freeze my MacBook.

Edit:

With the latest update to 0.185.15 it works perfectly smooth. Excellent addition to my setup.

xmorse|9 months ago

I am using Zed too, it still has some issues but it is comparable to Cursor. In my opinion they iterate even faster than the VSCode forks.

charlie0|9 months ago

Why are the Zed guys so hung up on UI rendering times...? I don't care that the UI can render at 120 FPS if it takes 3 seconds to get input from an LLM. I do like the clean UI though.

allie1|9 months ago

I just wish they'd release a debugger already. Once it's done I'll be moving to them completely.

frainfreeze|9 months ago

Zed doesn't even run on my system and the relevant github issue is only updated by people who come to complain about the same issue.

wellthisisgreat|9 months ago

Does it have Cursor’s “tab” feature?

nlh|9 months ago

I use Cursor as my base editor + Cline as my main agentic tool. I have not tried Windsurf so alas I can't comment here but the Cursor + Cline combo works brilliantly for me:

* Cursor's Cmd-K edit-inline feature (with Claude 3.7 as my base model there) works brilliantly for "I just need this one line/method fixed/improved"

* Cursor's tab-complete (née Supermaven) is great and better than any other I've used.

* Cline w/ Gemini 2.5 is absolutely the best I've tried when it comes to full agentic workflow. I throw a paragraph of idea at it and it comes up with a totally workable and working plan & implementation

Fundamentally, and this may be my issue to get over and not actually real, I like that Cline is a bring-your-own-API-key system and an open source project, because their incentives are to generate the best prompt, max out the context, and get the best results (because everyone working on it wants it to work well). Cursor's incentive is to get you the best results... within their budget ($0.05 per request for the max models, and within your monthly spend/usage allotment for the others). That means they're going to try to trim context, drop things, or do other clever/fancy cost-saving techniques for Cursor, Inc. That's at odds with getting the best results, even if it only adds minor friction.

machtiani-chat|9 months ago

Just use Codex and Machtiani (mct). Both are open source. Machtiani was open sourced today. Mct can find context in a haystack, and it's efficient with tokens. Its embeddings are locally generated because of its hybrid indexing and localization strategy. No file chunking. No internet, if you want to be hardcore. Use any inference provider, even local. The demo video shows solving an issue in the VSCode codebase (133,000 commits and over 8,000 files) with only Qwen 2.5 Coder 7B. But you can use anything you want, like Claude 3.7. I never max out context in my prompts - not even close.

https://github.com/tursomari/machtiani

richardreeze|9 months ago

How much do you (roughly, per month) pay for Gemini's API? That's my main concern with switching to "bring your own API keys" tools.

abhinavsharma|9 months ago

Totally agree on aligning with the one with clearest incentives here

masterjack|9 months ago

I also like Cline since it being open source means that while I’m using it I can see the prompts and tools and thus learn how to build better agents.

pj_mukh|9 months ago

Cline's agent work is better than Cursor's own?

fastball|9 months ago

For the agentic stuff I think every solution can be hit or miss. I've tried claude code, aider, cline, cursor, zed, roo, windsurf, etc. To me it is more about using the right models for the job, which is also constantly in flux because the big players are constantly updating their models and sometimes that is good and sometimes that is bad.

But I daily drive Cursor because the main LLM feature I use is tab-complete, and here Cursor blows the competition out of the water. It understands what I want to do next about 95% of the time when I'm in the middle of something, including comprehensive multi-line/multi-file changes. Github Copilot, Zed, Windsurf, and Cody aren't at the same level imo.

solumunus|9 months ago

If we’re talking purely auto complete I think Supermaven does it the best.

joelthelion|9 months ago

Aider! Use the editor of your choice and keep your coding assistant separate. Plus, it's open source and will stay that way, so there's no risk of seeing it suddenly become expensive or disappear.

mbanerjeepalmer|9 months ago

I used to be religiously pro-Aider. But after a while, the little frictions of flicking back and forth between the terminal and VS Code, and adding and dropping files from the context myself, wore down my appetite to use it. The `--watch` mode is a neat solution but harms performance: the LLM gets distracted by deleting its own comment.

Roo is less solid but better-integrated.

Hopefully I'll switch back soon.

Oreb|9 months ago

Approximately how much does it cost in practice to use Aider? My understanding is that Aider itself is free, but you have to pay per token when using an API key for your LLM of choice. I can look up for myself the prices of the various LLMs, but it doesn't help much, since I have no intuition whatsoever about how many tokens I am likely to consume. The attraction of something like Zed or Cursor for me is that I just have a fixed monthly cost to worry about. I'd love to try Aider, as I suspect it suits my style of work better, but without having any idea how much it would cost me, I'm afraid of trying.

aitchnyu|9 months ago

Yup, choose your model and pay as you go, like commodities such as rice and water. The others played games with me to minimize context and use cheaper models (three usage modes, daily credits, steering away from the most expensive model, etc.).

Also, the --watch mode is the most productive interface for using your editor: no need for extra textboxes with robot faces.

jbellis|9 months ago

I love Aider, but I got frustrated with its limitations and ended up creating Brokk to solve them: https://brokk.ai/

Compared to Aider, Brokk

- Has a GUI (I know, tough sell for Aider users but it really does help when managing complex projects)

- Builds on a real static analysis engine, so its equivalent of the repo map doesn't get hopelessly confused in large codebases

- Has extremely useful git integration (view git log, right click to capture context into the workspace)

- Is also OSS and supports BYOK

I'd love to hear what you think!

benterix|9 months ago

For daily work - neither. They basically promote the style of work where you end up with mediocre code that you don't fully understand, and with time the situation gets worse.

I get much better results by asking specific questions of a model that has huge context (Gemini) and analyzing the generated code carefully. That's the opposite of the style of work you get with Cursor or Windsurf.

Is it less efficient? If you are paid by LoCs, sure. But for me the quality and long-term maintainability are far more important. And especially the Tab autocomplete feature was driving me nuts, being wrong roughly half of the time and basically just interrupting my flow.

mark_l_watson|9 months ago

I agree! I like local tools, mostly, use Gemini 2.5 Pro when actually needed and useful, and do a lot of manual coding.

scottmas|9 months ago

But how do you dump your entire code base into Gemini? Literally all I want is a good model with my entire code base in its context window.

pembrook|9 months ago

For a time Windsurf was way ahead of Cursor in full agentic coding, but now I hear Cursor has caught up. I have yet to switch back to try Cursor again, but I'm starting to get frustrated with Windsurf being restricted to gathering context only 100-200 lines at a time.

So many of the bugs and poor results that it can introduce are simply due to improper context. When forcibly giving it the necessary context you can clearly see it’s not a model problem but it’s a problem with the approach of gathering disparate 100 line snippets at a time.

Also, it struggles with files over 800-ish lines, which is extremely annoying.

We need some smart deepseek-like innovation in context gathering since the hardware and cost of tokens is the real bottleneck here.

evolve2k|9 months ago

Wait, are these files 800 lines of code? Am I the only one seeing that as a major code smell? Assuming these are code files, the issue is not AI processing power but rather bread-and-butter coding practices around file organisation and modularisation.

falleng0d|9 months ago

You can use the filesystem MCP server and have it use the read-file tool to read the files in full on demand.

erenst|9 months ago

I’ve been using Zed Agent with GitHub Copilot’s models, but with GitHub planning to limit usage, I’m exploring alternatives.

Now I'm testing Claude Code’s $100 Max plan. It feels like magic - editing code and fixing compile errors until it builds. The downside is I’m reviewing the code a lot less since I just let the agent run.

So far, I’ve only tried it on vibe coding game development, where every model I’ve tested struggles. It says “I rewrote X to be more robust and fixed the bug you mentioned,” yet the bug still remains.

I suspect it will work better for backend web development I do for work: write a failing unit test, then ask the agent to implement the feature and make the test pass.

Also, give Zed’s Edit Predictions a try. When refactoring, I often just keep hitting Tab to accept suggestions throughout the file.

energy123|9 months ago

Can you say more to reconcile "It feels like magic" with "every model I’ve tested struggles."?

seabass|9 months ago

Zed's agentic editing with Claude 3.7 + thinking does what you're describing testing out with the $100 Claude Code tool. Why leave the Zed editor and pay more to do something you can run for free/cheap within it instead?

victorbjorklund|9 months ago

I'm with Cursor for the simple reason that it is, in practice, unlimited. Honestly, the slow requests after 500 per month are fast enough. Will I stay with Cursor? No, I'll switch the second something better comes along.

mdrzn|9 months ago

Same. Love the "slow but free" model. I hope they can continue providing it; I love paying only $20/m instead of paying by usage.

I've been building SO MANY small apps and web apps in the latest months, best $20/m ever spent.

xiphias2|9 months ago

I'm on Cursor with Claude 3.7.

Somehow other models don't work as well with it. "auto" is the worst.

Still, I hate it when it deletes all my unit tests to "make them pass".

geor9e|9 months ago

I wish it was unlimited for me. I got 500 fast requests, about 500 slow requests, then at some point it started some kind of exponential backoff, and became unbearably slow. 60+ second hangs with every prompt, at least, sometimes 5 minutes. I used that period to try out windsurf, vscode copilot, etc and found they weren't as good. Finally the month refreshed and I'm back to fast requests. I'm hoping they get the capacity to actually become usably unlimited.

rvnx|9 months ago

Cursor is acceptable because for the price it's unbeatable. Free, unlimited requests are great. But by itself, Cursor is not anything special. It's only interesting because they pay for Claude or Gemini out of their own pockets.

Ideally, things like RooCode + Claude are much better, but you need an infinite money glitch.

herbst|9 months ago

On weekends the slow requests are regularly faster than the paid requests.

SafeDusk|9 months ago

I am betting on myself.

I built a minimal agentic framework (with editing capability) that works for a lot of my tasks with just seven tools: read, write, diff, browse, command, ask and think.

One thing I'm proud of is the ability to have it be more proactive in making changes and taking next action by just disabling the `ask` tool.

I won't say it is better than any of the VSCode forks, but it works for 70% of my tasks in an understandable manner. As for the remaining stuff, I can always use Cursor/Windsurf in a complementary manner.

It is open source; have a look at https://github.com/aperoc/toolkami if it interests you.

recov|9 months ago

Nearly all of your comments have been self promo, I would chill out a bit

alentred|9 months ago

Recently, Augment Code. But more generally, the "leader" switches so frequently at this point, I don't commit to use either and switch more or less freely from one to another. It helps to have monthly subscriptions and free cancellation policy.

I expect, or hope for, more stability in the future, but so far, from aider to Copilot, to Claude Code, to Cursor/Windsurf/Augment, almost all of them improve (or at least change) fast and seem to borrow ideas from each other too, so any leader is temporary.

killerstorm|9 months ago

Cursor: Autocomplete is really good. When I compared them, it was without a doubt better than GitHub Copilot autocomplete. Cmd-K (insert/edit snippet at cursor) is good when you use good old Sonnet 3.5. Agent mode is, honestly, quite disappointing; it doesn't feel like they put a lot of thought into prompting and wrapping LLM calls. Sometimes it just fails to submit code changes, which is especially bad as they charge you for every request. Also, I think they overcharge for Gemini, and the Gemini integration is especially poor.

My reference for agent mode is Claude Code. It's far from perfect, but it uses sub-tasks and summarization using smaller haiku model. That feels way more like a coherent solution compared to Cursor. Also Aider ain't bad when you're OK with more manual process.

Windsurf: Have only used it briefly, but agent mode seems somewhat better thought out. For example, they present possible next steps as buttons. Some reviews say it's even more expensive than Cursor in agent mode.

killerstorm|9 months ago

Also something to consider: I have a script I wrote myself which just feeds selected files as a context to LLM and then either writes a response to the stdout, or extracts a file out of it.

That often seems better than using Cursor. I don't really understand why it calls tools when I've selected an entire file to be used as context; tool calls seem to be an unnecessary distraction in this case, and they make calls more expensive. Also, Gemini is less neurotic when I use it with very basic prompts: either Cursor's prompts make it worse, or the need to juggle tool calls distracts it.
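A minimal sketch of the kind of script described above, feeding selected files to an LLM over stdin. All names here are illustrative; the `llm` CLI in the usage comment is just one example of a chat CLI you could pipe into.

```shell
#!/usr/bin/env bash
# build_prompt: concatenate the selected files, each under a header,
# so the model can tell which content belongs to which file.
build_prompt() {
  for f in "$@"; do
    printf '=== %s ===\n' "$f"
    cat "$f"
    printf '\n'
  done
}

# Example usage (assumes a chat CLI such as `llm` on PATH):
# { build_prompt src/*.py; echo "Refactor the duplicated parsing logic."; } | llm
```

Because the whole prompt is just text on stdout, the response can go straight to the terminal, and extracting a file from it is a matter of post-processing, with no tool calls involved.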

bitbasher|9 months ago

Sometimes I feel like I'm the only one sitting here with vim enjoying myself. Letting this whole AI wave float away.

aiaiaiaiaiaiai|9 months ago

I love vim and but I am playing with this stuff too...

There are a couple of Neovim projects that allow this... Avante comes to mind right now.

I will say this: it is a different thought process to get an LLM to write code for you. And right now, the biggest issue for me is the interface. It is wrong somehow; my attention is not being directed to the most important part of what is going on...

matsemann|9 months ago

I don't mind having to learn these new tools, but I don't see any drawbacks in waiting a year or more until it stabilizes.

Same as in the crazy times of frontend libraries when it was a new one every week. Just don't jump on anything, and learn the winner in the end.

Sure, I may not be state of the art. But I can pick up whatever fast. Let someone else do all the experiments.

oakpond|9 months ago

You're not the only one. LLMs are hardly intelligent anyway.

eisfresser|9 months ago

Windsurf at the moment. It can now run multiple "flows" in parallel, so I can set one cascade off to look into a bug somewhere while another cascade implements a feature elsewhere in the code base. The LLMs spit out their tokens in the background, and I drop in eventually to review and accept or ask for further changes.

ximeng|9 months ago

Cursor offers this too - open different tabs in chat and ask for different changes; they’ll run in parallel.

Alifatisk|9 months ago

We are truly living in the future

dkaleta|9 months ago

Since this topic is closely related to my new project, I’d love to hear your opinion on it.

I’m thinking of building an AI IDE that helps engineers write production quality code quickly when working with AI. The core idea is to introduce a new kind of collaboration workflow.

You start with the same kind of prompt, like “I want to build this feature...”, but instead of the model making changes right away, it proposes an architecture for what it plans to do, shown from a bird’s-eye view in the 2D canvas.

You collaborate with the AI on this architecture to ensure everything is built the way you want. You’re setting up data flows, structure, and validation checks. Once you’re satisfied with the design, you hit play, and the model writes the code.

Website (in progress): https://skylinevision.ai

YC Video showing prototype that I just finished yesterday: https://www.youtube.com/watch?v=DXlHNJPQRtk

Karpathy’s post that talks about this: https://x.com/karpathy/status/1917920257257459899

Thoughts? Do you think this workflow has a chance of being adopted?

rkuodys|9 months ago

I quite liked the video. Hope you get to launch the product and I could try it out some day.

The only thing I kept thinking about was: if a correction is needed, you have to make it fully by hand, finding everything and mapping it. However, if the first try was way off, I would like to enter the correction from a "midpoint". So instead of fixing 50%, I would be left with maybe 10 or 20. Don't know if you get what I mean.

michuk|9 months ago

Looks like an antidote for "vibe coding", like it. When are you planning to release something that could be tried? Is this open source?

rancar2|9 months ago

The video was a good intro to the concept. As long as it has repeatable memory for the corrections shown in the video, then the answer to your question about being adopted is “yes!”

kstenerud|9 months ago

It looks interesting, but I couldn't really follow what you were doing in the video or why. And then just as you were about to build, the video ends?

ciaranmca|9 months ago

Just watched the demo video and thought it is a very interesting approach to development, I will definitely be following this project. Good Luck.

reynaldi|9 months ago

VS Code with GitHub Copilot works great, though they are usually a little late to add features compared to Cursor or Windsurf. I use the 'Edit' feature the most.

Windsurf I think has more features, but I find it slower compared to others.

Cursor is pretty fast, and I like how it automatically suggests completions even when moving my cursor to a line of code (unlike others, where you need to 'trigger' it by typing text first).

Honorable mention: Supermaven. It was the first and fastest AI autocomplete I used. But it's no longer updated since they were acquired by Cursor.

lemontheme|9 months ago

OP probably means to keep using vscode. Honestly, best thing you can do is just try each for a few weeks. Feature comparison tables only say so much, particularly because the terminology is still in a state of flux.

I’ve personally never felt at home in vscode. If you’re open to switching, definitely check out Zed, as others are suggesting.

heymax054|9 months ago

90% of their features could fit inside a VS Code extension.

There are already a few popular open-source extensions doing 90%+ of what Cursor is doing - Cline, Roo Code (a fork of Cline), Kilo Code (a fork of Roo Code and something I help maintain).

wrasee|9 months ago

The other 10% being what differentiates them in the market :)

Tokumei-no-hito|9 months ago

I’m curious what the motivation is for all these sub-forks. why not just upstream to cline?

outcoldman|9 months ago

I really like Zed. I haven't tried any of the ones mentioned by the OP. I feel like Zed is getting to the point where it can replace Sublime Text completely (but it's not there yet).

bilekas|9 months ago

Zed is an editor firstly. The OP mentioned options which are basically AI development "agents".

chrisvalleybay|9 months ago

Cursor has had the best UX and results for me so far. Trae's way of adding context is way too annoying. Windsurf has minor UI issues all over. Options that are extensions in VSCode don't cut it in terms of providing fantastic UI/UX because the API doesn't support it.

dexterlagan|9 months ago

You need none of these fancy tools if you iterate over specs instead of iterating over code. I explain it all in here: https://www.cleverthinkingsoftware.com/spec-first-developmen...

WA|9 months ago

I played around with your suggestion for a day or two now. While I'm intrigued, there are some real-world issues with this approach:

- The same spec is processed by the same LLM differently when implementing from scratch. This can maybe be mitigated somewhat by adjusting the temperature slider. But generally speaking, the same spec won't give the same result unless you are very specific.

- Same if you use different LLMs. The same spec can give entirely different results for different LLMs.

- This can probably be mitigated somewhat by getting more specific in the spec, but at some point it is so specific as to be the code itself. Unless of course you don't care that much about the details. But if you don't, you get a slightly different app every time you implement from scratch.

- Gemini 2.5 pro has "reasoning" capabilities and introduces a lot of "thinking" tokens into the context. Let's say you start with a single line spec and iterate from there. Gemini will give you a more detailed spec based on its thinking process. But if you then take the new thinking-process spec as a new starting point for the next iteration of the spec, you get even more thinking. In short, the spec gets automatically expanded by the way of "thinking" with reasoning models.

- Produced code can have small bugs, but they are not really worth putting in the spec, because they are an implementation detail.

I'll keep experimenting with it, but I don't think this is the holy grail of AI assisted coding.

osigurdson|9 months ago

I think a series of specs as the system instruction could help guide it. You can't just go from spec to app though, at least in my experience.

esafak|9 months ago

That is slow and expensive.

Artgor|9 months ago

Claude Code. And... Junie in Jetbrains IDE. It appeared recently and I'm really impressed by its quality. I think it is on the level of Claude Code.

Euphorbium|9 months ago

I think it uses claude code by default, it is literally the same thing, with different (better) interface.

khwhahn|9 months ago

I wish my own coding would just be augmented, like somebody looking over my shoulder. The problem with current AI coding is that you don't know your code base anymore. Basically, I want something helping me figure out stuff faster, update documentation, etc.

satvikpendem|9 months ago

You can do that by not using the agent workflow and just using the ask / chat feature, and coding the actual code yourself.

frainfreeze|9 months ago

"Throughly review my code change (git diff HEAD), {extra context etc}"

CuriouslyC|9 months ago

Personally, if you take the time to configure it well, I think Aider is vastly superior. You can have 4 terminals open in a grid and be running agentic coding workflows on them and 4x the throughput of someone in Cursor, whereas Cursor's UI isn't really amenable to running a bunch of instances and managing them all simultaneously. That plus Aider lets you do more complex automated Gen -> Typecheck -> Lint -> Test workflows with automated fixing.
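A sketch of what such an automated Gen -> Typecheck -> Lint -> Test loop might look like. The concrete tools here (mypy, ruff, pytest) are stand-ins for whatever your stack uses, and the loop itself is a simplified illustration, not Aider's built-in behavior; `aider --yes --message` is Aider's way of running one instruction non-interactively.

```shell
# fix_loop: run the checks; on failure, hand control back to the agent and retry.
# Arguments: typecheck command, lint command, test command, fix command.
# Commands are passed as strings and word-split intentionally.
fix_loop() {
  typecheck="$1"; lint="$2"; tests="$3"; fix="$4"
  for attempt in 1 2 3; do
    if $typecheck && $lint && $tests; then
      echo "all checks passed on attempt $attempt"
      return 0
    fi
    # Ask the agent to repair whatever failed, then re-run the checks.
    $fix "checks failed on attempt $attempt, please fix"
  done
  echo "gave up after 3 attempts"
  return 1
}

# Real usage (commented out; assumes aider, mypy, ruff, pytest on PATH):
# fix_loop "mypy src/" "ruff check src/" "pytest -q" "aider --yes --message"
```

Run one of these loops per terminal, each in its own working copy, and you get the parallel-throughput setup described above.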

findjashua|9 months ago

why can't you create separate git worktrees, and open each worktree in a separate IDE window? then you get the same functionality, no?
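For anyone unfamiliar with the worktree approach suggested above: `git worktree` gives each branch its own checkout of the same repository, so each IDE window or agent session gets an isolated working directory. A self-contained demo (it builds a throwaway repo so the commands run anywhere):

```shell
# Create a throwaway repo so the worktree commands below run anywhere.
cd "$(mktemp -d)"
git init -q myproj && cd myproj
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# One worktree (and branch) per task; open each directory in its own window.
git worktree add ../myproj-featureA -b featureA
git worktree add ../myproj-featureB -b featureB
git worktree list

# Clean up a worktree once its branch is merged:
# git worktree remove ../myproj-featureA
```

The worktrees share one object store, so commits made in one are immediately visible to the others without pushing or fetching.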

pbowyer|9 months ago

I've had trials for both running and tested both on the same codebases.

Cursor works roughly how I've expected. It reads files and either gets it right or wrong in agent mode.

Windsurf seems restricted to reading files 50 lines at a time, and often will stop after 200 lines [0]. When dealing with existing code I've been getting poorer results than Cursor.

As to autocomplete: perhaps I haven't set up either properly (for PHP), but the autocomplete in both is good for pattern-matching changes I make, and terrible for anything that requires knowledge of what methods an object has, the parameters a method takes, etc. They both hallucinate wildly, so I end up doing bits of editing in Cursor/Windsurf while having the same project open in PhpStorm to make use of its intellisense.

I'm coming to the end of both trials and the AI isn't adding enough over Jetbrains PhpStorm's built in features, so I'm going back to that until I figure out how to reduce hallucinations.

0. https://www.reddit.com/r/Codeium/comments/1hsn1xw/report_fro...

thesurlydev|9 months ago

I use the Windsurf Cascade plugin in JetBrains IDEs. My current flow is to rough in the outline of what I want, then generally use the plugin to improve what I have by creating tests, making performance improvements, or just making things more idiomatic. I need to invest time in adding rules at both a global and project level, which should make the overall experience even better.

leonidasv|9 months ago

I use the same setup, works like a charm.

ksymph|9 months ago

I've been flipping between the two, and overall I've found Cursor to be the better experience. Its autocomplete feels better at 'looking ahead' and figuring out what I might be doing next, while Windsurf tends to focus more on repeating whatever I've done recently.

Also, while Windsurf has more project awareness, and it's better at integrating things across files, actually trying to get it to read enough context to do so intelligently is like pulling teeth. Presumably this is a resource-saving measure but it often ends up taking more tokens when it needs to be redone.

Overall Cursor 'just works' better IME. They both have free trials though so there's little reason not to try both and make a decision yourself. Also, Windsurf's pricing is lower (and they have a generous free tier) so if you're on a tight budget it's a good option.

TiredOfLife|9 months ago

Windsurf autocomplete is free.

Cursor autocomplete stops working after trial ends.

jsumrall|9 months ago

Amazon Q. Claude Code is great (the best imho, what everything else measures against right now), and Amazon Q seems almost as good and for the first week I've been using it I'm still on the free tier.

The flat pricing of Claude Code seems tempting, but it's probably still cheaper for me to go with usage pricing. I feel like loading my Anthropic account with the minimum of $5 each time would last me 2-3 days depending on usage. Some days it wouldn't last even a day.

I'll probably give Open AI's Codex a try soon, and also circle back to Aider after not using it for a few months.

I don't know if I misunderstand something with Cursor or Copilot. It seems so much easier to use Claude Code than Cursor, as Claude Code has many more tools for figuring things out. Cursor also required me to add files to the context, which I thought it should 'figure out' on its own.

wordofx|9 months ago

> I don't know if I misunderstand something with Cursor or Copilot. It seems so much easier to use Claude Code than Cursor, as Claude Code has many more tools for figuring things out. Cursor also required me to add files to the context, which I thought it should 'figure out' on its own.

Cursor can find files on its own. But if you point it in the right direction it has far better results than Claude Code.

jatins|9 months ago

This is the first time I'm seeing someone say good things about Amazon Q.

Do they publish any benchmark sheet on how it compares against others?

quintes|9 months ago

I remember asking Amazon Q something and it wouldn't reply because of a security policy or something. As far as I can remember, it was a legit question about an IAM policy I was trying to configure. I went back to Google search and figured it out.

cvquesty|9 months ago

I feel a bit out of place here, as I’m not a dev… I come from the operational side, but do all my work in Puppet code. I was using Codeium + VSC and life was wonderful. One day, though, everything updated and Codeium was gone in favor of Windsurf and things got crazy. VSC no longer understood Puppet code and didn’t seem to be able to access the language tools from the native Puppet Development Kit plugins either.

The crazy part is my Vim setup has the Codeium plugins all still in place, and it works perfectly. I’m afraid if I update the plugin to a windsurf variant, it will completely “forget” about Puppet, its syntax, and everything it has “learned” from my daily workflow over the last couple years.

Has anyone else seen anything similar?

kioku|9 months ago

It might seem contrary to the current trend, but I've recently returned to using nvim as my daily driver after years with VS Code. This shift wasn't due to resource limitations but rather the unnecessary strain from agentic features consuming high amounts of resources.

hliyan|9 months ago

I think each responder should include their level of coding proficiency with their answer, or at least whether they are able to (or even bother to) read the code that the tool generates. Preferences would vary wildly based on it.

emrah|9 months ago

> Neither? I'm surprised nobody has said it yet. I turned off AI autocomplete ...

This represents one group of developers and is certainly valid for that group. To each their own

For another group, where I belong, AI is a great companion! We can handle the noise and development speed is improved as well as the overall experience.

I prefer VSCode and GitHub Copilot. My opinion is this combo will eventually eat all the rest, but that's beside the point.

Agent mode could be faster, sometimes it is rather slow thinking but not a big deal. This mode is all I use these days. Integration with the code base is a huge part of the great experience

anonymoushn|9 months ago

I evaluated Windsurf at a friend's recommendation around half a year ago and found that it could not produce any useful behaviors on files above a thousand lines or so. I understand this is mostly a property of the model, but certainly also a property of the approach used by the editor of just tossing the entire file in, yeah? I haven't tried any of these products since then, but it might be worth another shot because Gemini might be able to handle these files.

eadz|9 months ago

A year is a long time. Even in the past few months it has improved a lot.

sumedh|9 months ago

Windsurf has improved a lot in the last few months.

chironjit|9 months ago

I've been using Cursor for several months. I absolutely hate Agent mode - it jumps too far ahead, and its solutions, though valid, can overcomplicate the whole flow. It can also give you a whole bunch of code that you have to blindly accept will work, and it's not super great at making good file layouts, etc. I've switched to autocomplete, with Ask mode when I'm stuck.

robch|9 months ago

I'm currently trying to use Windsurf at work since we have a license. The problem I have is that I find its autocomplete distracting from what I want to write. And when I don't know how to do a specific task, it will usually hallucinate an answer because the task is not in its training data. I mostly do staff-level Android tooling work, which is probably not a common coding task.

deafpolygon|9 months ago

I use neovim now, after getting tired of the feature creep and the constant chasing of shiny new features.

AI is not useful when it does the thinking for you. It's just advanced snippets at that point. I only use LLMs to explain things or to clarify a topic that doesn't make sense to me right away. That's where it shows its real strength.

Using AI for autocomplete? I turn it off.

snthpy|9 months ago

Cline?

triptych|9 months ago

I love Cline and use it every day. It works the way I think and makes smart decisions about features.

int_19h|9 months ago

... if you can afford it. Pay per token can get expensive with large context.

ReDeiPirati|9 months ago

Recently started using Cursor for adding a new feature to a small codebase for work, after a couple of years in which I didn't code. It took me a couple of tries to figure out how to work with the tool effectively, but it worked great! I'm now learning how to use it with TaskMaster; it's such a different way to do and play with software. Oh, one important note: I went with Cursor also because of the pricing, which, despite being confusing in terms of fast vs. slow requests, feels less consumption-based.

BTW, there's a new OSS competitor in town that hit the front page a couple of days ago - Void: Open-source Cursor alternative https://news.ycombinator.com/item?id=43927926

Larrikin|9 months ago

I'm glad there are finally multiple agentic options for JetBrains, so I no longer have to occasionally switch over to VSCode and its various forks.

Copilot at work and Junie at home. I found nothing about my VSCode excursions to be better than Sublime or IntelliJ.

mattew|9 months ago

I’ve only played with Junie and Aider so far and like the approach the Junie agent takes of reading the code to understand it vs the Aider approach of using the repo map to understand the codebase.

artdigital|9 months ago

I find Windsurf almost unusable. It’s hard to explain but the same models in Zed, Windsurf, Copilot and Cursor produce drastically worse code in Windsurf for whatever reason. The agent also tends to do way more stupid things at random, like wanting to call MCP tools, creating new files, then suddenly forgetting how to use tools and apologizing a dozen times

I can’t really explain or prove it, but it was noticeable enough to me that I canceled my subscription and left Windsurf

Maybe a prompting or setting issue? Too high temperature?

Nowadays Copilot got good enough for me that it became my daily driver. I also like that I can use my Copilot subscription in different places like Zed, Aider, Xcode

int_19h|9 months ago

I have also observed this with Gemini 2.5 in Cursor vs Windsurf.

Cline seems to be the best thing, I suspect because it doesn't do any dirty tricks with trimming down the context under the hood to keep the costs down. But for the same reason, it's not exactly fun to watch the token/$ counter ticking as it works.

admiralrohan|9 months ago

Using Windsurf since the start and I am satisfied. Didn't look beyond it. Focused on actually doing the coding. It's impossible to keep up with daily AI news and if something groundbreaking happens it will go viral.

quickthoughts|9 months ago

VSCode with autocomplete and Gemini 2.5 Pro in a standalone chat (pick any interface that works for you, e.g. LibreChat, Vertex, etc). The agents-in-an-IDE experience is hella slow in my opinion.

Plus, it's less about the actual code generation and more about how to use it effectively. I wrote a simple piece on how I use it to automate the boring parts of dev work to great effect: https://quickthoughts.ca/posts/automate-smarter-maximizing-r...

webprofusion|9 months ago

I just use Copilot (across VS Code, VS etc), it lets you pick the model you want and it's a fixed monthly cost (and there is a free tier). They have most of the core features of these other tools now.

Cursor, Windsurf et al have no "moat" (in startup speak), in that a sufficiently resourced organization (e.g. Microsoft) can just copy anything they do well.

VS code/Copilot has millions of users, cursor etc have hundreds of thousands of users. Google claims to have "hundreds of millions" of users but we can be pretty sure that they are quoting numbers for their search product.

vijucat|9 months ago

If you have any intellectual property worth protecting or need to comply with HIPAA, a completely local installation of Cline or Aider or Codeium with LM Studio running Qwen or DeepSeek Coder works well. If you'd rather not bother, I don't see any option besides GitHub Copilot for Business. Sure, they're slower to catch up to Cursor, but catch up they will.

https://github.com/features/copilot/plans?cft=copilot_li.fea...
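For reference, LM Studio's local server speaks an OpenAI-compatible API on localhost:1234 by default, so a fully local client needs nothing beyond the standard library. A minimal sketch (the model name is whatever you have loaded locally):

```python
import json
import urllib.request

# LM Studio exposes an OpenAI-compatible server on localhost:1234 by
# default; nothing in this request ever leaves the machine.
LOCAL_URL = "http://localhost:1234/v1/chat/completions"

def build_request(prompt, model="deepseek-coder"):  # model name is illustrative
    """Build a chat-completion request against the local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        LOCAL_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# In real use you'd send it and read the completion:
# with urllib.request.urlopen(build_request("explain this regex")) as r:
#     print(json.load(r)["choices"][0]["message"]["content"])
```

The same request shape works for Ollama or any other OpenAI-compatible local server; only the base URL changes.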

osigurdson|9 months ago

Which entity is going to steal your IP? Cursor / Windsurf or OpenAI / Anthropic?

asar|9 months ago

Personally, I've been using Cursor since day 1. Lately with Gemini 2.5 Pro. I've also started experimenting with Zed and local models served via ollama in the last couple of days. Unfortunately, without good results so far.

I've created a list of self-hostable alternatives to cursor that I try to keep updated. https://selfhostedworld.com/alternative/cursor/

ibrahimsow1|9 months ago

Windsurf. The context awareness is superior compared to cursor. It falls over less and is better at retrieving relevant snippets of code. The premium plan is cheaper too, which is a nice touch.

auggierose|9 months ago

How about Cursor vs. Windsurf vs. (Claude Desktop + MCP)?

Haven't tried out Cursor / Windsurf yet, but I can see how I can adapt Claude Desktop to specifically my workflow with a custom MCP server.

jzacharia|9 months ago

Neither! Neovim for most of my work and vscode w/ appropriate plugins when it's needed. If I need any LLM assistance I just run Claude Code in the terminal.

whywhywhywhy|9 months ago

Cursor is good for basic stuff, but Windsurf consistently solves issues Cursor fails on, even after 40+ minutes of retries and prompt changes.

Cursor is very lazy about looking beyond the current context, or sometimes any context at all; it feels like it's trying to one-shot a guess without looking deeper.

The bad thing about Windsurf is that the plans are pretty limited, and the unlimited “cascade base” model felt dumb the times I used it, so ultimately I use Cursor until I hit a wall, then switch to Windsurf.

powerapple|9 months ago

I tested Windsurf last week; it installed all dependencies into my global Python... it didn't know best practices for Python and didn't create any virtual env... I am disappointed. My Cursor experience was slightly better. Still, one issue I had was making sure it does not change the parts of the code I don't want it to change. Every time I asked it to do something for A, it rewrote B in the process - very annoying.

My best experience so far is v0.dev :)

brahyam|9 months ago

Cursor. Good price, the predictive next edit is great, it's good enough with big code bases, and with auto mode I don't even spend all my premium requests.

I've tried VSCode with Copilot a couple of times and it's frustrating: you have to point out individual files for edits, and project-wide requests are a pain.

My only pain is the workflow for developing mobile apps where I have to switch back and forth between Android Studio and Xcode as vscode extensions for mobile are not so good

suninsight|9 months ago

https://nonbios.ai - [Disclosure: I am working on this.]

- We are in public beta and free for now.

- Fully Agentic. Controllable and Transparent. Agent does all the work, but keeps you in the loop. You can take back control anytime and guide it.

- Not an IDE, so don't compete with VSCode forks. Interface is just a chatbox.

- More like Replit - but full-stack focused. You can build backend services.

- Videos are up at youtube.com/@nonbios

jonwinstanley|9 months ago

Has anyone had any joy using a local model? Or is it still too slow?

On something like a M4 Macbook Pro can local models replace the connection to OpenAi/Anthropic?

frainfreeze|9 months ago

For advanced autocomplete (not code generation, though it can do that too), basic planning, looking things up instead of web search, review & summary, even one-shotting smaller scripts, the 32B Q4 models have proved very good for me (24GB VRAM RTX 3090). All LLM caveats still apply, of course. Note that setting up a local LLM in Cursor is a pain because they don't support localhost. Ngrok, or a VPS and reverse SSH, solves that though.
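The reverse-SSH workaround can look roughly like this (hostnames and ports are illustrative; 11434 is Ollama's default port):

```shell
# On the workstation with the GPU: expose the local Ollama server
# on the VPS's port 8000 via a reverse tunnel.
ssh -N -R 8000:localhost:11434 user@my-vps.example.com

# Then point Cursor's OpenAI-compatible base URL at the VPS:
#   http://my-vps.example.com:8000/v1
# (For the port to be reachable from outside, sshd on the VPS needs
# `GatewayPorts yes`, or front the tunnel with a reverse proxy.)
```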

int_19h|9 months ago

It's not so much that it's slow, it's that local models are still a far cry from what SOTA cloud LLM providers offer. Depending on what you're actually doing, a local model might be good enough.

monster_truck|9 months ago

Void. I'd rather run my own models locally https://voideditor.com/

traktorn|9 months ago

Which model are you running locally? Is it faster than waiting for Claudes generation? What gear do you use?

retinaros|9 months ago

I am using both. Windsurf feels more complete, less clunky. They are very close though, and the pace of major updates is crazy.

I don't like CLI-based tools for coding. Don't understand why they are being shilled. Claude Code is maybe better at coding from scratch because it is only raw power, eating tokens like there is no tomorrow, but it is the wrong interface for building anything serious.

websap|9 months ago

Currently using cursor. I've found cursor even without the AI features to be a more responsive VS Code. I've found the AI features to be particularly useful when I contain the blast radius to a unit of work.

If I am continuously able to break down my work into smaller pieces and build a tight testing loop, it does help me be more productive.

random42|9 months ago

How do you define a unit of work for your purposes?

geoffbp|9 months ago

Vs code with agent mode

mark_l_watson|9 months ago

I am retired now, out of the game, but I also suggest an alternative: running locally with open-codex, Ollama, and the qwen3 models and gemma3, and when necessary use something hosted like Gemini 2.5 Pro without an IDE.

I like to strike a balance between coding from scratch and using AI.

can16358p|9 months ago

While I haven't used Windsurf, I've been using Cursor and I LOVE it: especially the inline autocomplete is like reading my mind and making the work MUCH faster.

I can't say anything about Windsurf (as I haven't tried yet) but I can confidently say Cursor is great.

Alifatisk|9 months ago

Trae.ai actually, otherwise Windsurf

pyetro|9 months ago

The only thing preventing me from sticking with Cursor/Windsurf is the lack of a sync feature. I use different machines and it's crucial to keep the same exact configuration on each of them :(

jfoster|9 months ago

I'm not sure the answer matters so much. My guess is that as soon as one of them gains any notable advantage over the other, the other will copy it as quickly as possible. They're using the same models under the hood.

ChocolateGod|9 months ago

I use Windsurf but it's been having ridiculous downtime lately.

I can't use Cursor because I don't use Ubuntu, which is what their Linux packages are compiled against, and they don't run on my non-Ubuntu distro of choice.

hnlurker22|9 months ago

Regardless of which, my favorite model is ChatGPT's. I feel they're the only ones talking to customers. The other models are not as pleasant to work with as a software engineer.

adamgroom|9 months ago

I like Cursor; the autocomplete is great most of the time. As others have said, use a shortcut to disable it when needed.

The agents are a bit beta; they can't solve bugs very often and will write a load of garbage if you let them.

skrhee|9 months ago

Zed! I find it to be less buggy and generally more intuitive to use.

taherchhabra|9 months ago

Claude Code is the best so far; I am using the $200 plan. In terms of the feature matrix, all tools are almost the same, with some hits and misses, but speed is where Claude Code wins.

smerrill25|9 months ago

Do you think you use more than $200 worth of API credits in a month? I've used both Claude Code and Cursor, and I find myself liking the terminal CLI, but the price is much more than $20 per month for me.

Daedren|9 months ago

Considering Microsoft is closing down on the ecosystem, I'd pick VSCode with Copilot over those two.

It's a matter of time before they're shuttered or their experience gets far worse.

jonwinstanley|9 months ago

Unlikely they'll disappear. I currently use Cursor but am happy to change if a competitor is markedly better

sumedh|9 months ago

MS is slow to release new features though.

rcarmo|9 months ago

Neither. VS Code or Zed.

shaunxcode|9 months ago

neither : my pen is my autocomplete

sharedptr|9 months ago

Personally copilot/code assist for tab autocomplete, if I need longer boilerplate I request it to the LLM. Usually VIM with LSP.

Anything that’s not boilerplate I still code it

osigurdson|9 months ago

I use Windsurf and vim. Windsurf is good for pure exploration using a vibe coding style but prefer to hand-code anything that I am going to keep.

esha_manideep|9 months ago

Latest cursor update where they started charging for tokens is pretty good. I don't use non-MAX mode on cursor anymore

michelsedgh|9 months ago

The best thing about Cursor is that for $20 you basically get unlimited requests. I know you get “slower” requests after a certain amount, but honestly you don't feel it being slow, and reasoning models take so long to answer anyway that you send the prompt and go do other stuff. So the slowness doesn't really matter, and it's basically unlimited compute, you know?

ebr4him|9 months ago

Both, most times one works better than the other.

ramesh31|9 months ago

Cline beats both, and it costs nothing but direct token usage from your LLM provider.

Cursor/Windsurf/et. al. are pointless middlemen.

speedgoose|9 months ago

I’m using Github Copilot in VScode Insiders, mostly because I don’t want yet another subscription. I guess I’m missing out.

pk97|9 months ago

the age of swearing allegiance to a particular IDE/AI tool is over. I keep switching between Cursor and GH Copilot and for the most part they are very similar offerings. Then there's v0, Claude (for its Artifacts feature) and Cline which I use quite regularly for different requirements.

asdf6969|9 months ago

It changes too quickly for this to really matter. just pick the one you think looks and feels better to you

coolcase|9 months ago

Still on Codeium, lol! Might give Aider another spin. It has never been quite good enough for my needs, but tech evolves.

anaisbetts|9 months ago

Whatever the answer is, if you don't like it wait a week. They are constantly going back and forth.

pmelendez|9 months ago

I have had so much fun lately just with vanilla VS Code and Claude Code. Aider is a close second.

kotaKat|9 months ago

Neither. Do some real work instead of using some cancerous shitty autocomplete.

vasachi|9 months ago

I’d just wait a bit. At current rate of progress winner will be apparent sooner rather than later.

n_ary|9 months ago

I agree with welder. Preferably neither. Both of these are custom forks and run the risk of being acquired and enshittified in the future.

If you are using VScode, get familiar with cline. Aider is also excellent if you don’t want to modify your IDE.

Additionally, JetBrains IDEs now also have built-in local LLMs, and their autocomplete is actually fast and decent. They also added a new chat side panel in a recent update.

The goal is NOT to change your workflow or dev env, but to integrate these tools into your existing flow, despite what the narrative says.

MangoCoffee|9 months ago

VScode + Github Copilot Pro. $10 per month to try out AI code assist is cheap enough

dvtfl|9 months ago

If you don't mind not having DAP and Windows support, then Zed is great.

CyberCub|9 months ago

For me, it's Cursor, sometimes Augment.

Gemini 2.5 + Claude 3.7 work very well

delduca|9 months ago

Zed. It is blazing fast.

da_me|9 months ago

Cursor for personal projects and Just Pycharm for work projects.

warthog|9 months ago

Windsurf - the repo code awareness is much higher than Cursor.

tacker2000|9 months ago

hijacking this thread: Whats the best AI tool for NeoVim ?

w4|9 months ago

I’ve really been enjoying the combination of CodeCompanion with Gemini 2.5 for chat, Copilot for completion, and Claude Code/OpenAI Codex for agentic workflows.

I had always wanted to get comfortable with Vim, but it never seemed worth the time commitment, especially with how much I’ve been using AI tools since 2021 when Copilot went into beta. But recently I became so frustrated by Cursor’s bugs and tab completion performance regressions that I disabled completions, and started checking out alternatives.

This particular combination of plugins has done a nice job of mostly replicating the Cursor functionality I used routinely. Some areas are more pleasant to use, some are a bit worse, but it’s nice overall. And I mostly get to use my own API keys and control the prompts and when things change.

I still need to try out Zed’s new features, but I’ve been enjoying daily driving this setup a lot.

sidcool|9 months ago

VS Code with Copilot.

urbandw311er|9 months ago

This is the way.

Getting great results both in chat, edit and now agentic mode. Don’t have to worry about any blocked extensions in the cat and mouse game with MS.

manojkumarsmks|9 months ago

Using cursor.. pretty good tool. Pick one and start.

rajasimon|9 months ago

I think zed is the answer you're looking for.

jlouis|9 months ago

Anything which can't exfiltrate your data

brokegrammer|9 months ago

Lately I switched to using a triple monitor setup and coding with both Cursor and Windsurf. Basically, the middle monitor has my web browser that shows the front-end I'm building. The left monitor has Cursor, and right one has Windsurf. I start coding with Cursor first because I'm more familiar with its interface, then I ask Windsurf to check if the code is good. If it is, then I commit. Once I'm done coding a feature, I'll also open VScode in the middle monitor, with Cline installed, and I will ask it to check the code again to make sure it's perfect.

I think people who ask the "either or" question are missing the point. We're supposed to use all the AI tools, not one or two of them.

throwaway4aday|9 months ago

Why not just write a script that does this but with all of the model providers and requests multiple completions from each? Why have a whole ass editor open just for code review?
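A minimal sketch of that idea - fanning one prompt out to several OpenAI-compatible endpoints in parallel and collecting the answers for review. The provider names and URLs here are placeholders, and the transport is injected as a callable so any HTTP client can be plugged in:

```python
import concurrent.futures

# Hypothetical provider endpoints -- substitute whatever
# OpenAI-compatible services you actually use.
PROVIDERS = {
    "openai": "https://api.openai.com/v1/chat/completions",
    "anthropic-proxy": "http://localhost:4000/v1/chat/completions",
    "local-ollama": "http://localhost:11434/v1/chat/completions",
}

def fan_out(prompt, ask):
    """Send the same prompt to every provider in parallel.

    `ask` is a callable (name, url, prompt) -> str, so the network
    layer can be swapped out (e.g. a requests.post wrapper in real use).
    """
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {
            name: pool.submit(ask, name, url, prompt)
            for name, url in PROVIDERS.items()
        }
        # Collect each provider's answer, blocking until all finish.
        return {name: f.result() for name, f in futures.items()}

if __name__ == "__main__":
    # Stubbed transport for illustration; a real `ask` would POST the
    # prompt and return the completion text.
    reviews = fan_out("review this diff", lambda n, u, p: f"{n}: ok")
    for name, verdict in reviews.items():
        print(name, "->", verdict)
```

Compared to keeping three editors open, this gives you all the second opinions in one terminal, and you only pay per token.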

cloudking|9 months ago

Tried them all extensively, Cursor is SOTA

vb-8448|9 months ago

pycharm + augment code + Gemini/Claude to generate the prompt for augment code.

indigodaddy|9 months ago

What do you use to generate the prompt for Gemini/Claude?

badmonster|9 months ago

I'm on Cursor; performance has gone down. Thinking about Windsurf.

m1117|9 months ago

Cursor is a better vibe IMO

LOLwierd|9 months ago

Zed!! The base editor is just better than VSCode.

and they just released agentic editing.

adocomplete|9 months ago

Amp.

Early access waitlist -> ampcode.com

dotemacs|9 months ago

This is a product by Sourcegraph https://sourcegraph.com who already have a solution in this space.

Is this something wildly different to Cody, your existing solution, or just a "subtle" attempt to gain more customers?

kondu|9 months ago

I'd love to try it, could you please share an invite? My email is on my profile page.

kixpanganiban|9 months ago

Interesting! Do you have an invite to spare? My email is in my bio

weiwenhao|9 months ago

Tab completion is a nightmare when it produces unexpected code.

anotheryou|9 months ago

Windsurf, no autocomplete.

You should also ask whether people actually used both :)

jit-it|9 months ago

Notepad++ best

welder|9 months ago

Neither? I'm surprised nobody has said it yet. I turned off AI autocomplete, and sometimes use the chat to debug or generate simple code but only when I prompt it to. Continuous autocomplete is just annoying and slows me down.

imiric|9 months ago

This is the way.

All this IDE churn makes me glad to have settled on Emacs a decade ago. I have adopted LLMs into my workflow via the excellent gptel, which stays out of my way but is there when I need it. I couldn't imagine switching to another editor because of some fancy LLM integration I have no control over. I have tried Cursor and VS Codium with extensions, and wasn't impressed. I'd rather use an "inferior" editor that's going to continue to work exactly how I want 50 years from now.

Emacs and Vim are editors for a lifetime. Very few software projects have that longevity and reliability. If a tool is instrumental to the work that you do, those features should be your highest priority. Not whether it works well with the latest tech trends.

elAhmo|9 months ago

Cursor/Windsurf and similar IDEs and plugins are more than autocomplete on steroids.

Sure, you might not like it and think you as a human should write all code, but frequent experience in the industry in the past months is that productivity in the teams using tools like this has greatly increased.

It is not unreasonable to think that someone deciding not to use tools like this will not be competitive in the market in the near future.

alentred|9 months ago

To be fair, I think the most value is added by Agent modes, not autocomplete. And I agree that AI-autocomplete is really quite annoying, personally I disable it too.

But coding agents can indeed save some time writing well-defined code and be of great help when debugging. But then again, when they don't work on a first prompt, I would likely just write the thing in Vim myself instead of trying to convince the agent.

My point being: I find agent coding quite helpful really, if you don't go overzealous with it.

blitzar|9 months ago

I shortcut the "cursor tab" and enable or disable it as needed. If only the AI were smart enough to learn when I do and don't want it (like Clippy in the MS days) - when you're manually toggling it on/off, clear patterns emerge (to me at least) as to when I do and don't want it.

nsteel|9 months ago

I can't even get simple code generation to work for VHDL. It just gives me garbage that does not compile. I have to assume this is not the case for the majority of people using more popular languages? Is this because the training data for VHDL is far more limited? Are these "AIs" not able to consume the VHDL language spec and give me actual legal syntax at least?! Or is this because I'm being cheap and lazy by only trying free chatGPT and I should be using something else?

InsideOutSanta|9 months ago

Yeah, I use IntelliJ with the chat sidebar. I don't use autocomplete, except in trivial cases where I need to write boilerplate code. Other than that, when I need help, I ask the LLM and then write the code based on its response.

I'm sure it's initially slower than vibe-coding the whole thing, but at least I end up with a maintainable code base, and I know how it works and how to extend it in the future.

medhir|9 months ago

+100. I’ve found the “chat” interface most productive as I can scope a problem appropriately.

Cursor, Windsurf, etc tend to feel like code vomit that takes more time to sift through than working through code by myself.

Draiken|9 months ago

Same here. It's extremely distracting to see the random garbage that the autocomplete keeps trying to do.

I said this in another comment but I'll repeat the question: where are these 2x, 10x or even 1.5x increases in output? I don't see more products, more features, less bugs or anything related to that since this "AI revolution".

I keep seeing this being repeated ad nauseam without any real backing of hard evidence.

If this was true and every developer had even a measly 30% increase in productivity, it would be like a team of 10 is now 13. The amount of code being produced would be substantially more and as a result we should see an absolute boom in new... everything.

New startups, new products, new features, bugs fixed and so much more. But I see absolutely nothing but more bullshit startups that use APIs to talk to these models with a few instructions.

Please someone show me how I'm wrong because I'd absolutely love to magically become way more productive.

nsonha|9 months ago

Your comment is about 2 years late. Autocomplete is not the focus of AI IDEs anymore, even though it has gotten really good with "next edit prediction". People these days use AI for the agentic mode.

admiralrohan|9 months ago

That is interesting. Which tech are you using?

Are you getting irrelevant suggestions as those autocompletes are meant to predict the things you are about to type.

chironjit|9 months ago

Absolutely hate the agent mode but I find autocomplete with asks to be the best for me. I like to at least know what I'm putting in my codebase and it genuinely makes me faster due to:

1) Stops me overthinking the solution
2) Being able to ask it the pros and cons of different solutions
3) The multi-x speedup means less worry about throwing away a solution/code I don't like and rewriting/refactoring
4) Really good at completing certain kinds of "boilerplate-y" code
5) Removes the need to know the specific language implementation, as long as you know the principle (for example pointers, structs, types, mutexes, generics, etc.). My go-to rule now is that I won't use it if I'm unfamiliar with the principle itself, as opposed to the language's implementation of it
6) Absolute beast when it comes to debugging simple to medium complexity bugs

xnorswap|9 months ago

AI autocomplete can be infuriating if like me, you like to browse the public methods and properties by dotting the type. The AI autocomplete sometimes kicks in and starts writing broken code using suggestions that don't exist and that prevents quickly exploring the actual methods available.

I have largely disabled it now, which is a shame, because there are also times it feels like magic and I can see how it could be a massive productivity lever if it needed a tighter confidence threshold to kick in.

rco8786|9 months ago

This is where I landed too. Used Cursor for a while before realizing that it was actually slowing me down because the PR cycle took so much longer, due to all the subtle bugs in generated code.

Went back to VSCode with a tuned down Copilot and use the chat or inline prompt for generating specific bits of code.

nyarlathotep_|9 months ago

I'm past the honeymoon stage for LLM autocomplete.

I just noticed CLion moved to a community license, so I re-installed it and set up Copilot integration.

It's really noisy, and somehow the same binding (tab) for the built-in autocomplete "collides" with LLM suggestions (with varying latency). It's totally unusable in this state; you'll attempt to populate a single local variable or something and end up with 12 lines of unrelated code.

I've had much better success with VSCode in this area, but the complete suggestions via LLM in either are usually pretty poor; not sure if it's related to the model choice differing for auto complete or what, but it's not very useful and often distracting, although it looks cool.

kristopolous|9 months ago

Agreed. You may like the arms-length stuff here: https://github.com/day50-dev/llmehelp . shell-hook.zsh and screen-query have been life-changing

I always forget syntax for things like ssh port forwarding. Now just describe it at the shell:

$ ssh (take my local port 80 and forward it to 8080 on the machine betsy) user@betsy

or maybe:

$ ffmpeg -ss 0:10:00 -i somevideo.mp4 -t 1:00 (speed it up 2x) out.webm

I press ctrl+x x and it will replace the english with a suggested command. It's been a total game changer for git, jq, rsync, ffmpeg, regex..
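For the curious, the expansions those two prompts should produce look something like this (my reconstruction, not output from the tool; the 2x speed-up filter here drops audio for simplicity):

```shell
# Local port 80 forwarded to port 8080 on betsy
# (binding port 80 locally requires root)
ssh -L 80:localhost:8080 user@betsy

# Clip starting at 0:10:00, one minute long, sped up 2x (video only)
ffmpeg -ss 0:10:00 -i somevideo.mp4 -t 1:00 \
  -filter:v "setpts=0.5*PTS" -an out.webm
```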

For more involved stuff there's screen-query: confusing crashes, strange terminal errors, weird config scripts - it allows a joint investigation, whereas aider and friends just feel like I'm asking AI to fuck around.

aqme28|9 months ago

I thought Cursor was dumb and useless too when I was just using autocomplete. It's the "agent chat" on the sidebar that is where it really shines.

wutwutwat|9 months ago

What folks don't understand, or keep in mind maybe, is that in order for that autocomplete to work, all your code is going up to a third party as you write it or open files. This is one of the reasons I disable it. I want to control what I send via the chat side panel by explicitly giving it context. It's also pretty useless most of the time, generating nonsense and not even consistently either.

et1337|9 months ago

I was 100% in agreement with you when I tried out Copilot. So annoying and distracting. But Cursor’s autocomplete is nothing like that. It’s much less intrusive and mostly limits itself to suggesting changes you’ve already done. It’s a game changer for repetitive refactors where you need to do 50 nearly identical but slightly different changes.

raverbashing|9 months ago

Yeah

AI autocomplete is a feature, not a product (to paraphrase SJ)

I can understand Windsurf getting the valuation as they had their own Codeium model

$B for a VSCode fork? Lol

vitro|9 months ago

I had turned autocomplete off as well. Way too many times it was just plain wrong and distracting. I'd like it to be turned on for method documentation only, though, where it worked well once the method was completed, but so far I wasn't able to customize it this way.

whywhywhywhy|9 months ago

Having it as tab was a mistake, tab complete for snippets is fine because it’s at the end of a line, tab complete in empty text space means you always have to be aware if it’s in autocomplete context or not before setting an indent.

aldanor|9 months ago

We have an internal ban policy on copilot for IP reasons and while I was... missing it initially, now just using neovim without any AI feels fine. Maybe I'll add an avante.nvim for a built-in chat box though.

owendarko|9 months ago

You could also use these AI coding features on a plug-and-play basis with an IDE extension.

For example, VS Code has Cline & Kilo Code (disclaimer: I help maintain Kilo).

Jetbrains has Junie, Zencoder, etc.

herdrick|9 months ago

The chat in what tool? Not Cursor nor Windsurf, it sounds like?

anshumankmr|9 months ago

It sometimes works really well, but I have at times been hampered by its autocomplete.

aaomidi|9 months ago

Honestly, the only files I like this turned on is unit tests.

unsupp0rted|9 months ago

Asking HN this is like asking which smartphone to use. You'll get suggestions for obscure Linux-based modular phones that weigh 6 kilos and lack a clock app or wifi. But they're better because they're open source or fully configurable or whatever. Or a smartphone that a fellow HNer created in his basement and plans to sell soon.

Cursor and Windsurf are both good, but do what most people do and use Cursor for a month to start with.

hackitup7|9 months ago

It's frightening how well you called this, if you scroll down the page literally exactly the dynamic that you mentioned is playing out in real time.

I use Cursor and I like it a lot.

mohsen1|9 months ago

haha so on point! In the HN world, backends are written in Rust with formal proofs and frontends are in pure JS and maybe Web Components. In the real world, however, a lot of people are using different tech.

jasongill|9 months ago

"The clock app isn't missing! You just have to cross-compile it from source and flash a custom firmware that allows loading it!"

notepad0x90|9 months ago

Surely, you're not the only one here that doesn't share the open source extremist views. HN has a diverse user base.

pram|9 months ago

"It can't make calls yet because we're waiting on a module that doesn't taint the kernel"

kdqed|9 months ago

[deleted]

takets|9 months ago

[deleted]