ai_assisted_dev | 9 months ago
I could probably go much lower and find a model that is dirt cheap but takes a while, but right now the cutting edge (for my own work) is Claude 4 (non-max / non-thinking). To me it feels like Cursor must be hemorrhaging money. What works for me is that I can justify those costs working on my own services, which have some customers, and each added feature gives me an almost immediate return on investment. But it feels like the rates Cursor currently charges are not rooted in reality.
Quickly checking Cursor for the past 4-day period:
Requests: 1049
Lines of Agent Edits: 301k
Tabs accepted: 84
Personally, I have very few complaints or issues with Cursor, only a growing wish list of features and functionality. For example, how cool would it be if asynchronous requests worked? Rather than waiting for a single request to complete across 10 files, why can't it work on those 10 files in parallel? Right now so much time is spent waiting for a request to complete (while I work on another part of the app in a different workspace with Cursor).
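A rough sketch of what fanning out one request per file could look like, using Python's asyncio. `run_agent_request` is an invented placeholder standing in for whatever call the editor makes per file; this is not Cursor's actual API:

```python
import asyncio

# Hypothetical helper: stands in for one agent request against a single file.
async def run_agent_request(filename: str) -> str:
    await asyncio.sleep(0.01)  # simulate a slow model round-trip
    return f"edited {filename}"

async def edit_files_in_parallel(filenames: list[str]) -> list[str]:
    # Fan out one request per file and await them all together,
    # instead of processing the files one after another.
    return await asyncio.gather(*(run_agent_request(f) for f in filenames))

if __name__ == "__main__":
    files = [f"module_{i}.py" for i in range(10)]
    results = asyncio.run(edit_files_in_parallel(files))
    print(len(results))  # all 10 files handled concurrently
```

With a real model backend the wall-clock time would be close to the slowest single request rather than the sum of all ten.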
shafyy | 9 months ago
They don't make any money; they are burning VC money. Anthropic and OpenAI are probably also not making money, but Cursor is making "more no money" than the others.
bravesoul2 | 9 months ago
It's like a horse race.
But yeah, enjoy the subsidies. It's like the cheap Ubers of yesteryear.
echelon | 9 months ago
Switching costs are zero and software folks are keen to try new things.
ukuina | 9 months ago
You can open up to three parallel chat tabs by pressing Cmd+T
https://docs.cursor.com/kbd
Each chat tab is a full Agent by itself!
cess11 | 9 months ago
What does this measurement mean?
1049 / (4 × 8) ≈ 33 requests per hour, i.e. roughly one every two minutes on average. Doesn't look like much waiting to me.
usrbinbash | 9 months ago
The problem with generative AI workloads: the costs rise linearly with the number of requests, because every query has to be computed.
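The linear-scaling point can be sketched in a few lines; the price here is invented for illustration and is not Cursor's or any provider's actual rate:

```python
# Assumed average inference cost per request, in dollars (made-up figure).
PRICE_PER_REQUEST = 0.04

def inference_cost(num_requests: int) -> float:
    # Every query must be computed from scratch, so cost scales
    # linearly with request volume: there is no economy of scale
    # on the compute itself.
    return num_requests * PRICE_PER_REQUEST

# 1049 requests (the 4-day figure quoted upthread) at the assumed rate:
print(round(inference_cost(1049), 2))
```

Double the requests and the compute bill doubles, which is why a flat subscription over heavy usage tends to lose money.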
wg0 | 9 months ago
Both are genuine questions.