khazhoux | 13 days ago
* After a long vibe-coding session, I have to spend an inordinate amount of time cleaning up what Cursor generated. Any given page of code will be just fine on its own, but the overall design (unless I'm extremely specific in what I tell Cursor to do) will invariably be a mess of scattered control flow, grafted-on logic, and just overall poor design. This is despite me using Plan mode extensively, instructing it not to create duplicate code, and so on.
* I keep seeing metrics of 10s and 100s of thousands of LOC (sometimes even millions), without the authors ever recognizing that a gigantic LOC count is probably indicative of terrible, heisenbuggy code. I'd find it much more convincing if this post said it generated a 3K-LOC SQLite implementation, not a 19K one.
Wondering if I'm just lagging in my prompting skills or what. To be clear, I'm very bullish on AI coding, but I do feel people are getting just a bit ahead of themselves in how they report success.
TheWas7ed | 13 days ago
For the most part I use either Opus or Sonnet, but for planning I sometimes switch to ChatGPT, since I think Claude is too blunt and does not ask enough questions. I also have local setups with Ollama and have tried some Kimi models for personal projects. The results are the same across all of them, though again the Claude models are slightly better.
viraptor | 13 days ago
What model? Cursor doesn't generate anything itself, and there's a huge difference between, for example, gpt5.3-codex and composer 1.
fatherzine | 13 days ago
cf. SV conventional wisdom: he who ships first wins the market
in fairness, there is real value in iteration speed. i'm not holding my breath for human-comprehensible corporate code bases going forward. a slew of critical foundational projects, mostly run by the big names, may still care about what used to be called "good engineering practices".