A lot of more senior coders who actively try vibe coding a greenfield project find that it does actually work, but only for the first ~10 kloc. After that the AI, no matter how well you try to prompt it, will start to accidentally destroy existing features, add unnecessarily convoluted logic, leave behind dead code, add random shims "for backwards compatibility", avoid doing the correct thing because "it is too big of a refactor", and fail to understand that the dev database is not the prod database, so it avoids migrations. And so forth.

I've got 10+ years of coding experience and I am an AI advocate, but not of vibe coding. AI is a great tool for the boring bits: initializing files, helping figure out various approaches, acting as a first-pass code reviewer, helping with configuration. Those things all work well.
But full-on replacing coders? It's not there yet. Will require an order of magnitude more improvement.
throwaw12|22 days ago
I am using them in projects with >100kloc, this is not my experience.
At the moment I am babysitting them at any kloc, but I am sure they will get better and better.
roywiggins|22 days ago
I am sure there are ways to get around this sort of wall, but I do think it's currently a thing.
yencabulator|22 days ago
> Somehow 90% of these posts don't actually link to the amazing projects that their author is supposedly building with AI.
You are in the 90%.
christophilus|22 days ago
That said, I do catch it doing some of the stuff the OP mentioned, particularly leaving "backwards compatibility" stuff in place. But really, I've experienced all of the stuff he mentions whenever I've given it an overly broad mandate.
turnsout|22 days ago
You also need a reasonably modular architecture which isn't incredibly interdependent, because that's hard to reason about, even for humans.
You also need lots and lots (and LOTS) of unit tests to prevent regressions.
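The "lots of unit tests" point can be as lightweight as pinning current behavior before letting the model touch a module. A minimal pytest-style sketch (the `slugify` function and its tests are hypothetical, standing in for any small pure unit of a codebase):

```python
# Hypothetical regression tests that pin existing behavior before an
# AI-assisted refactor; if the model breaks the contract, these fail.

def slugify(title: str) -> str:
    """Turn a post title into a URL slug (the unit under test)."""
    return "-".join(title.lower().split())

def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_whitespace():
    assert slugify("  Hello   World  ") == "hello-world"

test_slugify_basic()
test_slugify_collapses_whitespace()
```

The tests act as a cheap contract: the model can rewrite `slugify` however it likes, as long as the pinned behavior survives.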
joquarky|21 days ago
Then let me introduce you to a useful concept:
https://en.wikipedia.org/wiki/Separation_of_concerns
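For readers who haven't met the term, a minimal before/after sketch (all names hypothetical): one function that mixes parsing, validation, and formatting is hard for a human or an LLM to change safely, while split concerns can each be edited and tested alone.

```python
# Hypothetical before/after illustrating separation of concerns.

# Before: one function mixes parsing, validation, and formatting.
def report_mixed(raw: str) -> str:
    n = int(raw.strip())
    if n < 0:
        raise ValueError("negative")
    return f"value={n}"

# After: each concern is its own small, independently testable function.
def parse(raw: str) -> int:
    return int(raw.strip())

def validate(n: int) -> int:
    if n < 0:
        raise ValueError("negative")
    return n

def fmt(n: int) -> str:
    return f"value={n}"

def report(raw: str) -> str:
    return fmt(validate(parse(raw)))
```

With the concerns separated, an assistant asked to change the output format only needs `fmt` in context and cannot accidentally break validation.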
qingcharles|21 days ago
I've learned with LLM coded apps to break stuff into very small manageable chunks so they can work on the tiny piece and not get screwed by big context.
For the most part, this actually produces a cleaner codebase.
AlexCoventry|22 days ago
Surely it depends on the design. If you have 10 10kloc modular modules with good abstractions, and then a 10k shell gluing them together, you could build much bigger things, no?
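The design above can be sketched as: each module sits behind a narrow interface, and a thin glue shell composes them, so an LLM (or a human) only needs one module plus the interfaces in context. A hedged illustration, with all module and class names invented for the example:

```python
# Illustrative sketch of "modular modules plus a glue shell":
# small modules behind narrow Protocol interfaces, composed by a
# thin top-level function. An assistant editing one module only
# needs that module and the Protocols in context.
from typing import Protocol

class UserStore(Protocol):
    def get_email(self, user_id: int) -> str: ...

class Mailer(Protocol):
    def send(self, to: str, body: str) -> None: ...

class InMemoryUserStore:
    """One 'module': user lookup, behind the UserStore interface."""
    def __init__(self) -> None:
        self._emails = {1: "a@example.com"}
    def get_email(self, user_id: int) -> str:
        return self._emails[user_id]

class RecordingMailer:
    """Another 'module': delivery, behind the Mailer interface."""
    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []
    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))

def notify(users: UserStore, mailer: Mailer, user_id: int, body: str) -> None:
    """The glue shell: composes modules via their interfaces only."""
    mailer.send(users.get_email(user_id), body)

store, mailer = InMemoryUserStore(), RecordingMailer()
notify(store, mailer, 1, "hi")
```

Because `notify` depends only on the Protocols, any 10 kloc module can be swapped or rewritten without the glue layer (or the other nine modules) entering the context window.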
frank_nitti|22 days ago
If the person who is liable for the system behavior cannot read/write code (as “all coders have been replaced”), does Anthropic et al become responsible for damages to end users for systems its tools/models build? I assume not.
How do you reconcile this? We have tools that help engineers design and build bridges, but I still wouldn't want to drive across a bridge stamped "autonomously generated; may contain errors, use at own risk" because all human structural engineering experts have been replaced.
After asking this question many times in similar threads, I've received no substantial response beyond "something" will probably resolve this, or maybe AI will figure it out.
alpineman|23 days ago
If you spend a couple of years with an LLM really watching and understanding what it’s doing and learning from mistakes, then you can get up the ladder very quickly.
Nextgrid|23 days ago
A "basic" understanding in critical domains is extremely dangerous and an LLM will often give you a false sense of security that things are going fine while overlooking potential massive security issues.
spprashant|23 days ago
I don't feel like most providers keep a model available for more than two years; GPT-4o was deprecated after about 1.5 years. Are we expecting coding models to stay stable over longer time horizons?
dickersnoodle|22 days ago