socketcluster | 22 days ago
If, however, your code foundations are good, highly consistent, and never allow hacks, then the AI will maintain that clean style and it becomes shockingly good; in this case, the prompting barely even matters. The code foundation is everything.
But I understand why a lot of people are still having a poor experience. Most codebases are bad. They work (within very rigid constraints, in very specific environments) but they're unmaintainable and very difficult to extend, requiring hacks on top of hacks. Each new feature essentially requires a minor or major refactoring, demanding more and more scattered code changes as everything is interdependent (tight coupling, low cohesion). Productivity grinds to a crawl and you need 100 engineers to do what previously could have been done with just 1. This is not a new effect. It's just much more obvious now with AI.
I've been saying this for years, but I think too few engineers had actually built complex projects on their own to understand this effect. There's a parallel with building architecture: you are constrained by the foundation of the building. If you designed the foundation for a regular single-storey house, you can't change your mind halfway through construction and build a 20-storey skyscraper. That said, if your foundation is good enough to support a 100-storey skyscraper, then you can build almost anything you want on top.
My perspective is if you want to empower people to vibe code, you need to give them really strong foundations to work on top of. There will still be limitations but they'll be able to go much further.
My experience is: the more planning and intelligence goes into the foundation, the less intelligence and planning is required for the actual construction.
mattgreenrocks|22 days ago
Asked it to spot-check a simple rate limiter I wrote in TS. Super basic algorithm: let at most one action through every 250ms, sleeping if necessary. It found bogus errors in my code 3 times because it failed to see that I was using a mutex to prevent reentrancy. This was about 12 lines of code in total.
My rubber duck debugging session was insightful only because I had to reason through the lack of understanding on its part and argue with it.
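For reference, the kind of limiter being described can be sketched in a few lines of TypeScript. This is not the commenter's code — names like `RateLimiter` and `run` are illustrative — but it shows the shape: actions are spaced at least `intervalMs` apart, and a promise-chain "mutex" serializes callers so reentrancy can't produce overlapping sleeps.

```typescript
const sleep = (ms: number) => new Promise<void>((res) => setTimeout(res, ms));

class RateLimiter {
  private last = 0; // timestamp of the last action that ran
  private chain: Promise<void> = Promise.resolve(); // serializes callers (the "mutex")

  constructor(private intervalMs: number) {}

  run<T>(action: () => Promise<T> | T): Promise<T> {
    const result = this.chain.then(async () => {
      const wait = this.last + this.intervalMs - Date.now();
      if (wait > 0) await sleep(wait); // sleep if the previous action was too recent
      this.last = Date.now();
      return action();
    });
    // Keep the chain alive even if an action throws.
    this.chain = result.then(() => undefined, () => undefined);
    return result;
  }
}
```

Concurrent `run()` calls queue behind one another on the chain, which is exactly the reentrancy guard the reviewer's model failed to see.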
sandos|22 days ago
But which codebase is perfect, really?
raw_anon_1111|22 days ago
I just did my first “AI native coding project”. Both because for now I haven’t run into any quotas using Codex CLI with my $20/month ChatGPT subscription and the company just gave everyone an $800/month Claude allowance.
Before I even started the implementation I:
1. Put the initial sales contract with the business requirements.
2. Notes I got from talking to sales
3. The transcript of the initial discovery calls
4. My design diagrams that were well labeled (cloud architecture and what each lambda does)
5. The transcript of the design review and my explanations and answering questions.
6. My ChatGPT assisted breakdown of the Epics/stories and tasks I had to do for the PMO
I then told ChatGPT to give a detailed breakdown of everything during the session as Markdown.
That was the start of my AGENTS.md file.
While working through everything task by task and having Codex/Claude code do the coding, I told it to update a separate md file with what it did and when I told it to do something differently and why.
Any developer coming in after me will have complete context of the project from the first git init and they and the agents will know the why behind every decision that was made.
Can you say that about any project that was done before GenAI?
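A decision log like the one described might look something like this — a hypothetical sketch, with invented dates and task names, just to show the shape of an entry:

```markdown
## 2025-01-15 — Task 3.2: report export endpoint
- What the agent did: added CSV export to the reports lambda.
- What I changed: told it to stream rows instead of buffering the full result set.
- Why: some tenants have reports too large to hold in lambda memory.
```

Each entry pairs the agent's action with the human correction and the reason, which is the "why behind every decision" the commenter is after.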
bonesss|22 days ago
… a project with a decomposition of top level tasks, minutes and meeting notes, a transcript, initial diagrams, a bunch of loose transcripts on soon to be outdated assumptions and design, and then a soon-to-be-outdated living and constantly modified AGENT file that will be to some extent added to some context and to some extent ignored and to some extent lie about whether it was consulted (and then to some extent lie more about if it was then followed)? Hard yes.
I have absolutely seen far better initial project setups that are more complete, more focused, more holistically captured, and more utilitarian for the forthcoming evolution of design and system.
Lots of places have comparable design foundations as mandatory, and in some well-worn government IT processes I’m aware of the point being described is a couple man-months or man-years of actual specification away from initial approval for development.
Anyone using issue tracking will have better, searchable, tracking of “why”, and plenty of orgs mandate that from day 1. Those orgs likely are tracking contracts separately too — that kind of information is a bit special to have in a git repo that may have a long exciting life of sharing.
Subversion, JIRA, and basic CRM setups all predate GPT's public launch.
apsurd|22 days ago
Tbh, I'm not exactly knocking it; it makes sense that leads are responsible for the architecture. I just worry that those leads having 100x influence is not by default a good thing.
dijksterhuis|22 days ago
yes. the linux kernel and its extensive mailing lists come to mind. in fact, any decent project which was/is built in a remote-only scenario tends to have extensive documentation along these lines; something like gitlab comes to mind there.
personally i've included design documents with extensive notes, contracts, meeting summaries etc etc in our docs area / repo hosting at $PREVIOUS_COMPANY. only thing from your list we didn't have was transcripts because they're often less useful than a summary of "this is what we actually decided and why". edit -- there were some video/meeting audio recordings we kept around though. at least one was a tutoring session i did.
maybe this is the first time you've felt able to do something like this in a short amount of time because of these GenAI tools? i don't know your story. but i was doing a lot of this by hand before GenAI. it took time, energy and effort to do. but your project is definitely not the first to have this level of detailed contextual information associated with it. i will, however, concede that these tools can make it easier/faster to get there.
0000000000100|22 days ago
After rearchitecting the foundations (dumping bootstrap, building easy-to-use form fields, fixing hardcoded role references 1,2,3…, consolidating typescript types, etc.) it makes much better choices without needing specific guidance.
Codex/Claude Code won’t solve all your problems though. You really need to take some time to understand the codebase and fix the core abstractions before you set it loose. Otherwise, it just stacks garbage on garbage and gets stuck patching, and won’t actually fix the core issues unless instructed.
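The "hardcoded role references 1,2,3" cleanup mentioned above might look roughly like this — a hypothetical TypeScript sketch, not the commenter's actual code — where scattered magic numbers are replaced by one typed source of truth an agent can discover and reuse:

```typescript
// One typed source of truth instead of `user.role === 2` scattered everywhere.
const Role = {
  Admin: 1,
  Editor: 2,
  Viewer: 3,
} as const;

type Role = (typeof Role)[keyof typeof Role];

interface User {
  name: string;
  role: Role;
}

// Before: `if (user.role === 2) ...` — after, the intent is self-documenting:
function canEdit(user: User): boolean {
  return user.role === Role.Admin || user.role === Role.Editor;
}
```

Once a constant like this exists, an agent tends to reach for `Role.Editor` rather than inventing a new magic number — which is the "better choices without specific guidance" effect being described.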
adithyassekhar|22 days ago
No project will have this mythical base, unless it's only you working on it, you're the only client, and it's so rigid in its scope that it's frankly useless. Over time the needs change; there's no sticking to the plan. Often it's a change that requires rethinking a major part. What we loathe as tight coupling was just efficient code under the original requirements. Then it becomes a time/opportunity cost vs quality loss comparison. Time and opportunity always win. Why?
Because we live in a world run by humans, who are messy and never stick to the plan. Our real-world systems (bureaucracy, government process, the list goes on) are never fully automated and always leave gaps for humans to intervene. There's always a special case, an exception.
Perfectly architected code vs code that does the thing have no real world difference. Long-term maintainability? Your code doesn't run in a vacuum; it depends on other things, and its output is depended on by other things. Change is real, entropy is real. Even you yourself, you perfect programmer who writes perfect code, will succumb eventually and think back on all this with regret. Because you yourself had to choose between time/opportunity vs your ideals, and you chose wrong.
Thanks for reading my blog-in-hn comment.
mattgreenrocks|22 days ago
It’s fascinating watching the sudden resurgence of interest in software architecture after people are finding it helps LLMs move quickly. It has been similarly beneficial for humans as well. It’s not rocket science. It got maligned because it couldn’t be reduced to an npm package/discrete process that anyone could follow.
mexicocitinluez|21 days ago
This is naive. I've been building an EMR in the healthcare space for 5 years now as part of an actual provider. We've incrementally released small chunks when they're ready. The codebase I've built is the most consistent codebase I've ever been a part of.
It's bureaucracy AND government process AND constantly changing priorities and regulations and requirements from insurance providers all wrapped up into one. And as such, we have to take our time.
Go and tell the clinicians currently using it that it's not useful. I'm sure they won't agree.
> Perfectly architected code vs code that does the thing have no real world difference
This just flat out isn't true. Just because YOU haven't experienced it (and I think you're quite frankly telling on yourself with this) doesn't mean it doesn't exist at all.
> Because you yourself had to choose between time/opportunity vs your ideals and you chose wrong.
Like I said above, you're telling on yourself. I'm not saying I've never been in this situation, but I am saying that it's not the only way to build software.
nananana9|22 days ago
Given how adamant some people I respect a lot are about how good these models are, I was frankly shocked to see SOTA models do transformations like
When I point this out, it extracts said 20 lines into a function that takes the entire context used in the block as arguments. It also tends to add comments that don't document anything, but rather just describe the latest change it made to the code, and to top it off it has the audacity to tell me "The code is much cleaner now. Happy building! (rocketship emoji)"
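The two anti-patterns being described read roughly like this — a hypothetical reconstruction in TypeScript, since the original examples weren't included: an "extraction" that just forwards the entire surrounding context as parameters, decorated with a comment that describes the edit rather than the code.

```typescript
function processOrder(
  order: { id: string; total: number },
  taxRate: number,
  discounts: number[],
  log: (msg: string) => void,
): number {
  // Changed to use the new helper function  <-- describes the edit, not the code
  return computeTotalWithTaxAndDiscountsAndLogging(order, taxRate, discounts, log);
}

// "Extracted" function that takes every variable the block happened to touch:
function computeTotalWithTaxAndDiscountsAndLogging(
  order: { id: string; total: number },
  taxRate: number,
  discounts: number[],
  log: (msg: string) => void,
): number {
  const discounted = discounts.reduce((t, d) => t - d, order.total);
  log(`order ${order.id}`);
  return discounted * (1 + taxRate);
}
```

The code still computes the right total; the complaint is that nothing was actually abstracted — the coupling just moved behind a function signature.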
0000000000100|22 days ago
E.g. pumping out a ton of logic to convert one data structure to another. Like a poorly structured form with random form control names that don't match the DTO. Or single properties for each form control which are then individually plugged into the request DTO.
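The kind of boilerplate being described might look like this — a hypothetical TypeScript sketch with invented field names — where mismatched form control names force a field-by-field translation layer:

```typescript
interface SignupForm {
  usr_nm: string;      // poorly named form controls...
  mailAddr: string;
  agree_tos: boolean;
}

interface SignupRequestDto {
  username: string;    // ...each individually plugged into the request DTO
  email: string;
  acceptedTerms: boolean;
}

function toSignupDto(form: SignupForm): SignupRequestDto {
  return {
    username: form.usr_nm,
    email: form.mailAddr,
    acceptedTerms: form.agree_tos,
  };
}
```

If the form controls had simply been named after the DTO fields, this whole layer would disappear — which is the foundation-level fix that stops an agent from generating it in the first place.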
Qworg|22 days ago
A poor foundation is a design problem. Throw it away and start again.
echelon|22 days ago
I am beginning to build a high degree of trust in the code Claude emits. I'm having to step in with corrections less and less, and it's single-shotting entire modules of 500-1k LOC, multiple files touched, without any trouble.
It can understand how frontend API translates to middleware, internal API service calls, and database queries (with a high degree of schema understanding, including joins).
(This is in a Rust/Actix/Sqlx/Typescript/nx monorepo, fwiw.)
jim180|22 days ago
Right now I'm building an NNTP client for macOS (with AppKit), because why not, and initially I had to very carefully plan and prompt what the AI has to do, otherwise it would go insane (integration tests are a must).
Right now I have read-only mode ready and it's very easy to build stuff on top of it.
Also, I had to provide a lot of SKILLS to GPT5.3