dugmartin|29 days ago
I spend hours on a spec, working with Claude Code to first generate and iterate on all the requirements, then going over the requirements with self-reviews, first in Claude using Opus 4.5 and then in Copilot using GPT-5.2. The self-reviews are prompts to review the spec using all the roles and perspectives the model thinks are appropriate. This self-review process is critical and really polishes the requirements (I normally run 7-8 rounds of self-review).
Once the requirements are polished and any questions answered by stakeholders, I use Claude Code again to create an extremely detailed and phased implementation plan with full code, again all in the spec (using a new file if the requirements doc is so large it fills the context window). The implementation plan then goes through the same multi-round self-review using two models to polish (again, 7 or 8 rounds), finalized with a review by me.
The result? I can then tell Claude Code to implement the plan and it is usually done in 20 minutes. I've delivered major features using this process with zero changes in acceptance testing.
What is funny is that everything old is new again. When I started in industry I worked in defense contracting, working on the project to build the "black box" for the F-22. When I joined the team they were already a year into the spec writing process with zero code produced and they had (iirc) another year on the schedule for the spec. At my third job I found a literal shelf containing multiple binders that laid out the spec for a mainframe hosted publishing application written in the 1970s.
Looking back, I've come to realize the agile movement, which was a backlash against this kind of heavy waterfall process I experienced at the start of my career, was basically an attempt to "vibe code" the overall system design. At least for me, AI-assisted mini-waterfall ("augmented cascade"?) seems like a path back to producing better-quality software that doesn't suffer from the agile "oh, I didn't think of that".
manmal|29 days ago
I'm not saying I don't believe your report - maybe you are working in a domain where everything is super deterministic. Anyway, I don't.
wenc|29 days ago
Writing a spec is akin to "working backwards" (or future backwards thinking, if you like) -- this is the outcome I want, how do I get there?
The process of writing the spec actually exposes the edge cases I didn't think of. It's very much in the same vein as "writing as a tool of thought". Just getting your thoughts and ideas onto a text file can be a powerful thing. Opus 4.5 is amazing at pointing out the blind spots and inconsistencies in a spec. The spec generator that I use also does some reasoning checks and adds property-based test generation (Python Hypothesis -- similar to Haskell's Quickcheck), which anchors the generated code to reality.
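For readers unfamiliar with property-based testing: instead of a few hand-picked cases, you assert invariants over many generated inputs. A minimal self-contained sketch of the idea, using stdlib `random` in place of Hypothesis's `@given` decorator (the `normalize_discount` function here is a made-up example, not anything from the actual spec generator):

```python
import random

def normalize_discount(pct: float) -> float:
    """Clamp a discount percentage into [0, 100] (hypothetical spec rule)."""
    return max(0.0, min(100.0, pct))

def check_properties(trials: int = 1000) -> None:
    # With Hypothesis this would be @given(st.floats(...)); here we
    # generate random inputs by hand to keep the sketch dependency-free.
    for _ in range(trials):
        pct = random.uniform(-1e6, 1e6)
        result = normalize_discount(pct)
        # Property 1: the output always lands in the allowed range.
        assert 0.0 <= result <= 100.0
        # Property 2: the rule is idempotent (applying it twice is a no-op).
        assert normalize_discount(result) == result

check_properties()
```

With Hypothesis proper, the random loop is replaced by `@given(st.floats(...))`, which also shrinks failing inputs to a minimal counterexample, something the hand-rolled version above cannot do.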
Also, I took to heart Grant Slatton's "Write everything twice" [1] heuristic -- write your code once, solve the problem, then stash it in a branch and write the code all over again.
> Slatton: A piece of advice I've given junior engineers is to write everything twice. Solve the problem. Stash your code onto a branch. Then write all the code again. I discovered this method by accident after the laptop containing a few days of work died. Rewriting the solution only took 25% the time as the initial implementation, and the result was much better. So you get maybe 2x higher quality code for 1.25x the time — this trade is usually a good one to make on projects you'll have to maintain for a long time.
This is effective because initial mental models of a new problem are usually wrong.
With a spec, I can get a version 1 out quickly and (mostly) correctly, poke around, and then see what I'm missing. Need a new feature? I tell Opus to first update the spec and then code it.
And here's the thing -- if you don't like version 1 of your code, throw it away but keep the spec (those are your learnings and insights). Then generate a version 2 free of any sunk-cost bias, which, as humans, we're terrible at resisting.
Spec-driven development lets you "write everything twice" (throwaway prototypes) faster, which improves the quality of your insights into the actual problem. I find this technique lets me 2x the quality of my code, through sheer mental model updating.
And this applies not just to coding, but most knowledge work, including certain kinds of scientific research (s/code/LaTeX/).
[1] https://grantslatton.com/software-pathfinding
airbreather|28 days ago
Having part of my background in Functional Safety, I have seen it done many times, and it can most definitely be done.
It's just that it can't be done in the sort of time frames that people who don't specify before coding are used to.
But if you can't afford to move fast and break things, because it is an airplane, or a train signaling system, or a complicated elevator system with multiple cars in the same shaft, then you generally write no code until you know exactly what you want the code to do (and, more importantly, not do).
nl|29 days ago
They survive by being modified and I don't think that invalidates the process that got them in front of people faster than would otherwise have been possible.
This isn't a defence of waterfall though. It's really about increasing the pace of agile and the size of the loop that is possible.
AdamN|29 days ago
Agile was really pushing to make sure companies could get software live before they died (number 1) and to remedy the anti-pattern that appeared with number 2, where non-technical business people would write the (half-assed) spec and then technical people would be expected to do the monkey work of implementing it.
aglavine|29 days ago
Agile's core is the feedback loop. I can't believe people still don't get it. Feedback from reality is always faster than guessing in the air.
Waterfall is never great. The only time you need something other than Agile is when lives are at stake; there you need formal specifications and rigorous testing.
SDD allows better output than traditional programming. It is similar to waterfall in the sense that the model helps you to write design docs in hours instead of days and take more into account as a result. But the feedback loop is there and it is still the key part in the process.
user3939382|29 days ago
Mostly I've seen agile as: let's do the same thing 3x that we could have done once if we'd spent time on specs. The key phrase here is "requirements analysis", and if you're not good at it, either your software sucks or you're going to iterate needlessly and waste massive time, including on bad architecture. You don't iterate the foundation of a house.
I see scenarios where Agile makes sense (scoped, in-house software, skunk works), but just like cloud, JWTs, and several other things, making it the default is often a huge waste of $ on problems you/most don't have.
Talk to the stakeholders. Write the specs. Analyze. Then build. “Waterfall” became like a dirty word. Just because megacorps flubbed it doesn’t mean you switch to flying blind.
bitwize|29 days ago
With the rise of AI, maybe programmers will be put back in their rightful place, as contributors of the final small piece of the development process: a translation from business terms to the language of the computer. Programming as a profession should, by all rights, be obsolete. We should be able to express the solution directly in business terms and have the translation take place automatically. Maybe that day will be here soon.
EliRivers|28 days ago
The key was this: "the requirements are polished and any questions answered by stakeholders"
We simply knew precisely what we were meant to be creating before we started creating it. I wonder to what degree the magic of "spec-driven development", as you call it, is just that, and using Claude Code or some other similar tool is actually just the expression of being forced to understand and express clearly what you actually want to create (compared to the much more prevalent model of just making things in the general direction and seeing how it goes).
yobbo|29 days ago
If you already know the requirements, it doesn't need to come into play.
AnimalMuppet|29 days ago
If you already know the requirements, and they aren't going to change for the duration of the project, then you don't need agile.
And if you have the time. I recently was on a project with a compressed timeline. The general requirements were known, but not in perfect detail. We began implementation anyway, because the schedule did not permit a fully phased waterfall. We had to adjust somewhat to things not being as we expected, but only a little - say, 10%. We got our last change of requirements 3 or 4 weeks before the completion of implementation. The key to making this work was regular, detailed, technical conversations between the customer's engineers, the requirements writers, and our implementers.
WillAdams|29 days ago
https://www.goodreads.com/book/show/15182720-design-by-contr...
but using a Large-Language-Model rather than a subordinate team?
c.f., https://se.inf.ethz.ch/~meyer/publications/old/dbc_chapter.p...
lII1lIlI11ll|29 days ago
No matter how much I tell it that it is a "professional experienced 10x developer versed in modern C++, a second coming of Stroustrup" in per-project or global config files, it still keeps spewing the same crap, both big (manual memory management instead of RAII here and there, initializing fields in the ctor body instead of the initializer list, manual init/cleanup methods in classes instead of a proper ctor/dtor design that keeps objects in a consistent state, a bunch of other anti-patterns, etc.) and small (checking for nullptr before passing the pointer to delete/free, manually instantiating objects as an argument to the shared_ptr ctor instead of using make_shared, endlessly casting stuff back and forth instead of designing data types properly, etc.).
Which makes sense, I guess, because that is unfortunately what average C++ code on GitHub looks like, and that is what all those models were trained on. But I keep feeling like my job is turning into performing endless code review for a not-very-bright junior developer who just refuses to learn...
wenc|29 days ago
On the other hand, LLMs are great at Go because Go was designed for average engineers at scale, and LLMs behave like fast average engineers. Go as a language was designed to support minimal cleverness (there are only so many ways to do things, and abstractions are constrained). This kind of uniformity is catnip for LLM training.
jll29|29 days ago
We know old-style classic waterfall lacks flexibility and agile lacks planning, but I don't see a reason not to switch back and forth multiple times within the same project.
orochimaaru|29 days ago
With AI, the least-effort part is the specs, so that's the "greatest thing to do" again.
chrisweekly|29 days ago
using a new file IF the requirements doc is so large IT fills the context window