item 47198834


virgilp | 1 day ago

Nothing of what you write here matches my experience with AI.

Specification is worth writing (and spending a lot more time on than implementation) because it's the part that you can still control, fully read, understand etc. Once it gets into the code, reviewing it will be a lot harder, and if you insist on reviewing everything it'll slow things down to your speed.

> If the cost of writing code is approaching zero, there's no point investing resources to perfect a system in one shot.

The AI won't get the perfect system in one shot, far from it! And especially not from sloppy initial requirements that leave a lot of edge (or not-so-edge) cases unaddressed. But if you have good requirements to start with, you have a chance to correct the AI and keep it on track; you have something to go back to and ask another AI, "is this implementation conforming to the spec, or did it miss things?"

> five different versions of the thing you're building and simply pick the best one.

Problem is, what if the best one is still not good enough? Then what? You do 50? They might all be bad. You need a way to iterate to convergence.


michaelbrave | 18 hours ago

Same, I've sorta ended up converging on: make a rough plan, get second and third opinions on it from various AIs, decide and make choices while shaping the plan, and turn that into a detailed spec sheet. Then follow the 'How to Design Programs' method, which is mostly writing documentation first, then expected outcomes, then tests, then the functions, then testing the flow of the pipeline.

In practice this looks like starting with Claude to write the documentation and expectations and create the scaffolding, then having Gemini write the tests and the code, then having Codex try to run the pipeline and fix anything it finds broken along the way. I've found this to work fairly well. It's looser than waterfall, but waterfall-ish, and also sort of TDD-ish: it accepts that there will be failures and things to fix, but it also knows the overall strategy and flow of how things will work before we start.
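That docs-first ordering can be sketched in a few lines; this is a minimal illustration of the idea, not anything from the thread, and the function and names are hypothetical:

```python
# Hypothetical example of the docs -> expected outcome -> test -> code order.
# Step 1: write the documentation (the docstring contract) first.
# Step 2: state the expected outcome inside it.
# Step 3: encode that expectation as a test.
# Step 4: only then write the function body to satisfy it.

def normalize_scores(scores):
    """Scale a list of positive numbers so the largest becomes 1.0.

    Expected outcome: normalize_scores([2, 4, 8]) -> [0.25, 0.5, 1.0]
    """
    peak = max(scores)
    return [s / peak for s in scores]

# The test restates the documented expectation before the code is trusted.
assert normalize_scores([2, 4, 8]) == [0.25, 0.5, 1.0]
```

The point is only the ordering: the contract and its expected outcome exist before the implementation, so each model in the relay has something fixed to check its work against.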

manmal | 1 day ago

This. Waterfall never worked, for a reason. Humans and agents both need to develop a first draft, then re-evaluate with the lessons learned and the structure that has evolved. It’s very, very time-consuming to plan a complex, working system up front. NASA has done it, for the moon landing. But we don’t have those resources, so we plan, build, evaluate, and repeat.

zozbot234 | 1 day ago

That "first draft" still has to start with a spec. Your only real choice is whether the spec is an actual part of project documentation with a human in the loop, or it's improvised on the spot within the AI's hidden thinking tokens. One of these choices is preferable to the other.

ErrantX | 1 day ago

So, roll back and try again with the insight.

AI makes it cheap to implement complex first drafts and iterations.

I'm building a CRM system for my business; first time it took about 2 weeks to get a working prototype. V4 from scratch took about 5 hours.

Towaway69 | 18 hours ago

> NASA has done it, for the moon landing.

Which one? The one in the 1960s, or the one that has just been delayed, again?

I think you can just as well develop a first spec and iterate on it, rather than coding up a solution. What's important is exploration and iteration, in this specific case.

virgil_disgr4ce | 22 hours ago

> Waterfall never worked for a reason

We're going to need some evidence for this claim. I feel like nearly 70 years of NASA has something to say about this.

osigurdson | 11 hours ago

"Waterfall" was primarily a strawman that the agile salesmen made up. Sure, it existed in some form, but it was not widely practiced.

__alexs | 13 hours ago

You claim to disagree with OP, but you seem to be describing basically the same core loop of planning and execution.

Doing OODA faster has always been the key thing to creating high quality outcomes.

virgilp | 6 hours ago

No, OP literally claims "you can't spec out something you have no clue how to build"; I claim that on the contrary, you absolutely can - you don't need to know "how to build" but you need to clarify what you want to build. You can't ask AI to build something (and actually obtain a good "something") until you can say exactly what the said "something" is.

You iterate, yes: sometimes because the AI gets it wrong, and sometimes because you got it wrong (or didn't say exactly what you wanted, and the AI assumed you wanted something else). But the less specific and clear you are in your requirements, the less likely it is you'll actually get what you want. Being unspecific in the requirements only really works if you want something that lots of people are building or have built before, because that allows the AI to make correct assumptions about what to build.

nojito | 22 hours ago

> The AI won't get the perfect system in one shot, far from it! And especially not from sloppy initial requirements that leave a lot of edge (or not-so-edge) cases unaddressed. But if you have good requirements to start with, you have a chance to correct the AI and keep it on track; you have something to go back to and ask another AI, "is this implementation conforming to the spec, or did it miss things?"

This is an antiquated way of thinking. If you ramp up the number of agents you're using, the auto-correcting and reviewing behavior kicks in, which means much less human intervention until the final code review.

galaxyLogic | 19 hours ago

Yes, but what about the "spec-review"? Isn't that even more important? Is the system doing what we (and its users) need it to be doing?