_pdp_|2 days ago
My second point is that this approach is fundamentally wrong for AI-first development. If the cost of writing code is approaching zero, there's no point investing resources to perfect a system in one shot. What matters more is how fast you can explore the edges. You can now spin up five agents to implement five different versions of the thing you're building and simply pick the best one.
In our shop, we have hundreds of agents working on various problems at any given time. Most of the code gets discarded. What we accept to merge are the good parts.
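The pick-the-best-of-N workflow described above could be sketched roughly as follows. This is a minimal sketch, not the commenter's actual setup: `generate_candidate` and `score` are hypothetical stand-ins for one agent run and for a review/test harness.

```python
import concurrent.futures
import random

def generate_candidate(seed: int) -> str:
    """Hypothetical stand-in for one agent run producing an implementation."""
    rng = random.Random(seed)  # per-call RNG, safe to run concurrently
    return f"candidate-{seed}-quality-{rng.random():.2f}"

def score(candidate: str) -> float:
    """Hypothetical stand-in for tests/review scoring a candidate."""
    return float(candidate.rsplit("-", 1)[1])

def best_of_n(n: int) -> str:
    # Run n independent "agents" in parallel and keep only the
    # top-scoring result; everything else is discarded, as described.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        candidates = list(pool.map(generate_candidate, range(n)))
    return max(candidates, key=score)
```

The whole approach hinges on `score` being cheap and trustworthy, which is exactly what the replies below question.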
virgilp|2 days ago
The specification is worth writing (and spending a lot more time on than the implementation) because it's the part you can still fully control, read, and understand. Once it gets into code, reviewing it will be much harder, and if you insist on reviewing everything, it'll slow things down to your speed.
> If the cost of writing code is approaching zero, there's no point investing resources to perfect a system in one shot.
The AI won't get the perfect system in one shot, far from it! And especially not from sloppy initial requirements that leave a lot of edge (or not-so-edge) cases unaddressed. But if you have a good requirement to start with, you have a chance to correct the AI and keep it on track; you have something to go back to and ask another AI, "is this implementation conforming to the spec, or did it miss things?"
> five different versions of the thing you're building and simply pick the best one.
Problem is, what if the best one is still not good enough? Then what? Do you run 50? They might all be bad. You need a way to iterate to convergence.
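A minimal shape for such an iterate-to-convergence loop might look like this; `generate` and `evaluate` are hypothetical callbacks (an agent call seeded with feedback, and a review harness), not any real framework:

```python
def iterate_to_convergence(generate, evaluate, max_rounds=50, threshold=0.9):
    """Generate-and-test loop: regenerate with feedback until the best
    candidate clears the bar or the budget runs out (illustrative sketch)."""
    feedback = None
    best, best_score = None, float("-inf")
    for _ in range(max_rounds):
        candidate = generate(feedback)         # e.g. agent call, seeded with feedback
        score, feedback = evaluate(candidate)  # e.g. tests + spec-conformance review
        if score > best_score:
            best, best_score = candidate, score
        if best_score >= threshold:
            break  # converged: good enough, stop spending
    return best, best_score
```

The point stands either way: without an `evaluate` that produces a meaningful score and actionable feedback, adding more rounds (or more agents) just produces more candidates, not convergence.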
__alexs|1 day ago
Doing OODA (observe, orient, decide, act) faster has always been the key to creating high-quality outcomes.
nojito|2 days ago
This is an antiquated way of thinking. If you ramp up the number of agents you're using, the auto-correcting and reviewing behavior kicks in, which means much less human intervention until the final code review.
theptip|2 days ago
If you are vibe-coding, this approach is definitely going to kill your buzz and lose all the rapid-iteration benefits.
But if you are working in an existing large system, vibe coding is hard to bring into the core. So I think something more formal like OP is needed to reap major benefits from AI.
petersumskas|2 days ago
Or you end up with five different mediocre solutions where the best parts are randomly distributed amongst all five.
noosphr|2 days ago
Now the bottom 98% can be given to a robot with a clear success signal other than 'it looks about right'.
hdhdhsjsbdh|2 days ago
What you’ve described is an incredibly expensive and inefficient genetic algorithm with human review as the fitness function. It’s not the flex you might think it is.
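For what it's worth, the analogy can be made literal. Here is a toy sketch of such a genetic algorithm, with all names illustrative; every `fitness()` call stands in for one (expensive) human review:

```python
import random

def genetic_search(fitness, pop_size=6, generations=10, mutate_rate=0.1):
    """Toy genetic algorithm over 16-bit strings. Each fitness()
    evaluation inside the loop stands in for one human review."""
    length = 16
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    best, best_score = None, float("-inf")
    reviews = 0
    for _ in range(generations):
        scored = []
        for ind in pop:
            reviews += 1                      # one "review" per candidate
            scored.append((fitness(ind), ind))
        scored.sort(reverse=True)
        if scored[0][0] > best_score:
            best_score, best = scored[0]
        parents = [ind for _, ind in scored[: pop_size // 2]]  # keep top half
        pop = []
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)  # crossover of two parents
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]
            # mutation: flip each bit with probability mutate_rate
            child = [bit ^ (random.random() < mutate_rate) for bit in child]
            pop.append(child)
    return best, reviews
```

Even this toy version burns `pop_size * generations` reviews to maybe find one good candidate, which is the inefficiency being pointed at when the fitness function is a human.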
zppln|2 days ago
Eh, of course you can. You can specify anything as long as you know what you want it to do. This is like systems engineering 101 and people do it successfully all the time.
LunicLynx|2 days ago
Also, if you want to gain something by being less specific, e.g. by not writing code, but then have to be very specific in writing a spec, you've just traded a precise system for an imprecise one.
tikhonj|2 days ago
Ideally, and at least somewhat in practice, a specification language is as much a tool for design as it is for correctness. Writing the specification lets you explore the design space of your problem quickly, with feedback from the specification language itself, even before you implement anything. A high-level spec lets you pin down which properties of the system actually matter, automatically finds inconsistencies, and forces you to resolve them explicitly. (This is especially important when using AI, because an AI model will silently resolve inconsistencies in ways that don't always make sense but are also easy to miss!)
Then, when you do start implementing the system and inevitably find issues you missed, the specification language gives you a clear place to update your design to match your understanding. You get a concrete artifact that captures your understanding of the problem and the solution, and you can use that to keep the overall complexity of the system from getting beyond practical human comprehension.
A key insight is that formal specification absolutely does not have to be a totally up-front tool. If anything, it's a tool that makes iterating on the design of the system easier.
Traditionally, formal specifications have been hard to use as design tools, partly because of incidental complexity in the spec systems themselves, but mostly because of the overhead needed not only to implement the spec but also to maintain a connection between the spec and the implementation. The tools that have been practical outside of specific niches are the ones that solve this connection problem. Type systems are a lightweight form of formal verification, and the reason they took off more than other approaches is that typechecking automatically maintains the connection between the types and the rest of the code.
LLMs help smooth out the learning curve for using specification languages, and make it much easier to generate and check that implementations match the spec. There are still a lot of rough edges to work out but, to me, this absolutely seems to be the most promising direction for AI-supported system design and development in the future.
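The type-systems point above can be made concrete with a small sketch in the "parse, don't validate" style; `NonEmpty`, `parse_non_empty`, and `head` are illustrative names, not any particular library:

```python
from typing import NewType

# "Parse, don't validate": NonEmpty values can only be produced by
# parse_non_empty, so any function accepting NonEmpty carries the spec
# "this list has at least one element". The invariant is checked once,
# and the type checker maintains the spec/code connection at every
# call site afterward.
NonEmpty = NewType("NonEmpty", list)

def parse_non_empty(xs: list) -> NonEmpty:
    if not xs:
        raise ValueError("spec violated: list must be non-empty")
    return NonEmpty(xs)

def head(xs: NonEmpty):
    return xs[0]  # safe: the type says there is at least one element
```

A static checker like mypy would flag `head(some_plain_list)` at the call site, which is exactly the "automatically maintained connection" between spec and code described above.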
_pdp_|2 days ago
I'll just leave this here:
https://en.wikipedia.org/wiki/P_versus_NP_problem
robot-wrangler|2 days ago