neilwilson|7 days ago
What you maintain is the specification harness; you change that to change the code.
We have to start thinking at a higher level, and see code generation the same way we currently see compilation.
simonw|7 days ago
I don't just have LLMs spit out code. I have them spit out code and then I try that code out myself - sometimes via reviewing it and automated tests, sometimes just by using it and confirming it does the right thing.
That upgrades the code to a status of generated and verified. That's a lot more valuable than code that's just generated but hasn't been verified.
If I throw it all away every time I want to make a change I'm also discarding that valuable verification work. I'd rather keep code that I know works!
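A minimal sketch of that generate-then-verify step. The `slugify` function here stands in for hypothetical LLM output (it is not from the thread); the point is that the small test harness, not the generation step, is what promotes the code from "generated" to "generated and verified":

```python
import re

# Stand-in for a hypothetical LLM-generated function under review.
def slugify(title: str) -> str:
    """Turn an article title into a URL slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Verification harness: these checks are the work worth keeping.
# Throwing the code away also throws away what they established.
def verify() -> bool:
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"
    assert slugify("Already-Slugged") == "already-slugged"
    return True
```

Once `verify()` passes, regenerating `slugify` from scratch would mean re-earning that confidence from zero.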
neilwilson|6 days ago
Is it possible to write Cucumber specs (for example) of sufficient clarity that an LLM agent team can generate code in any number of languages that delivers the same outcome, and do so repeatedly?
Then we'd be at the point where we know the specs work. And is getting the specs to that point less effort than just writing the code directly?
We live in exciting times.
manuelabeledo|7 days ago
Plenty of rewrites out there prove that point.
jimbokun|6 days ago
https://www.joelonsoftware.com/2000/04/06/things-you-should-...
neilwilson|6 days ago
We're not writing code in a computer language any more; we're writing specs in structured English of sufficient clarity that code can be generated from them.
The debugging would happen on the specs.
jimbokun|6 days ago
Far more expensive than compilation, and non-deterministic, so you're not sure you'll get the same software if you give the AI the same spec.
neilwilson|6 days ago
Tokens are cheaper than getting a person to modify the code, and tokens will likely get cheaper still, in the same way compilation did (it used to be batched once a day, overnight, in the mainframe era).
Non-determinism is how the whole LLM system works. All we're doing with agents is adding another layer of reinforcement learning that gets it to converge on the correct output.
That's also how routing protocols like OSPF work. There's no guarantee when those multicast packets will turn up, yet the routes converge and networks stay stable.
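The convergence claim can be sketched as a retry loop: a non-deterministic generator paired with a deterministic verifier, iterating until the output meets the spec. The `flaky_generator` below is a toy stand-in for a model, and the doubling spec is an invented example, not anything from the thread:

```python
import random

def flaky_generator(rng):
    """Stand-in for a non-deterministic code generator:
    sometimes emits a correct implementation, sometimes a buggy one."""
    if rng.random() < 0.5:
        return lambda x: x * 2   # meets the spec
    return lambda x: x + 2       # off-spec variant

def verifier(candidate) -> bool:
    """Deterministic check: the fixed point the loop converges toward."""
    return all(candidate(n) == n * 2 for n in (0, 1, 7))

def converge(max_attempts=100, seed=42):
    """Regenerate until the verifier accepts, like routes settling."""
    rng = random.Random(seed)
    for attempt in range(1, max_attempts + 1):
        candidate = flaky_generator(rng)
        if verifier(candidate):
            return candidate, attempt
    raise RuntimeError("did not converge")
```

Individual generations vary, but the loop's accepted output is stable, which is the sense in which non-determinism inside can still yield determinism at the boundary.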
I think this fear of non-determinism needs to pass, but it will only pass once there is evidence of success.