item 47136737


neilwilson | 7 days ago

Once writing code is cheap you don't maintain code. You regenerate it from scratch.

What you maintain is the specification harness, and change that to change the code.

We have to start thinking at a higher level, and see code generation in the same way we currently see compilation.
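To make the "specification harness" idea concrete, here is a minimal sketch in Python. The spec cases, the price-parsing task, and the function names are hypothetical illustrations; the point is that the harness checks outcomes only, so any regenerated implementation can be verified the same way.

```python
# A toy "specification harness": outcome-level checks that treat the
# implementation as a black box. Maintain these cases, regenerate the code.

SPEC_CASES = [
    # (input, expected outcome) pairs describe behaviour, not implementation.
    ("3.50", 350),
    ("0.99", 99),
    ("10.00", 1000),
]

def parse_price_to_cents(text: str) -> int:
    """One possible generated implementation; any rewrite must pass the same cases."""
    dollars, cents = text.split(".")
    return int(dollars) * 100 + int(cents)

def run_harness(implementation) -> bool:
    """Run every spec case against whichever implementation we were handed."""
    return all(implementation(given) == expected for given, expected in SPEC_CASES)

print(run_harness(parse_price_to_cents))  # True when this generation meets the spec
```

Changing the harness, then regenerating, is the proposed analogue of editing source and recompiling.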


simonw | 7 days ago

I'm not sold on that idea yet.

I don't just have LLMs spit out code. I have them spit out code and then I try that code out myself - sometimes via reviewing it and automated tests, sometimes just by using it and confirming it does the right thing.

That upgrades the code to a status of generated and verified. That's a lot more valuable than code that's just generated but hasn't been verified.

If I throw it all away every time I want to make a change I'm also discarding that valuable verification work. I'd rather keep code that I know works!

neilwilson | 6 days ago

I suspect that is where we will be going next - automated verification. At least to the point where we can pass it over the wall for user acceptance testing.

Is it possible to write Cucumber specs (for example) clear enough that an LLM agent team can generate code in any number of programming languages that delivers the same outcome, and do that repeatedly?

Then we're at the point where we know the specs work. And is getting there less effort than just writing the code directly?

We live in exciting times.
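A toy sketch of that idea, assuming a hypothetical Gherkin-style scenario and a hand-rolled step matcher (real Cucumber tooling such as cucumber-jvm or behave binds steps properly; this just shows that an outcome-level spec can be checked mechanically against any implementation):

```python
# A Gherkin-style scenario precise enough to be checked mechanically.
# The feature text and the crude step parsing are illustrative only.

FEATURE = """
Scenario: totalling a cart
  Given the cart contains 199 250 51
  When the total is calculated
  Then the total is 500
"""

def run_scenario(feature: str, implementation) -> bool:
    """Extract the Given/Then facts and check the implementation's outcome."""
    cart, expected = [], None
    for line in feature.splitlines():
        line = line.strip()
        if line.startswith("Given the cart contains"):
            cart = [int(n) for n in line.split()[4:]]
        elif line.startswith("Then the total is"):
            expected = int(line.split()[-1])
    return implementation(cart) == expected

# Any implementation with the same external behaviour passes, whatever
# language it was generated in (here, Python's built-in sum stands in).
print(run_scenario(FEATURE, sum))
```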

manuelabeledo | 7 days ago

Unless the specification is also free of bugs and side effects, there is no guarantee that a rewrite would have fewer bugs.

Plenty of rewrites out there prove that point.

neilwilson | 6 days ago

Yes, but that's the point.

We're not writing code in a computer language any more; we're writing specs in structured English clear enough for code to be generated from them.

The debugging would be on the specs.

jimbokun | 6 days ago

Tokens aren’t free.

Far more expensive than compilation, and non-deterministic, so you're not sure you'll get the same software if you give the AI the same spec.

neilwilson | 6 days ago

You'll get the same software in outcome terms. Which is what we want.

Tokens are cheaper than getting an individual to modify the code, and likely the tokens will get cheaper - in the same way compilation has (which used to be batched once a day overnight in the mainframe era).

Non-determinism is how the whole LLM system works. All we're doing with agents is adding another layer of reinforcement learning that gets it to converge on the correct output.
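That converge-on-the-correct-output loop can be sketched as: regenerate until the spec harness passes. Everything here is a stand-in (the `generate` function plays the role of an unreliable LLM, the doubling spec is arbitrary); the point is that a deterministic check tames a non-deterministic generator.

```python
# Sketch: a non-deterministic generator converging under a deterministic
# outcome check. `generate` is a hypothetical stand-in for an LLM call.

import random

def spec_passes(candidate) -> bool:
    """Outcome-level check: does the candidate double its input?"""
    return all(candidate(n) == 2 * n for n in range(10))

def generate(rng):
    """Unreliable generator: usually wrong, sometimes right."""
    k = rng.choice([1, 2, 3])
    return lambda n: k * n

def converge(rng, max_attempts=50):
    """Regenerate until the spec is satisfied; report how many tries it took."""
    for attempt in range(1, max_attempts + 1):
        candidate = generate(rng)
        if spec_passes(candidate):
            return attempt  # verified: outcome matches the spec
    raise RuntimeError("spec never satisfied")

attempts = converge(random.Random(0))
print(f"converged after {attempts} attempt(s)")
```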

That's also how routing protocols like OSPF work. There's no guarantee when those multicast packets will turn up, yet the routes converge and networks stay stable.

I think this fear of non-determinism needs to pass, but it will only pass if evidence of success arises.