top | item 47130288

SOLAR_FIELDS|6 days ago

This is also where I think we end up. If the behavior of the system is specified well enough, then the code itself is cheap and throwaway. Why have a static system that is brittle to external changes when you can just reconstruct the system on the fly?

Might be quite a while before you can do this with large systems, but we already see it at smaller contextual scales, such as in Claude Code itself.


candiddevmike|6 days ago

The specification for most systems _is the code_. English cannot describe business rules as succinctly as code, and most business rules end up being implied from a spec rather than directly specified, at least in my experience.

The thought of converting an app back into a spec document or list of feature requests seems crazy to me.

SOLAR_FIELDS|6 days ago

Why would it be? If you can describe an approximation of a system and regenerate it to be, let’s say, 98% accurate in 1% of the time it would take to build by hand (and that’s being generous; it’s probably more like 0.1% these days, and that decimal is only moving left), isn’t there a giant set of use cases where the approximation of the system is totally fine? People will always bring up “but what about planes and cars and medicine and critical life-or-death systems”. Yeah sure, but the vast majority of the systems an end user interacts with every day do not carry that level of risk.

kamaal|5 days ago

>>If the behavior of the system is specified well enough, then the code itself is cheap and throwaway. Why have a static system that is brittle to external changes when you can just reconstruct the system on the fly?

You mean that, given the unit and functional test cases, the system should generate the code for you? You might want to look at Prolog in that case.
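To make the "tests as spec" idea concrete, here is a minimal sketch (my own toy illustration, not anything kamaal or Prolog actually provides): treat a list of input/output pairs as the specification and search a candidate space for a program consistent with all of them. The names `TESTS`, `CANDIDATES`, and `synthesize` are invented for this example, and the candidate space is hand-enumerated rather than generated.

```python
# Toy program synthesis: the test cases ARE the spec; the "system"
# just searches for code that satisfies them.

TESTS = [(0, 0), (1, 2), (2, 4), (5, 10)]  # (input, expected output) pairs

# A tiny, hand-enumerated candidate space of programs.
CANDIDATES = [
    ("x + 1", lambda x: x + 1),
    ("x * 2", lambda x: x * 2),
    ("x ** 2", lambda x: x ** 2),
]

def synthesize(tests, candidates):
    """Return the first (name, fn) candidate consistent with every test case."""
    for name, fn in candidates:
        if all(fn(i) == o for i, o in tests):
            return name, fn
    return None

name, fn = synthesize(TESTS, CANDIDATES)
print(name)  # prints "x * 2" -- the only candidate matching all four tests
```

Real logic-programming systems like Prolog invert this: you state the relations declaratively and the resolution engine finds satisfying answers, rather than enumerating lambdas by hand.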

>>Might be quite awhile before you can do this with large systems but we already see this on smaller contextual scales such as Claude Code itself

We have been able to do something like this reliably for like 50 years now.

Vegenoid|6 days ago

> If the behavior of the system is specified well enough

Then it becomes code: a precise symbolic representation of a process that can be unambiguously interpreted by a computer. If there is ambiguity, the spec will be unsuitable for many systems.

SOLAR_FIELDS|6 days ago

The word “many” is carrying a lot of weight here. Given the probabilistic nature of AI, I suspect that systems which are 98% correct will be just fine for all but the “this plane will crash” or “this person will get cancer” use cases. If the recreation of the system fails in that 2% by slightly annoying some end user, who gives a shit? If the stakes are low, and indeed they are for a large majority of software use cases, probabilistic approximation of everyone’s open source will do just fine.

If you’re worried about achieving the 98%, worry no more: due to the probabilistic nature it will eventually converge on nines. Just keep sending the system through the probabilistic machine until it reaches your desired number of nines.