santadays | 1 month ago
I think when someone designs a software system, this is the root process: breaking a problem into parts that can be manipulated. Humans do this well, and some humans do it surprisingly well. I suspect there is some sort of neurotransmitter reward when parsimony meets function.
Once we can manipulate those parts, we tend to reframe the problem as the definition of those parts; the problem ceases to exist, and what is left is only the solution.
With coding agents we end up in a weird place: either we give them the problem, or we give them the solution. Giving them the solution means we have to supply more and more detail until they arrive at what we want. Giving them the problem means we never get the satisfaction of watching the problem dissolve into the solution.
At some level we have to understand what we want. If we don't, we are completely lost.
When the problem changes, we need to understand it, orient ourselves to it, and figure out which parts still apply, which need to change, and what needs to be added. If we had no part in building the solution, we are that much further behind in understanding it.
I think this, at an emotional level, is what developers are responding to.
Assumptions baked into the article are:
You can keep adding features and Claude will just figure it out. Sure, but for whom, and will they understand it?
Performance won't demand you prioritize feature A over feature B.
Security (that you don't understand) will be implemented ahead of feature C, because Claude knows better.
Claude will keep getting more intelligent.
The only assumption I think is right is that Claude will keep getting better. All the other assumptions require that you know WTF you are doing (which we do, but for how long will we know what we are doing?).