Strange article. The problem isn't that no single person knows how everything works; it's that AI coding could mean no one knows how a given system works at all.
No, I think the problem is that AI coding removes intentionality. That introduces artifacts, connections, and dependencies that wouldn't be there if someone had designed the system with intent. And that eventually makes it harder to reason about.
There is a qualitative difference between "it happens to work" and "it was made for a purpose."
Business logic will tend to settle for "it happens to work" as good enough.
The core problem is irresponsibility. Things that happen to work may stop working, or be revealed to have terrible flaws. Who is responsible? What is their duty of care?
Excellent point. The intention of business is profit; how it gets there is considered incidental. Any product will do, no matter what, as long as it sells. Compounding effects in computing, the internet, and miniaturisation have enabled large profit margins that compound these effects further. They think of this as a machine that can keep printing money and subsuming more and more, as software and computers become pervasive.
Including the AI, which generated it once and forgot.
This is going to be a big problem. How do people using Claude-like code generation systems do this? What artifacts other than the generated code are left behind for reuse when modifications are needed? Comments in the code? The entire history of the inputs and outputs to the LLM? Is there any record of the design?
I have experimented with telling Claude Code to keep a historical record of the work it is performing. It did work (though I didn't assess the accuracy of the record), but I decided it was a waste of tokens and now direct it to analyze the history in ~/.claude when necessary. The real problem I was solving was making sure it didn't leave work unfinished between autocompacts (e.g. crucial parts of the work weren't performed and only TODO comments were left behind). But I ended up solving that with better instructions about how to break the plan down into bite-sized units that are more friendly to the todo list tool.
I have prompting in AGENTS.md that instructs the agent to update the relevant parts of the project documentation for a given change. The project has a spec, and as features get added or reworked the spec gets updated. If you commit after each session then the git history of the spec captures how the design evolves. I do read the spec, and the errors I've seen so far are pretty minor.
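A minimal sketch of the git-history part of that workflow (the repo, paths, and commit messages here are all made up for illustration): if the agent updates a spec file and you commit after each session, `git log` over that one file recovers the design history.

```python
import pathlib
import subprocess
import tempfile

def run(*args, cwd):
    # Small helper: run a git command, fail loudly on error
    subprocess.run(args, cwd=cwd, check=True, capture_output=True)

# Throwaway repo standing in for a real project
repo = pathlib.Path(tempfile.mkdtemp())
run("git", "init", "-q", cwd=repo)
run("git", "config", "user.email", "a@example.com", cwd=repo)
run("git", "config", "user.name", "a", cwd=repo)

# Hypothetical spec path; the agent rewrites it, you commit per session
spec = repo / "docs" / "spec.md"
spec.parent.mkdir()
for i, change in enumerate(["initial feature set", "reworked auth flow"], 1):
    spec.write_text(f"v{i}: {change}\n")
    run("git", "add", "docs/spec.md", cwd=repo)
    run("git", "commit", "-qm", f"spec: v{i} ({change})", cwd=repo)

# The design history is just the log of that one file, newest first
log = subprocess.run(
    ["git", "log", "--oneline", "--", "docs/spec.md"],
    cwd=repo, capture_output=True, text=True, check=True,
).stdout
print(log)
```

The point being that no extra tooling is needed: one commit per session makes the spec's log a readable record of how the design evolved.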
I, for one, save all conversations in the codebase, including both the human prompts and the outputs. But I'm using a modified Codex to do so.
Not sure why it's not the default, as it's useful to have this info.
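Even without a modified tool, a sketch of the idea is simple (the log file name and record shape here are invented, not anything Codex actually does): append each prompt/response pair to a JSONL file kept inside the repository, so the conversation history gets versioned alongside the code it produced.

```python
import datetime
import json
import pathlib

# Hypothetical location for the versioned conversation log
LOG = pathlib.Path("docs/llm-sessions.jsonl")

def record_turn(prompt: str, response: str) -> None:
    """Append one prompt/response pair as a JSON line."""
    LOG.parent.mkdir(parents=True, exist_ok=True)
    entry = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
    }
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_turn(
    "Add input validation to the signup form",
    "Done: added length and format checks ...",
)
print(LOG.read_text().count("\n"))  # one line per recorded turn
```

JSONL plus git gives you both the raw record and a diffable history of it for free.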
I am sure engineers collectively understand how the entire stack works.
With LLM-generated output, nobody understands how anything works, including the very model you just interacted with -- evident in "you are absolutely correct".
Just because there is someone who could understand a given system, that doesn’t mean there is anyone who actually does. I take the point to be that existing software systems are not understood by anyone most of the time.
I do not know about you all, but I need to understand the system before I can change anything, otherwise I would introduce tons of bugs. Heck, without knowing the system I do not even know what I *want* to change.
This happens even today. If a knowledgeable person leaves a company and no knowledge transfer (or, more likely, poor knowledge transfer) takes place, then there will be no one left who understands how certain systems work. The company will have to have a new developer go in, study the code, and deduce how it works. In our new LLM world, the developer could even have an LLM construct an overview for him/her to come up to speed more quickly.
Yes, but each time the "why" gets obscured, perhaps not completely, because there's no finished overview, or because the original reason can no longer be derived from the current state of affairs. It's like the movie Memento: you're trying to piece together a story from fragments that seem incoherent.
We already don't know how everything works; AI is steering us towards a destination where there is more of the everything.
I would also add that it's possible AI will reduce the number of people who are _capable_ of understanding the parts it is responsible for.