theredbeard | 20 days ago
The article's point about AI code being "someone else's code" hits different when you realize neither of you built the context. I've been measuring what actually happens inside AI coding sessions: over 60% of what the model sees is file contents and command output, stuff you never look at. Nobody did the work of understanding it by building or designing it. You're reviewing code that nobody understood while writing it, and the model is doing the same.
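For what it's worth, the measurement itself is trivial to reproduce on your own session logs. A minimal sketch, assuming a transcript of messages with a "role" field where tool results (file reads, command output) are tagged "tool" -- the schema is hypothetical, adapt it to whatever your agent actually emits:

```python
# Estimate what fraction of a coding-agent session's context is tool
# output (file contents, command results) vs. human- or model-written text.
# The message schema ("role"/"content" keys, "tool" role) is an assumption.

def context_share(messages):
    """Return the fraction of total characters contributed by tool output."""
    tool = other = 0
    for msg in messages:
        n = len(msg.get("content", ""))
        if msg.get("role") == "tool":
            tool += n
        else:
            other += n
    grand = tool + other
    return tool / grand if grand else 0.0

# Toy transcript: one short session turn.
session = [
    {"role": "user", "content": "fix the bug in parser.py"},
    {"role": "tool", "content": "def parse(s):\n    return s.split(',')\n..."},
    {"role": "assistant", "content": "The split call drops empty fields..."},
]
print(f"{context_share(session):.0%} of this context is tool output")
```

Even in a toy example the tool-output share dominates quickly; in a real session with a few file reads it swamps everything else.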
This is why evaluation is so hard. You skipped building context to save time, but now you need that context to judge whether the output is any good. The investigation you didn't do upfront is exactly what you need to review the AI's work.