Reading works when you generate 50 lines a day. When AI generates 5,000 lines of refactoring in 30 seconds, linear reading becomes a bottleneck. Human attention doesn't scale like GPUs. Trying to "just read" machine-generated code is a sure path to burnout and missed vulnerabilities. We need change-summarization tools, not just syntax highlighting.
Whether you or someone/something else wrote it is irrelevant
You’re expected to have self-reviewed and to understand the changes made before requesting review. You must be able to answer questions reviewers have about it. Someone must read the code. If not, why require a human review at all?
Not meeting this expectation = user ban in both kernel and chromium
This is exactly the gap I'm worried about. Human review still matters, but linear reading breaks down once the diff is mostly machine-generated noise.
Summarizing what actually changed before reading feels like the only way to keep reviews sustainable.
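As a rough illustration of what "summarize before reading" could mean in practice, here's a minimal sketch (the `summarize_diff` helper is hypothetical, not any real tool) that walks a unified diff and reports per-file added/removed line counts, so a reviewer can triage the biggest files first:

```python
import re
from collections import defaultdict

def summarize_diff(diff_text):
    """Group a unified diff by file and count added/removed lines."""
    stats = defaultdict(lambda: {"added": 0, "removed": 0})
    current = None
    for line in diff_text.splitlines():
        m = re.match(r"^\+\+\+ b/(.*)", line)
        if m:
            # New file section in the diff.
            current = m.group(1)
        elif current is None:
            continue
        elif line.startswith("+") and not line.startswith("+++"):
            stats[current]["added"] += 1
        elif line.startswith("-") and not line.startswith("---"):
            stats[current]["removed"] += 1
    # Largest changes first, so attention goes where the churn is.
    return sorted(stats.items(),
                  key=lambda kv: -(kv[1]["added"] + kv[1]["removed"]))
```

Even a crude ranking like this changes the review from "read 5,000 lines top to bottom" into "start with the two files that account for most of the churn."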
veunes|1 month ago
uhfraid|1 month ago
nuky|1 month ago