top | item 47206176

sarkarsh | 22 hours ago

The debugging gap in the Shen-Tamkin study is the part that worries me. AI tools are great at generating code from a clear spec. But the skill that atrophies is recognizing when code is subtly wrong — and that's the exact skill you need most when reviewing AI output.

What works for me: write the spec in a markdown file before handing it to the agent. Predict what it'll produce before reading the output. When it surprises you, figure out why before accepting. Keeps you engaged enough to actually build mental models.
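The "predict, then diff" step is easy to mechanize. Here's a minimal sketch (the function name and workflow are mine, not anything from the study): write your predicted code to one string, the agent's output to another, and surface only the lines that surprised you.

```python
import difflib

def surprise_report(predicted: str, actual: str) -> list[str]:
    """Return lines in the agent's output that were NOT in your prediction."""
    diff = difflib.unified_diff(
        predicted.splitlines(), actual.splitlines(),
        fromfile="predicted", tofile="actual", lineterm="")
    # Keep only added lines; skip the "+++ actual" file header.
    return [line for line in diff
            if line.startswith("+") and not line.startswith("+++")]

predicted = "def add(a, b):\n    return a + b\n"
actual = "def add(a, b):\n    return int(a) + int(b)\n"
for line in surprise_report(predicted, actual):
    print(line)  # each surprise is a prompt to ask "why did it do that?"
```

The point isn't the tooling; it's that every line this prints is a forced comprehension checkpoint before you accept the change.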

The management pressure angle in this thread is real. If your org measures velocity in lines shipped, AI tools will optimize for that metric and comprehension dies quietly.
