There’s a third axis here besides “process vs result”: feedback-loop latency. Hand-coding keeps the loop tight (think → type → run → learn), which is where a lot of the craft/joy lives. LLMs can either compress that loop (generate boilerplate/tests, unblock yak-shaves) or stretch it into “read 200 LOC of plausible code, then debug the one wrong assumption,” which feels like doing code review for an intern who doesn’t learn. The sweet spot for me has been using them to increase iteration speed while keeping myself on the hook for the invariants (tests, types, small diffs); otherwise you’re just trading typing for auditing.
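Concretely, “on the hook for invariants” means writing the checks myself before accepting generated code. A minimal sketch of the shape of that workflow (`slugify` is a hypothetical stand-in for any LLM-produced helper, not anything from a specific library):

```python
import re

def slugify(title: str) -> str:
    # Pretend this body came from the model -- I didn't write it,
    # so I don't trust it beyond what my own checks pin down.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Hand-written invariants: small, fast, and mine. The generated code
# has to pass these before it lands, which keeps the loop tight.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  --  ") == ""  # no stray separators left over
assert all(c.islower() or c.isdigit() or c == "-"
           for c in slugify("A B C 123"))
```

The asymmetry is the point: the assertions take seconds to write and run, while auditing the generated body line-by-line would recreate the slow loop.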