taftster | 6 days ago

You overestimate my ability to keep mental context for 6 months.

Besides, in most of the PRs I have seen reviewed, the quality hasn't really degraded or improved since LLMs started contributing. I think we have been rubber-stamping PRs for quite some time. Not sure that AI is doing any worse.

ljm|6 days ago

Depends on what the context is, at least for me.

The cognitive load of a code review tends to be higher when it's submitted by someone who hasn't been onboarded well enough, and it doesn't matter whether they used an AI or not. A lot of the mistakes are trivial, or they don't align with the status quo, so the code review turns into a way of explaining how things should be done.

This is in contrast to reviewing the code of someone who has built up their own context (most likely on the back of those previous reviews, by learning). The feedback is much more constructive and gets into other details, because you can trust the author to understand what you're getting at and they're not just gonna copy/paste your reply into a prompt and be like "make this make sense."

Reviewing that kind of code just offloads the burden to me, because I'm the one with the knowledge in my head. I know at least one or two people who will end up being forever-juniors because of this, and they can't be talked out of it because their colleague is the LLM now.

snowhale|6 days ago

[deleted]

taftster|6 days ago

Well, it should be the approver of the PR, not the author (AI slop or human slop), who is accountable. I don't ever want an AI to auto-approve a PR (or maybe only for very small things, like dependency-bot kinds of tasks).

Not saying that's how it's done in practice, in terms of accountability. The skin-in-the-game incentive is hopefully still present, even with AI. But you're right, there's risk.