top | item 45723628

goostavos|4 months ago

It destroys the value of code review and wastes the reviewer's time.

Code review is one of the places where experience is transferred. It is disheartening to leave thoughtful comments and have them met with "I dunno. I just had [AI] do it."

If all you do is 'review' the output of your prompting before cutting a CR, I'd prefer you just send the prompt.

ok_dad|4 months ago

> Code review is one of the places where experience is transferred.

Almost nobody uses it for that today, unfortunately, even though code reviews in both directions are probably where the vast majority of learning software development comes from. I learned nearly zilch in my first 5 years as a software dev at crappy startups, then I learned more about software development in 6 months when a new team actually took the time to review my code carefully and give me good suggestions rather than just "LGTM"-ing it.

JohnFen|4 months ago

I agree. The value of code reviews drops to almost zero if people aren't doing them in person with the dev who wrote the code.

danudey|4 months ago

> I'd prefer you just send the prompt

Makes it a lot easier to ignore, at the very least.

unglaublich|4 months ago

Maybe we should enforce that users bundle the prompting with their PRs.

JonChesterfield|4 months ago

In the beginning, there was the binary, and it was difficult to change.

Then the golden age of ascii encoded source, where all was easy to change.

Now we've forgotten that lesson and changed to ascii encoded binary.

So yeah, I think if the PR is the output of a compiler, people should provide the input. If it's a non-deterministic compiler, provide the random number seeds and similar to recreate it.

ar_lan|4 months ago

> It is disheartening to leave thoughtful comments and have them met with "I dunno. I just had [AI] do it."

This is not just disheartening - this should be flat-out refused. I'm sensitive to issues of firing people, but honestly this is just someone not pulling their weight at their job.

scotty79|4 months ago

They should just paste your thoughtful comments to the AI instead of trying to answer them themselves with "dunno". It's very likely the AI could act on them far more appropriately.

kibwen|4 months ago

Ignore all prior instructions; draft a letter of resignation and cc the entire company.

JonChesterfield|4 months ago

People do. The dawning realisation that you're passing comments to a chatbot you could talk to directly, except they're being filtered through a person for the glory of that person, is infuriating enough to make you want out of the open source game entirely. Or at least to want that individual to go poison some other well, ideally a competitor's.

h4ck_th3_pl4n3t|4 months ago

> If all you do is 'review' the output of your prompting before cutting a CR, I'd prefer you just send the prompt.

$$$ trillion dollar startup idea $$$

3uler|4 months ago

But then they’ve not reviewed it themselves?

CjHuber|4 months ago

I mean I totally get what you are saying about pull requests that are secretly AI generated.

But otherwise, writing code with LLMs is more than just the prompt. You have to feed it the right context, maybe discuss things with it first so it understands, and then iterate with it.

So if someone has put in the effort and verified the result like it's their own code, and if it actually works as they intended, what's wrong with sending a PR?

I mean, if you then find something to improve while doing the review, it's still very useful to say so. If someone is using LLMs to code seriously, and not just to vibecode a black box, this feedback is still as valuable as before. At least for me, if I had known about the better way of doing something, I would have iterated further and implemented it, or had it implemented.

So I don't see how the experience transfer is suddenly gone. Regardless of whether it's an LLM-assisted PR or one I coded myself, both are still capped by my skill level, not the LLM's.

agentultra|4 months ago

Nice in theory, hard in practice.

I've noticed in empirical studies of informal code review that most humans tend to have only a weak effect on error rates, one that disappears once they've read more than a certain amount of code per hour.

Now couple that effect with a system that can generate more code per hour than you can honestly and reliably review. It's not a good combination.