emeraldd | 2 months ago
In most of the repos I work with, it tends to make a large number of false-positive or inappropriate suggestions that are just plain wrong for the code base in question. Some might be fine in other settings, but here they're generally just wrong. Only about 1 in every 10~20 comments is actually useful or novel, something that hasn't been caught elsewhere. The net effect is that the AI reviewer we're effectively forced to use is just noise that gets ignored because it's wrong so often.
fusslo | 2 months ago
He'd make giant changes: 100+ files, PRs with 1000+ word descriptions. Impossible to review. Eventually he just modified the permissions to require a single approval, approved his own changes, and merged. This is still going on, but he's isolated to repos he made himself.
He'd copy/paste the AI's output onto other people's reviews. Often it was false positives or open-ended questions. So he automated his side, but doubled or tripled the work of the person requesting the review. Not to mention the AI's comments were 100-300 words each, with formatting and emojis.
The contractors refused to address any comments he made. Some felt it was massively disrespectful: they put tons of time and effort into their changes, and he couldn't even be bothered to read them himself.
It got to the CTO, and AI reviews have been banned.
But it HAS helped the one Jr guy on the team prepare for reviews and understand review comments better. It's also helped us write better comments, since I and some others can be really bad at explaining things.