(no title)
ohans | 2 months ago
As for PR reviews, assuming you've got linting and static analysis out of the way, you'd need a sufficiently specific prompt to truly catch problems and surface reviews that match your standards rather than generic AI comments.
My company uses some automatic AI PR review bots, and they annoy me more than they help. Lots of useless comments.
visarga | 2 months ago
My experience is that you can trust any code that is well tested, human- or AI-generated, and you cannot trust any code that is not well tested (what I call "vibe tested"). But some constraints need to be expressed in natural language, and for those you need an LLM to review the PRs. This combination of code tests and LLM review should be able to ensure reliable AI coding. If it does not, iterate on your PR rules and on your tests.
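The gate described above (tests must pass AND the LLM review must raise no blocking findings) could be sketched as follows. Both `run_tests` and `run_llm_review` are hypothetical stand-ins of my own, not anything from the thread; a real setup would call the project's test runner and an actual model.

```python
# Sketch of a merge gate: tests AND LLM review must both come back clean.
# The two checks below are toy stubs so the sketch runs on its own.

def run_tests(diff: str) -> bool:
    """Stand-in for the project's real test runner (e.g. pytest)."""
    # Toy heuristic purely for this sketch.
    return "untested" not in diff

def run_llm_review(diff: str, rules: list[str]) -> list[str]:
    """Hypothetical LLM review hook: returns blocking findings, if any.

    `rules` holds the natural-language PR rules; here we fake a "review"
    by checking whether a rule keyword appears in the diff.
    """
    return [rule for rule in rules if rule.lower() in diff.lower()]

def can_merge(diff: str, rules: list[str]) -> bool:
    """Merge only when tests pass and the review raises no findings."""
    return run_tests(diff) and not run_llm_review(diff, rules)
```

The point of the structure, as the comment argues, is that neither check alone is sufficient: tests cover executable behavior, the review covers the rules you can only state in prose.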
hrpnk | 2 months ago
> My company uses some automatic AI PR review bots, and they annoy me more than they help. Lots of useless comments
One way to make them more useful is to ask the bot to list only the top N problems found in the change set.
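One way to apply that suggestion is to bake the top-N constraint into the review prompt itself, so the bot can't pad its reply with nitpicks. The prompt wording below is my own illustration, not a tested recipe:

```python
# Build a review prompt that caps the bot at the top-N problems.

def build_review_prompt(diff: str, n: int = 5) -> str:
    """Wrap a change set in a prompt asking for only the top-n problems."""
    return (
        f"Review the following change set and list ONLY the top {n} "
        f"problems, ordered by severity. If there are fewer than {n} "
        "real problems, list fewer. Do not include style nitpicks.\n\n"
        f"{diff}"
    )
```

The resulting string would then be sent to whatever model or review bot you use.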
MYEUHD | 2 months ago
You can also append ".patch" to the PR URL and get a more useful, plain-text output.
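GitHub serves a plain-text patch for a pull request when ".patch" is appended to its URL, and that raw diff is often easier to feed to a model than the HTML page. A tiny helper (the function name and example URL are mine, not from the thread):

```python
# Turn a GitHub PR URL into its raw .patch URL.

def to_patch_url(pr_url: str) -> str:
    """Append ".patch" to a GitHub PR URL, tolerating a trailing slash."""
    return pr_url.rstrip("/") + ".patch"
```

The returned URL can then be fetched and piped into the review prompt instead of the rendered page.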