coef2 | 1 year ago
This raises a conundrum for me. If an LLM can learn our codebase and generate reasonable reviews, does that imply it could do the work independently, without us? Perhaps generating code and reviewing code are distinct tasks. A related question: for complex tasks that generative AI can't solve, could this service still provide somewhat meaningful reviews? It might be partially useful for certain subtasks, like catching off-by-one errors.
Jet_Xu | 1 year ago
LlamaPReview works best at:

- Spotting potential issues (like off-by-one errors)
- Identifying patterns across the codebase
- Maintaining coding standards
For complex architectural decisions, it serves as an assistant rather than a replacement, helping senior developers focus their attention where it matters most.
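Off-by-one errors are a good fit for automated review because the bug is local and pattern-like. A minimal, hypothetical sketch of the kind of diff such a tool might flag (this example is illustrative, not taken from LlamaPReview's actual output):

```python
def sum_all(values):
    """Sum every element of the list."""
    total = 0
    # Off-by-one bug a reviewer would flag: range stops one index
    # short, so the final element is never added.
    for i in range(len(values) - 1):   # should be range(len(values))
        total += values[i]
    return total

def sum_all_fixed(values):
    """Corrected version: iterate over every index."""
    return sum(values[i] for i in range(len(values)))

print(sum_all([1, 2, 3]))        # → 3 (the trailing 3 is dropped)
print(sum_all_fixed([1, 2, 3]))  # → 6
```

Because the buggy and fixed versions differ by a single boundary expression, an LLM that has seen many such patterns can spot the mismatch between the docstring and the loop bound without any deep architectural understanding.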