It seems like a decent guide, though like others mentioned, you should download and run the code most of the time, except for trivial changes.
Also, I'd like to push back on this:
> Most developers have a lot of features to add and bugs to write, time spent reviewing other people’s work does not add a lot to your bottom line.
I disagree; it adds a lot to your bottom line. In fact, reading and reviewing code is one of the main tasks of a developer, perhaps more important than writing code.
Reviewing code gives you an opportunity to get familiar with other parts of the code base, understand why a change or fix is done, and collaborate with others to make an overall better improvement to the product.
It gets frustrating when egos get involved, and people take criticism too personally or point out improvements just to demonstrate their knowledge. In the best working environments, code is judged without bias toward whoever produced it, as the inert output of the collective team. I would actually like to see a feature in code review tools to submit merge requests anonymously, just so that the author isn't a prevalent factor during review. :)
I usually dedicate a few hours, maybe even half a day, to reviews. Definitely don't look at it as a chore to put off, but as a crucial part of your responsibilities as a developer.
I went ~25yrs into my coding career without code review and without tests. I wrote games, several AAA games; play testers found bugs, but otherwise there were no code-based tests and no code review, none, zero, zilch. From 1983 to around 2008.
Now I use tests and code review and I like them. I like that reviewers catch my bugs. I like that they suggest better solutions I didn't think of. I like that they tell me about functions I could use that make the code simpler. I like that they help explain the code.
But I can't entirely dismiss that I, and the teams I worked on, shipped a ton of products without any code review, and that my velocity feels much slower than it used to be. It's possible that slowness comes from the projects being far more complex, the teams being much larger, and that code review is actually a perf multiplier overall for these larger, more complex things I'm working on. But waiting for and/or doing reviews certainly "feels" slower than I used to be.
> It seems like a decent guide, though like others mentioned, you should download and run the code most of the time, except for trivial changes.
Actually, that's not true at all. Code review is not a replacement for automated builds and testing; that's what Continuous Integration is for. Let humans concentrate on what they do best (understanding and commenting on code) and don't waste their time and patience on things that can be automated.
Projects often enforce this by running CI on code submitted for review: the change cannot be accepted as long as CI fails. It also reduces build breakages for the other devs working on the code base.
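That gate can be sketched as a script the review tool runs before a change becomes acceptable; `run_lint` and `run_tests` below are hypothetical placeholders for whatever commands the project actually uses (e.g. `make lint`, `npm test`):

```shell
#!/bin/sh
# Minimal sketch of a CI gate: the review tool refuses to let a change be
# accepted until this script exits 0. run_lint and run_tests are invented
# stand-ins for the project's real automated checks.
set -e                               # abort on the first failing step

run_lint()  { echo "lint: ok"; }     # stand-in for the automated style check
run_tests() { echo "tests: ok"; }    # stand-in for the automated test suite

run_lint
run_tests
echo "CI green: the change may now be approved"
```

The point is exactly the division of labor described above: the script handles everything mechanical, so the human reviewer never has to.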
If you read that statement as "when judged on your output by those in control of your salary", does it become clearer which bottom line is in play here?
I disagree; it adds a lot to your bottom line. In fact, reading and reviewing code is one of the main tasks of a developer, perhaps more important than writing code.
It provides the possibility of avoiding future reductions to your business's bottom line. That's why any 'rules' for code review are so context-specific and should be applied as such.
The reality is that most of your comments will be reminding people to follow the style guide or to not organize their code like monkeys with typewriters.
If you're leaving comments about style on PRs then your tooling isn't doing a good enough job. You should have a linter and a formatter configured, everyone should be sharing the same config, and they should always be run in a pre-push hook (or earlier... run them on file saves if possible). Badly formatted code should make the push fail so the dev has to fix the problem before they can open a PR. Force pushes and skipping hooks should be vigorously discouraged. That way badly formatted code shouldn't make it to the repo and your PRs can focus on more important things.
Obviously this is really hard to implement on a project that's old and has thousands of linter errors. If you didn't set it up on a greenfield project right at the start you can't really complain too much though.
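A minimal sketch of such a `.git/hooks/pre-push` hook, assuming nothing about your stack: `check_format` here is an invented placeholder (a trailing-whitespace grep over a `src/` directory) standing in for the real tools, e.g. `npx prettier --check .` or `npx eslint .`:

```shell
#!/bin/sh
# Sketch of a .git/hooks/pre-push hook. check_format is a demo stand-in
# for the project's real formatter/linter commands; a non-zero exit from
# this script makes git abort the push.
check_format() {
    # demo check only: fail if any file under src/ has trailing whitespace
    ! grep -rn ' $' src/ 2>/dev/null
}

if check_format; then
    echo "pre-push: style checks passed"
else
    echo "pre-push: style problems found; fix them before pushing" >&2
    exit 1    # a non-zero exit aborts the push
fi
```

Install it by saving the script as `.git/hooks/pre-push` and marking it executable; teams usually commit a shared copy and symlink or configure `core.hooksPath` so everyone runs the same config.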
I think that often it's not just coding style (yes, a linter should catch indentation errors), but coding conventions/requirements to enforce an idiomatic code structure in the code base. Linters can't catch that. E.g. how to organize files, where to put business logic, how tests are structured, which libraries to use, how to handle non-blocking or concurrent code, errors, code documentation etc.
I also blame the tooling a bit. After using Critique [1] internally at Google, all the other tools/implementations/UIs feel very confusing. And I'm not speaking of the lack of integration with other tools like coverage, linters, spell checks, error-prone-construct analysis, ... but simply of the UI.
Additionally, within Google (or at least in the teams I was in), people really try to chunk changes into small pieces. For instance, adding a feature always entails something like: 1 CL to introduce the flag that will enable/disable the feature, at least 1 CL that actually adds the logic and tests, and 1 that enables the flag in the deployment. Not sure if that's due to the monorepo or more of a culture thing, but that MO is not that common in my personal experience.
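The flag-per-CL pattern described above can be sketched in a few lines; the flag name `NEW_CHECKOUT` and both code paths are invented for the demo:

```shell
#!/bin/sh
# Sketch of shipping a feature across three small changes, gated by a flag.
# NEW_CHECKOUT and the two "flows" are made-up names for illustration.
NEW_CHECKOUT="${NEW_CHECKOUT:-off}"     # CL 1: the flag exists, default off

checkout() {
    if [ "$NEW_CHECKOUT" = "on" ]; then
        echo "new checkout flow"        # CL 2: logic + tests land behind it
    else
        echo "old checkout flow"
    fi
}

checkout    # CL 3 is just the deployment change flipping the default to on
```

Because each CL is tiny and the old path keeps running until the final flip, every review stays small and the rollout can be reverted by toggling the flag rather than reverting code.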
I agree that the tooling is very underrated! It’s strange that programmers spend so much effort choosing their code writing environment (IDE, etc) but just take what they’re given when it comes to code reviewing (mostly GitHub PRs)
If you’re missing Critique, allow me to shamelessly plug CodeApprove: https://codeapprove.com
Just like Critique, it’s a code review tool for power users that really lets you focus on resolving every discussion.
I think I disagree with the emphasis on asking questions. Not because questions shouldn't be asked, but because they should be asked much earlier in the process than the PR stage. Every feature should at a minimum have a brief doc for the requirements and high level design. If you haven't read that and are reviewing the PR for anything other than style, you need to take a step back and go read that doc first.
In my opinion, the levels are:
1. Barely-existent review: The PR author is the SME, I'm only reviewing and LGTMing this because peer review is needed to submit the change. I might notice a typo or something along those lines, and will call out if they're being lazy and skipping tests.
2. Functionality-focused review: Does the code meet the objective and do the tests adequately cover the change?
3. In-depth review: Does the code address the problem in an efficient way that is not likely to cause the team future pain?
It's interesting that the author puts "downloading the code" in the exhaustive section.
Is it really that much work to run git checkout? I routinely checkout code in PRs if I'm not familiar with that part of the codebase. I can navigate the codebase much quicker in my editor than in GitHub. Although GitHub is actually getting there with the semantic analysis of code and finding references and stuff, sometimes I just need to open 5 files in different splits to figure out what is going on.
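For what it's worth, GitHub publishes every PR under a `refs/pull/<N>/head` ref, so checking one out really is two commands. The sketch below fakes such a ref in a throwaway local repo so it's self-contained; the PR number (1), file name, and branch name are all made up:

```shell
#!/bin/sh
# Self-contained demo of checking out a "PR" locally for review.
# origin.git stands in for github.com; refs/pull/1/head simulates an open PR.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare origin.git                  # stand-in for the remote
git clone -q origin.git work 2>/dev/null && cd work
git config user.email reviewer@example.com
git config user.name reviewer
echo v1 > app.txt && git add app.txt && git commit -qm "mainline"
echo v2 > app.txt && git commit -qam "proposed change"
git push -q origin HEAD:refs/pull/1/head       # simulate the open PR
git reset -q --hard HEAD~1                     # back to mainline locally

# The two commands a reviewer actually runs:
git fetch -q origin pull/1/head:pr-1           # fetch the PR into a branch
git switch -q pr-1                             # now browse it in your editor
grep v2 app.txt                                # the PR's change is present
```

Against a real repo the only lines you need are the `git fetch origin pull/<N>/head:<branch>` and `git switch` pair.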
I think another factor here is if the reviewer is working on the same code base and the project has a complex environment to set up (e.g., a library compiled with custom flags), then `git stash` may not be enough to checkout the branch and run tests. (I am unfortunately guilty of creating some projects like that myself...) Docker can help with these cases, but not all projects are docker-oriented.
I personally like running the code myself when reviewing, it's just not always practical.
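One way around the `git stash` problem above is `git worktree`, which gives the branch under review its own directory so your carefully configured checkout is never disturbed. A self-contained sketch in a scratch repo (repo layout and the branch name `feature-x` are invented):

```shell
#!/bin/sh
# Demo of reviewing a branch in a separate worktree instead of stashing.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email reviewer@example.com
git config user.name reviewer
echo stable > main.txt && git add main.txt && git commit -qm "mainline"
git branch feature-x                  # the branch someone asked us to review

# One command, no stashing: the branch appears in its own directory.
git worktree add -q ../review feature-x
ls ../review                          # browse/build/test here in isolation
git worktree remove ../review         # finished reviewing; tidy up
```

It doesn't solve the custom-flags build environment itself, but at least the review checkout can't clobber your working state.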
On bad projects (which happens often now that node_modules directories are often gigabytes big), IntelliJ IDEA takes ages to index, which it must finish before you can Cmd+Click on functions. And that's on M1 machines.
But this is an example where usability impacts the author’s judgement. For me, UX testing is part of the initial step of code reviews, no technical review is necessary if we end up deciding to implement the story another way.
One of my biggest pet peeves is when people get into the habit of leaving code reviews with only suggestions like variable name changes. It's not that I'm resistant to changing things like that, but I have several colleagues who seem to only ever make suggestions like that, and it seems very performative to me.
I had a colleague at a previous job who would, without fail, leave review comments only on the implementation of the tests.
Normally I would appreciate that, since testing is usually neglected, but they were almost always minor nitpick-style suggestions or comments (which often revealed they didn't understand the business logic / code changes being made).
But as showing you proactively contributed during reviews was part of the framework for promotion, they needed to show they had made suggestions rather than ask for clarification on features etc.
> Most developers have a lot of features to add and bugs to write, time spent reviewing other people’s work does not add a lot to your bottom line.
Does not add a lot to your bottom line? This is the mindset of isolated individual contributors rather than members of a single team that work on a given feature of the same product. Isn't this precisely what various agile approaches (scrum or XP in particular) are fighting against?
> This is the mindset of isolated individual contributors rather than members of a single team that work on a given feature of the same product.
From experience, I think this is the more common scenario, unfortunately.
> Isn't this precisely what various agile approaches (scrum or XP in particular) are fighting against?
Maybe that was the ambition, but I don't see this borne out in practice. If anything, Agile as most commonly practiced exacerbates this problem. Of course highly paid Agile consultants will dig their heels in and say "You're not doing Agile!", but that doesn't really change the reality for the people on the ground.
And if you do do it (speaking from experience), people get offended that someone is looking at their code (as if existence in source control guaranteed an absence of bugs...). It is all the more surprising since simple inspection of never-reviewed code usually turns up a lot of low-hanging fruit in terms of bugs.
DangitBobby | 3 years ago:
I'd rather people check in their WIP than have it sitting on their machine. Let GH actions worry about whether the code is formatted correctly.
[1] https://abseil.io/resources/swe-book/html/ch19.html
benniomars | 3 years ago:
I guess something must've slipped through the code review.