
How to Make Your Code Reviewer Fall in Love with You

472 points| kiyanwang | 5 years ago |mtlynch.io

169 comments

[+] aequitas|5 years ago|reply
> The golden rule: value your reviewer’s time

This goes the other way around too. I've put copious amounts of time into some PRs to make them as complete and perfect as I could for the reviewer, only to get them turned down for style reasons.

If style is that important, make sure you document the rules or automate them (3. Automate the easy stuff), so contributors have a chance of figuring this out for themselves without having to go back and forth. If the style comes down to personal preference, you can apply rule 12 (Award all ties to your reviewer) in reverse too.

Nothing takes away my motivation for contributing like a reviewer who simply must find something in the code to complain about.

[+] terrortrain|5 years ago|reply
This list is awesome, I love everything except:

12. Award all ties to your reviewer

A true "tie" is really rare IMO. If it really is a tie, we should all agree that the person putting the work in should be able to decide about their own code. Not only does it cut down on discussion, it also cuts down on the work and communication needed to get the PR across the finish line.

Allowing ties to go to the reviewer can lead to all kinds of discussions about meaningless stuff, like:

> We should use forEach(...) instead of for loops, because that is how most of the other code is written.

In that example (from a real code review), the code is practically the same, just a different style. So now the coder has to go back, rewrite, retest, signal for a re-review, re-discuss, etc. All for something that is a tie and doesn't actually matter either way.

Alternatively, letting non-issues through without a duel every time saves a bunch of time and energy on everyone's part.

[+] mumblemumble|5 years ago|reply
This is one spot where I strongly prefer the standard from the Google guidelines[1]. "Reviewers should not require the author to polish every tiny piece of a CL before granting approval. Rather, the reviewer should balance out the need to make forward progress compared to the importance of the changes they are suggesting." Getting caught up in all the little details rarely serves this purpose; more likely it's being done in the service of something anti-productive like a personality conflict.

[1]: https://google.github.io/eng-practices/review/reviewer/stand...

[+] vonseel|5 years ago|reply
My last boss had a problem with this. He literally told me to just accept any suggested change to ease the review process, and that it wasn't worth debating any comments. We had very different opinions.

He always claimed he was "not nit-picky", but then he let reviews turn into exactly what you said: while loops versus for loops and what-not, and allowed the rest of the team to comment on low-value things like style rather than answering the important questions.

Why push back on using a built-in function that does the same thing as some little helper your team wrote 8 years ago, when your helper function doesn't even handle special cases? There's literally no point, other than someone having NIH syndrome. Why are the source files all 5,000 lines long and not broken into a logical project structure? Why is the back-end data-gathering and transformation code so entangled with the front-end templates (Django) that it's written as template filters and template functions?

It did not take me long to realize the development culture there was not aligned with my values. The lead developers came up in a different world and refused to see the positives and benefits of any modern development workflow involving a separate back-end API and front-end application.

[+] microtherion|5 years ago|reply
I was going to say something similar.

When I'm the reviewer, I try to decide for myself whether something is truly important or just a style choice. If it's the latter, I will comment on it, but then add "Your call." to make it clear that I expect ties to go to the author.

[+] BerislavLopac|5 years ago|reply
My preferred approach is: the ties are awarded to the individual whose name will show up in `git blame` after this change is merged.
[+] itake|5 years ago|reply
> agree that person putting work into it should be able to decide their own code.

I take the opposite approach. If my reviewer felt it was worth their time to put in the work to provide feedback, then I accept their opinion. 99.99% of PR feedback is not worth escalating into a ping-pong thread or a face-to-face meeting.

[+] thelean12|5 years ago|reply
I mean, consistent code style is just good practice, so maybe that's not a great example.
[+] wool_gather|5 years ago|reply
Agree; rule #1 of reviewing in my opinion is "trust your colleague whose code you are reviewing".
[+] jakevoytko|5 years ago|reply
This is an amazing list. I just want to point out that there's an implicit requirement in this list, which is that this is optimized for reviewers who are willing to give you quick feedback.

At a previous job, I worked on a team where we iterated towards the list that OP provides. It worked really well, and it worked because we had a culture of providing rapid reviews. The senior people on our team even thought that reviews were important enough to drop their own work. A lot about the team was suboptimal, but I was spoiled by getting such rapid feedback for my work.

We had some partner teams that really challenged this. One was halfway around the world, so you were guaranteed to only get one review round per day. If you wanted to make progress while working with them, you needed to either send them huge changes or send them a bunch of changes at once. But structural feedback in an early change would cause you to need to rewrite the later changes. We broke this cycle by having them eventually give local reviewers stronger conditional approval in the integration path, which fixed our particular issue but showed that there were cracks in how we optimized our code review flow.

The real trouble came from a local team, which was an overloaded platform team that we were built on. No matter what you did, they would only review your changes about once per week. Their feedback was super insightful, too, and they often caught very subtle misuses of their platform that would cause subtle user-visible bugs. You couldn't just skip their feedback. To try to manage their flow, they wanted teams to review pull requests locally before sending changes to them. So people on my team would write a change for this platform, which we'd review on our team. Our team would apply our rules and break the change up. We'd send the first one out. Then it would take a full month to get the entire thing in, to account for the back-and-forth. If you needed to get something substantial in, it could take months. My manager fought for months for them to acknowledge that this was a problem and fix it, and they did eventually dig out of it by setting SLAs and increasing capacity.

I don't want to take away from this list, but I want to point out that some of it is a function of your reviewer's incentives. If your reviewers have different incentives, then you need to find your own list that works within those bounds.

[+] mumblemumble|5 years ago|reply
I'm curious if a separate "design review" phase could have helped with your challenges with the team halfway around the world?

I worked at one company where, due to the nature of the business, you really couldn't reasonably expect other teams to do code review in a timely manner. But most inter-team code review challenges had to do with high-level design, interface protocols, stuff like that. And they had to do with tasks that we could usually see coming down the pipe weeks ahead of time. So we developed a culture of just finding a time that worked for everyone to call a meeting with one or two stakeholders from each affected team, in order to hash out those design decisions before starting on the actual implementation work. It would stay pretty high-level - if any concrete work product came out of it, it would be an ADR or two.

We did code review, too, of course. But agreeing on a plan ahead of time greatly reduced the risk of code review turning up problems that led to costly rework.

[+] p5a0u9l|5 years ago|reply
those are fascinating, occasionally horrifying cross team examples you give. i suppose i’m spoiled to work in a small company where reviews are always local.
[+] falcor84|5 years ago|reply
Great post. This is the part I liked best:

>When your reviewer makes a suggestion, and you each have roughly equal evidence to support your position, defer to your reviewer. Between the two of you, they have a better perspective on what it’s like to read this code fresh.

[+] modernerd|5 years ago|reply
Also:

Add a “How to Test” heading with steps to your PR templates.

This is not a replacement for automated tests or intended to shortcut a thorough review — it just saves a lot of time to give reviewers a list of setup and testing steps for anything they need to verify manually.
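As a sketch, such a template section might look like this (the heading and steps are illustrative, not from the article):

```
## How to Test

1. Check out this branch and run the app locally.
2. Steps to reach the changed behavior (pages to visit, commands to run).
3. What the reviewer should expect to see if it works.
```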

[+] mLuby|5 years ago|reply
Before/after screenshots help too, if there's a visual change.
[+] itsmemattchung|5 years ago|reply
Nice post. I agree with almost all of the author's points. However, I would add that "reviewing one's own code" is more difficult than it may appear. It's like reviewing your own prose: one develops blind spots. That being said, combing through your own CR for trivial mistakes is just due diligence.

Also, 100% on the change list description. I mentioned the same thing in a similar post[1] of mine.

[1] https://blog.mattchung.me/2020/11/09/3-tips-on-getting-eye-b...

[+] Cthulhu_|5 years ago|reply
While true, sitting down and reading your code in the review tool as opposed to your editor or git client does help; shifting your mindset from author to reviewer also helps.

Back when I was still in a project that did code reviews, it really helped for me.

Similarly, a tip: `git add -p` lets you view and review your changes one hunk at a time. You can also use that technique to split changes up into separate commits.

[+] mannykannot|5 years ago|reply
Naturally; if self-review were as effective as peer review, we wouldn't need to do the latter!

The implication here seems to be that one is only likely to spot 'trivial' mistakes in one's own work. That, unfortunately, tends to be a self-fulfilling prophecy: if you are focusing on issues like syntax, you are probably not giving much attention to more analytic issues, such as what implicit assumptions need to hold for the code to work. One thing that is not uncommon in my experience is for the early part of your work to be predicated on assumptions about what the as-yet-unwritten parts will need or do, and for some of these assumptions to turn out to be wrong. There is some hope that, on revisiting the early work with an analytic eye after completing the remainder, you will see this to be so.

There are a couple of things that helped me improve how well I review my own work, both mentioned in the article: take a break between writing and reviewing, and don't make excuses (to yourself or publicly) or otherwise minimize problems found by the reviewer (the same goes for anything you found in testing.)

[+] p5a0u9l|5 years ago|reply
i will often read through my changes in the browser UI in a similar manner as my reviewers. more often than not, that change of perspective will highlight something silly i missed.

i realize now that i could help my reviewers by first submitting a PR to myself into the branch to be merged.

[+] BurningFrog|5 years ago|reply
I've developed the habit of trying to look at my code (or UI) as if I had never seen it before. You can think of it as an empathy exercise.

One way is imagining you want to explain it to a coworker or new hire.

I find it enormously useful, and it has become second nature for me.

[+] radihuq|5 years ago|reply
How do you guys handle managing multiple merge/pull requests that depend on the previous one?

For example:

Request 1: install dependencies

Request 2: add scaffolding to application like adding a menu item, page layout, etc

Request 3: add feature A

Request 4: add feature B

My questions:

1. Do you add all 4 Request reviews at once, or do you wait until the previous one has been approved before adding the next one?

2. How do you communicate Request 2 is dependent on Request 1?

3. How do you enable the reviewer to review Request 4 without reviewing duplicate code (since Request 4 has merged Request 1, 2, and 3)?

[+] q3k|5 years ago|reply
4 CRs stacked on each other in Gerrit. It's the one killer feature of Gerrit that you cannot live without in other systems once you realize how powerful it is. Phabricator's Differential also does this, but it's comparatively extremely janky.

You literally just push four stacked commits to refs/for/master, and they end up as 4 separate Change Requests (equivalent to what others call Pull Requests, each with a separate review process), each one explicitly marked as being dependent on another, and showing a diff only against that particular base CR, as if it were already merged onto the target branch.

Practically this just means you push your current HEAD to refs/for/master, Gerrit will take care of automatically creating all CRs for you and print out their URLs. Done. Ready for review.

Then, to update any of them, just amend them locally (address review comments, rebase the stack on top of current master/main, ...), make sure they're still stacked on each other in Git (as four commits), and push them all again to the same endpoint. Gerrit will automatically update all affected CRs. It deduces the intended order and state of the CRs from the pushed Git history, mapping commits to existing CRs via the Change-Id present in each commit (generated automatically by Gerrit's local git hook).

None of the hacks that other sibling comments mention are required. It just works.

[+] eat_veggies|5 years ago|reply
This is a hard problem and I usually try to avoid it, but when I have to do it I usually have them branch off each other so that I can merge the PRs in reverse order once they're approved (4 -> 3 -> 2 -> 1 -> main). The diffs are all based off the previous branch in the "stack" so they aren't duplicated.

It's really easy to mess up and merge the wrong PR first, and it's a pain in the ass to rebase all the way back up if you make a change in Request 1. I'd love to know how other people are handling this.

[+] jasonpeacock|5 years ago|reply
Use branches. Branches are cheap in Git, and you can do PRs between branches so that the reviewers are only seeing the new changes in that branch.

In your example, I'd create a branch for R1, and publish a PR for R1 -> mainline when it's ready. Then I create a branch for R2 from the R1 branch, using R1 as the upstream.

When R2 is ready, I publish a PR for R2 -> R1, and create a branch for R3 to start working on it.

When there is feedback for R1, I switch to that branch and make those changes. I'll squash those changes back into the original commit(s) of R1, and update the PR.

Then you rebase R2 with the changes you made from R1, rebase R3 with the changes from R2, etc.

As long as the changes between each branch are not on the same lines of code, the rebases are painless and quick.

I name the branches with a key to support ordering, something like:

1_add_dependencies 2_add_scaffolding 3_feature_A 4_feature_B

This way they show up sorted when you list branches, and it's obvious what their dependency ordering is.
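A runnable sketch of that setup and the post-feedback ripple rebase (branch names from the comment above; the file contents are placeholders, chosen so the branches never touch the same lines):

```shell
# Build the stacked branches, then simulate review feedback on R1.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b main
git config user.email you@example.com
git config user.name "You"
git commit -q --allow-empty -m "mainline"
# Each branch starts from the previous one, so each PR's diff is only its own work
prev=main
for b in 1_add_dependencies 2_add_scaffolding 3_feature_A 4_feature_B; do
  git checkout -q "$prev" && git checkout -qb "$b"
  echo "$b" > "$b.txt" && git add . && git commit -qm "$b"
  prev=$b
done
# Feedback on R1: squash the fix back into the original commit...
old1=$(git rev-parse 1_add_dependencies)
git checkout -q 1_add_dependencies
echo "addressed review feedback" >> 1_add_dependencies.txt
git commit -qa --amend --no-edit
# ...then ripple the rebase down the stack, telling git where each old base was
old2=$(git rev-parse 2_add_scaffolding)
git rebase -q --onto 1_add_dependencies "$old1" 2_add_scaffolding
old3=$(git rev-parse 3_feature_A)
git rebase -q --onto 2_add_scaffolding "$old2" 3_feature_A
git rebase -q --onto 3_feature_A "$old3" 4_feature_B
git branch --list   # numeric prefixes sort in dependency order
```

Because the branches change disjoint files, each `rebase --onto` replays cleanly with no conflicts, matching the "painless and quick" claim above.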

[+] onorton|5 years ago|reply
To add to the sibling comments, you should avoid this scenario in the first place by hiding things behind feature flags that only apply in dev.

Request 1 can go in fine on its own and be reviewed separately.

Request 2 can be added behind a feature flag.

Requests 3 and 4 would remove the respective flags for the scaffolding.

[+] cbhl|5 years ago|reply
Ideally you want tooling that lets you toggle between the diff of 3 against 2 and the diff of 3 against HEAD, and that lets you specify that 2 is the parent of 3.

Some folks use "stacked diffs" or "stacked commits" to do this. Others use feature branches, or a patch series.

[+] leipert|5 years ago|reply
Create request-1 targeting the default branch, request-2 targeting request-1, etc. Once request-1 is merged, I'd change the target of request-2 to the default branch and merge the branch back. This is under the assumption that e.g. you can feature flag the feature away, so that e.g. the scaffolded app doesn't show in production yet. An alternative would be to have a "temporary" feature branch that you target until you feel it's ready to be merged into the default branch.

By (temporarily) targeting the previous branch, problem (3) is solved in most UIs.

The communication problem (2) is handled simply by saying: this adds X and is scoped to X; look at request-Y to see how feature Y builds on it.

> Do you add all 4 Request reviews at once, or do you wait until the previous one has been approved before adding the next one?

This is the tricky bit. In your example I wouldn't go so far as to create all the MRs; I'd create at most one MR ahead, because if the review turns up something that influences a later feature, you might end up doing work twice.

So maybe a timeline would look like this:

- (Skipping a dependency MR, because I feel one should introduce dependency when it is used and not beforehand)

- request-2: Scaffold application / feature, create an MR targeting default branch (request-2)

- request-3: Start working on feature A; once feature A is 80% done, create an MR targeting request-2

- Ask for review on request-2 with sharing a link to request-3

- Continue working on request-3 while you wait on request-2 review

- Incorporate request-2 review feedback, get it merged

- Rebase request-3 on merged request-2 (or merge default branch back), resolve conflicts and put it into review

- Start working on request-4 based on the current request-3 branch.

- Rinse and repeat.

There are obvious intricacies, e.g. whether legal needs to review dependencies, or your reviewers' response times. I generally try to work with the same set of reviewers over a sensible time frame, rather than getting new ones for every MR relating to the same feature set. So check beforehand whether they are available for the foreseeable future.

[+] mtlynch|5 years ago|reply
Author here. Happy to answer any questions or take any feedback about this post.
[+] sixhobbits|5 years ago|reply
Hey, meta comment since I found this post first through your meta post / tweets about trying to get this to the front page of HN.

It's great content, and exactly the kind of things I want to read here, but wondered if you had thoughts about how to solve the Goodhart's law[0] effect. e.g. in an ideal world for me (and I guess for you too), the best posts would more naturally rise to the top of places like HN that ostensibly act as quality filters.

I really like your posting advice in general, as most of it is about 'how to improve quality', but some of it (choosing what time to post, etc.) falls into 'how to improve my chances at this gambling-like system'.

I have mixed feelings about this. Having a higher barrier to entry does lead to higher quality posts (and in one possible future, only people who have read the 'how to get to front page of HN' guide can get to the front page of HN). On the other hand, I've seen the problem of 'should we invest this dollar into producing a better quality Widget or into telling people more effectively that our Widgets are the best?' kill entire organizations when the see-saw tilts too much to the latter.

Maybe I'm relating some things too broadly and it's actually nothing more than 'hey I post high quality content and do things to help it succeed', but wondering if you've thought about this tradeoff at all / if you're worried about getting deeper into the metrics rabbit hole that you've discussed in some of your other posts.

[0] https://en.wikipedia.org/wiki/Goodhart%27s_law

[+] foobarian|5 years ago|reply
Wanted to add one thing I noticed after I've been on the reviewing side for a while. Add a section in the description of the MR describing how the changes were tested. If this is detailed enough and covers all the edge cases I worry about, sometimes I don't even need to look at the code to be comfortable with approving the change. Conversely, it's frustrating when I get a MR with a blank or generic description without any notes on purpose, approach, how it was tested, etc.
[+] mannykannot|5 years ago|reply
This is an excellent article, making several points that I would not have thought of if I were to write on the same topic (point 11, for instance.)

I would, however, like to make a point about commenting, as there are at least three related cases where renaming identifiers and restructuring logic is not sufficient. The first is where one is doing something non-obvious on account of an unintuitive (and possibly undocumented) fact about either the problem domain or some remote part of the system that is infeasible to change; the second is some non-obvious corner case (a subtle issue of thread- or exception-safety, for example); and the third is where the algorithm itself is non-obvious (Jon Bentley wrote of a colleague who found that one of his algorithms had been 'fixed' and 'improved' to the point where it was no longer efficient; a suitable comment might have avoided that. And there's also the famous /* You are not expected to understand this */, for which no mere restructuring or renaming seems to offer an effective alternative.)

I am sure you are aware of these exceptions, but not everyone on the comment-deprecating bandwagon seems to be.

https://en.wikipedia.org/wiki/Lions%27_Commentary_on_UNIX_6t...

[+] airstrike|5 years ago|reply
First I just wanted to say I loved the doggo and the general tone of the post.

Also wanted to say that there's a lot of parallel between code reviews and "work review" in Investment Banking. Tooling aside, the underlying concept behind your tips is likely true for a variety of sectors, so I do wonder if one could dig a level deeper and find that essential truth in how to properly manage write-review workflows. Just some food for thought in case you want to jump into that rabbit hole.

[+] whoisjohnkid|5 years ago|reply
Love the concept of reviewing your own code first. I’ve been doing that for the longest now and usually find something I missed when reviewing my own code. I think I have taken this for granted and assumed everyone does this.
[+] alisonkisk|5 years ago|reply
"fall in love with you" is catchy but not the right interpersonal dynamic. Something like "love reviewing your code" is better.
[+] protastus|5 years ago|reply
This is good guidance to junior devs. I see these issues frequently from the reviewer side.

The title is super creepy though, so is the claim that "your reviewer will literally fall in love with you". I review code from my team every day, including code written by young engineers, male and female. I can't point them to this.

[+] lllr_finger|5 years ago|reply
This is great! Many of these things are slowly learned over time, but explicitly listing them out will be helpful when refreshing or teaching others.

The three that have been most helpful on our team are: #1 (review your own code first), #2 (write a changelist description), and #5/#6 (narrowly scope changes/separate functional/non-functional changes).

[+] stiray|5 years ago|reply
Well, from the code reviews I have seen in my life (ignoring a few people that have OCD), I have noticed that the trend is to:

- add whatever comment comes to mind that you know won't be argued about (like using spaces instead of tabs), to make your persona look better in management statistics

- not deal with actually understanding the code, but rather argue about syntax (don't malloc, use vector.reserve(), ...)

- not press accept until a few lines have been exchanged... again, boosting the statistics instead of contributing something meaningful

- ... do whatever boosts the reviewer's visibility, even going ballistic on irrelevant parts of the code (while no one reviews the 3rd-party dependencies where most of the evil is)

My proposal for code quality is different. Put together a team that communicates (a rare skill nowadays) and LET THEM communicate; offer them free beer after work, whatever interests them. Don't use enforced meetings like scrum etc.; they just show that you have failed at putting together a functional team. Make developers friends instead of rivals. Then avoid JIRA overhead, avoid management visibility... just listen to what people are saying (and preferably become their friend too), suppress any outside-world influences on your team, and fire anyone trying to climb the ladder over anyone else. Then you won't need code reviews. And you will always have a long list of improvements at any moment. Just send an email and ask.

Be capable of coding. Be really good at coding. Everyone will follow you after "shit hit the fan" and the team had to work over the weekend and you stayed with them and helped them fix the bugs and analyze the code. DON'T be a manager; be a helpful part of the team (if this were the industry standard, we could just fire 99.999% of managers).

[+] ensiferum|5 years ago|reply
- Anyone focusing on style issues in a code review is doing it wrong and really just wasting everyone's time.

- Self-documenting code isn't possible, because the code cannot possibly document what isn't there. In other words, sometimes the important details are in what the code isn't doing (such as not setting some state, or not making that function call under some circumstances, etc.).

[+] nullandvoid|5 years ago|reply
A couple from my pet peeve list -

1) ATOMIC COMMITS

Nothing worse than a single commit to address 10 separate comments. I don't want to have to go back and cross-reference my comments with your wall of changes.

2) DO NOT SELF RESOLVE A COMMENT

Wait for the reviewer to check out your solution. I want to learn from your fix, as well as make sure it's what I expected.

[+] mtlynch|5 years ago|reply
Agree on (2), but I respectfully disagree with (1).

It adds a lot of friction for the author to have to make 10 separate commits to address 10 comments. I also don't want to review 10 separate commits.

The majority of my comments are on separate chunks of code, so it's not that hard for me to see the resolutions to all of them in one big diff.

Edit: I'm assuming the commits will be squashed at the end of the review. If you're preserving the full commit history, then I can understand why you'd want the atomic commits for each note addressed.
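One way to get atomic review-response commits and a squashed final history (a sketch; the branch and file names are hypothetical) is git's fixup/autosquash machinery:

```shell
# Demo: address review comments as separate fixup commits,
# then fold them back into the original commit before merge.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b main
git config user.email you@example.com
git config user.name "You"
git commit -q --allow-empty -m "mainline"
git checkout -qb feature
echo base > app.txt && git add . && git commit -qm "feature: add app"
orig=$(git rev-parse HEAD)
# Each review comment gets its own commit, marked as a fixup of the original
echo fix1 >> app.txt && git add . && git commit -q --fixup "$orig"
echo fix2 >> app.txt && git add . && git commit -q --fixup "$orig"
# At the end of review, autosquash folds the fixups into the feature commit
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash main
git log --oneline main..HEAD   # a single clean commit again
```

The reviewer sees one small commit per note during the review, and the merged history still ends up with one commit per change.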

[+] colonwqbang|5 years ago|reply
Do you make an additional commit for each review comment? As opposed to just amending the old commit. Why?
[+] detaro|5 years ago|reply
tbh it sounds to me as if you're reviewing way too large changes if there are 10 comments requesting changes that aren't clearly attached to a small code location (and thus reviewable through a line- or section-specific comment, making the individual change easy to see)
[+] luord|5 years ago|reply
This is some great advice, although I'm not completely sold on number 8 and number 9.

For number eight: If your reviewer is being a douche, I don't think there's harm in pointing it out, respectfully. Nothing says that both sides can't learn from the review.

As for number nine. I'm not sure if adding comments is going to completely help; I'd rather refactor to make the code as explicit as possible, including the tests[1]. But more importantly, I don't think that sacrificing possibly more idiomatic code for the sake of beginners is a great trade-off. I think it's better to encourage people to get more familiar with the language and technology being used.

[1]: Which is usually my first stop for when I might not understand something in the code itself.

[+] BerislavLopac|5 years ago|reply
Number 13 inspires a question: I wonder if there is any code review tool that will tell you how many other (open) reviews a potential reviewer is already on when you're about to add them to yours. I haven't seen it in any tool, and it would be a nice addition...
[+] jsnell|5 years ago|reply
Great list, but I'd argue that point 6 is more often indicative of a tooling failure. Why is it hard to review something that changes both the formatting and the code? In the example it's because the tooling visualizes both kinds of changes in the same way. But there is no fundamental reason why that needs to be the case.

The same applies to refactoring. It seems perfectly doable for a code review tool to detect code motion and treat code that moved unchanged differently from the parts that were modified.

[+] hashkb|5 years ago|reply
Just complain to management that your reviewer is being mean and that you need to get this feature deployed right away for growth. Promise to monitor closely in production and go home.
[+] perlgeek|5 years ago|reply
A bit of a controversial one: give your reviewer one obvious (but minor) mistake to catch. For some reason, it feels really unsatisfying to a lot of people to conduct an extensive code review, and come up with exactly zero items to change.

This is likely more relevant when you present something to a business leader who has the feeling they have to contribute something, but it can also apply to inexperienced code reviewers.

[+] pwinnski|5 years ago|reply
I find myself nodding in agreement with points 6 and 7 (about mixed and large changelists), but unsure of how to implement the suggested fix.

Given that I don't control the tools used, how do I ensure that functional changes aren't lost amidst formatting changes? I would love for BitBucket to ignore formatting changes in my PRs, but alas, no.