A particular type of complexity is over-engineering, where developers have made the code more generic than
it needs to be, or added functionality that isn’t presently needed by the system. Reviewers should be
especially vigilant about over-engineering. Encourage developers to solve the problem they know needs to be
solved now, not the problem that the developer speculates might need to be solved in the future. The future
problem should be solved once it arrives and you can see its actual shape and requirements in the physical
universe.
Note how Google does NOT say "make sure the code is properly architected". Instead they say "make sure the code is not over-engineered"! At top companies like Google, projects rarely fail because there isn't enough architecture. Instead, projects end up costing 10x to 30x more because of unnecessary complexity. Over-engineering is a trap that very good developers fall into all too often. I am glad to see Google has cautioned against this, because I can now point my fellow developers to this when they are about to fall into the same trap!
> A particular type of complexity is over-engineering, where developers have made the code more generic than it needs to be, or added functionality that isn’t presently needed by the system. Reviewers should be especially vigilant about over-engineering. Encourage developers to solve the problem they know needs to be solved now, not the problem that the developer speculates might need to be solved in the future. The future problem should be solved once it arrives and you can see its actual shape and requirements in the physical universe.
Preface: I was also very glad to see this called out specifically, and think it's a great rule.
That said...
> Note how Google does NOT say "make sure the code is properly architected".
is not accurate. The very first paragraph on the same page is:
> Design
> The most important thing to cover in a review is the overall design of the CL. Do the interactions of various pieces of code in the CL make sense? Does this change belong in your codebase, or in a library? Does it integrate well with the rest of your system? Is now a good time to add this functionality?
Emphasis mine. But that sure sounds like they care about proper architecture. They just also care about avoiding over-engineering. They're not mutually exclusive.
It sounds like walking a fine line between over-engineering and falling into technical debt. If you design code that solves only the immediate need, it may need to be thrown away or extensively worked on / around when future needs come up. On the other hand, you can write code that solves future needs that never appear, and still fail to solve the actual needs that end up appearing.
For me, I would rather put a bit of additional effort up front gathering enough information about the direction of the project or usage of it to better formulate an attack plan so that it does actually become more future proof. But even that is subject to failure, so what do you do?
Does Google really practice what they preach though?
Just recently I was looking at Angular, Google's web frontend framework. Services, modules, directives, angular-specific markup. Coming from React, I find this grossly over-engineered.
Leave the code in a state where it's understandable.
i.e. don't over-specify - not just simple things like interfaces, but also things like premature decomposition that's exposed at the higher level for configurability.
While on the face of it having everything as small interacting functions means changing any one function is quite easy, changing overall behaviour might be very hard: it's hard to understand all the moving parts, and even worse, if the code is split wrongly for future needs, it's very hard to change.
I've been pushing the idea that if you're getting meaningful feedback on design (and over-design) in your PR reviews, then you've failed. That stuff should be shaken out long before you have working, complete code.
The problem comes in when you can anticipate a problem, and can also anticipate that you won't be given the resources to solve it later, even though you do have those resources now.
For example, we had a user who wanted a document produced in one specific format. They promised this was what they wanted and it wouldn't change. It changed 5 times as we were nearing release. So I over-engineered it to produce the document from a template that can easily be changed (compared to the release process for a code change). It ended up being used numerous times, despite the problem we were given clearly stating an unchanging template.
When you have very smart people, it may be difficult for those people to write simple, stupid code. Smart people like to write delightful, sophisticated, and elegant solutions that address problems and use cases the code will never actually encounter. A library may be an exception.
I used to be one of them. "Growing up" I realized that striving for simplicity and adding just complexity when necessary is the ultimate sophistication.
Good engineering values and a shared vision of what "doing a good job" means help out a lot. If we prioritize "development speed" and "quality from a user perspective", extrinsic complexity clearly becomes a burden.
Instead, if these values are not shared, the main motivation for smart people can easily become ego: showing off code that looks clever and supports many more use cases, but that most likely did not need to be written, wasting time and adding unnecessary complexity with the risk of introducing cognitive overload.
> where developers have made the code more generic than it needs to be, or added functionality that isn’t presently needed by the system.
I agree with this when it's internal interfaces. When you have public interfaces that you expect to have to support in a backwards-compatible manner for (ideally) years in the future, it's worth taking some time to think about how people might want to extend it in the future.
This only works at Google because most of the code is rewritten every few years [1]. The dynamics between tech debt and over-engineering in such a setting are not really representative of the rest of the industry.
This is great advice and isn't followed often enough, especially when reviewing code written by people new to an organization/team:
> If you see something nice in the CL, tell the developer, especially when they addressed one of your comments in a great way. Code reviews often just focus on mistakes, but they should offer encouragement and appreciation for good practices, as well. It’s sometimes even more valuable, in terms of mentoring, to tell a developer what they did right than to tell them what they did wrong.
I've seen cases where people got hundreds of comments (many of them minor, nitpicky) from more experienced developers and were discouraged by the sheer number of them. That most new developers naturally suffer from imposter syndrome is not helped at all by 100% critical code reviews.
I once worked on a team that specialized in very long litanies of code review comments... but they were able to bake this into their culture in fundamental ways such that it ended up being one of the most positive experiences in my software engineering career.
The basics of how they accomplished this was:
- The obvious- no personal / destructive attacks or insults, no cussing, no comments on any person's abilities.
- Having your code picked apart is a badge of honor, and you were expected / required to do the same to the most senior of your teammates when they submit code.
- Collective ownership- it's never "your" code, it's the team's code.
- Constant acknowledgement that our system is hard and complex, and it is really important that it works as expected / promised.
That last part was an ingredient I never found anywhere else. The opposite seems to be more common- every other team I've worked on tended to underestimate or even trivialize the difficulty and complexity of the systems they work on. By acknowledging that this work is hard for everyone involved, and that despite this, it must be well-done and function properly, the code review and the ensuing discussions became a very welcome and encouraged part of this process. It also helped defeat newcomers' imposter syndrome because this mentality was effective at making everyone feel like they had an important role to play, and that even the most senior folks often felt like a noob when they screwed up.
> I've seen cases where people got hundreds of comments (many of them minor, nitpicky) from more experienced developers and were discouraged by the sheer number of them. That most new developers naturally suffer from imposter syndrome is not helped at all by 100% critical code reviews.
This, combined with a large portion of developers lacking social empathy, having poor communication skills, and (unfortunately) desiring to appear to be the smartest person in the room, can lead to severe demotivation and stress for both experienced and new team members.
Want to know a great way to handle a subset of these? Good automated code linters. Google has a ton of formatters and linters that either auto-correct or suggest changes. To paraphrase a smarter Googler than me: people tend to just do what the bots say and don't care too much about it. The best way to impose your code ideas on others is via a linter.
Automated tools here have also drastically reduced the number of comments I get on CLs at Google, as people don't bicker about the little things as much anymore.
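As a toy illustration of that idea, a style preference can be encoded as a check a bot raises instead of a human (the rule names below are made up for this sketch):

```python
import re

# Hypothetical lint rules: each encodes a team style preference that
# would otherwise come up as a recurring review comment.
RULES = [
    ("trailing-whitespace", re.compile(r"[ \t]+$")),
    ("tab-indent", re.compile(r"^\t")),
]

def lint(source: str) -> list[str]:
    """Return 'lineno:rule-name' for every violated rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES:
            if pattern.search(line):
                findings.append(f"{lineno}:{name}")
    return findings

print(lint("x = 1 \n\tprint(x)\n"))  # flags line 1 and line 2
```

A real team would reach for an existing tool (clang-format, gofmt, Prettier, etc.) rather than rolling its own, but the effect is the same: the bot delivers the nit, so the humans don't have to.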
Something I've started doing that I picked up was prefixing my nitpicky comments with "Nit: ..." so that it's clear that certain comments are just minor suggestions, not that anything is necessarily wrong.
I'm usually okay with preemptively accepting code with only nit comments too just to signal that those comments are not too big of an issue (if at all).
Agreed. I find this extra important when working remotely or with other people who work remotely. If you’re not getting any face-time, critique has to be balanced with positivity. In my experience it’s true even with very experienced team members, and even with the very serious & stodgy people who avoid chit-chat and claim to not be bothered by feelings & opinions.
Why is it that people feel so discouraged by loads of review, especially early on? I always had a good bit of imposter syndrome early on, but never considered quitting. I always assumed that you have a lot to learn, that it’s expected you’re going to suck at some level.
> I've seen cases where people got hundreds of comments (many of them minor, nitpicky) from more experienced developers and were discouraged by the sheer number of them.
Unfortunately many people stink at communication. The best thing a reviewer can do during a code review is ask questions. A reviewer who comes in and just says 'this is wrong' misses an opportunity to either learn themselves or actually teach the submitter. A much better approach is to ask the submitter why they did something a certain way. What were their thoughts, and what did they see?
IME, asking questions leads to much better outcomes. Sometimes the submitter will just admit they were not thinking about anything and see their own errors. Upside, they found the issue themselves. Other times they'll explain some complexity or use case only they could see while deep in the code. Upside, I just learned something.
In general the best teachers ask questions. It doesn't put people's egos on the defensive, and tends to help people learn more effectively (the goal of any review IMO). I've seen this work on me in other scenarios like Jiu-Jitsu. My teacher will ask mid-roll or afterwards why I did something, no matter how stupid it was. Explaining my thought process helps me to better recognize the error, and I'm not immediately on the defensive. I'm then much more receptive when the teacher goes "that wasn't a terrible idea, but if you did this...".
Agreed. You don’t need fake praise but you should know whether something you did is just adequate or really good. You rarely have discussions about why something is good.
At Amazon, all code is reviewed. Comments are constructive and straight to the point. Standard courtesy applies, but anything more than that is left out. Unless there's something extremely interesting, overly cautious phrasing and "positive" comments are mostly noise. Also, people tend to specify when comments are nits.
I did a pretty reasonable chunk of C++ code once years ago. I was junior level. Wanted a good review.
I got 21 comments to remove blank lines and none about the code. Thanks. Really useful. This was the team of "experts" on the code base.
Then a principal engineer reviewed some other code and oh look, actual useful comments.
Nitpicking can be worse than useless. It obfuscates or ignores real issues in the code, and usually seems to be a symptom of the reviewer not being competent enough to actually review the code properly.
I'd add two things, from a decade of experience at Google:
Review code in this order: protocol buffers, unit tests, headers, implementation. It's common for a new employee to be an expert on C++ or Java or whatever languages but it's very uncommon to meet anyone who knows how to define a decent protocol message. The tests should give the reviewer a very clear idea of what's happening in the code (and this should be congruent with the description of the change), if not send back to author at this point. The headers should contain clear interfaces, types, and comments and should not contain anything that's not part of the API (when this is technically possible). Finally look in the CC file; at this point the reviewer should see things they were already expecting and no funny business.
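For readers outside Google, here is a sketch of what "a decent protocol message" tends to mean in practice, following commonly published proto3 style guidance; the message itself is invented for illustration:

```protobuf
syntax = "proto3";

package example.review;

// Hypothetical message illustrating typical review points:
// lower_snake_case field names, an UNSPECIFIED zero value for enums,
// and field numbers that are never reused once retired.
message ReviewComment {
  string comment_id = 1;

  enum Severity {
    SEVERITY_UNSPECIFIED = 0;  // proto3 enums require a zero default
    SEVERITY_NIT = 1;
    SEVERITY_BLOCKING = 2;
  }
  Severity severity = 2;

  string body = 3;

  reserved 4;  // a deleted field; reserve its number so it can't be reused
}
```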
Any claims about speed, efficiency, or performance whether in comments or during the review must have accompanying microbenchmarks or they should be deleted. Wrong ideas about software performance abound, and even correct beliefs become incorrect with time, in which case the microbenchmarks are critical to evaluating the continued value of the code.
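In Python terms (the advice is language-agnostic, so this is just an illustrative sketch), the smallest version of such a microbenchmark is a `timeit` comparison that attaches numbers to the claim:

```python
import timeit

def concat_plus(parts):
    # Builds the string with repeated +=
    out = ""
    for p in parts:
        out += p
    return out

def concat_join(parts):
    # The usually-recommended alternative
    return "".join(parts)

parts = ["x"] * 10_000
assert concat_plus(parts) == concat_join(parts)  # same result either way

# A comment saying "join is faster" can rot; a measurement can be re-run.
t_plus = timeit.timeit(lambda: concat_plus(parts), number=50)
t_join = timeit.timeit(lambda: concat_join(parts), number=50)
print(f"+=   : {t_plus:.4f}s")
print(f"join : {t_join:.4f}s")
```

The point is not this particular comparison but that the benchmark lives next to the claim, so it can be re-evaluated when compilers, runtimes, or the code itself change.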
When sending a changelist for review, always clear all automated warnings or errors before wasting the reviewers' time. Nobody wants to see code that doesn't build, breaks a bunch of tests, doesn't lint, etc.
Re: performance claims, well put. I often see coding decisions and review claims made on untested or outdated facts. For Java specifically three most common things I run across are:
1) The decision to use some kind of 3rd-party library that purportedly speeds up handling primitive types (Trove specifically. Almost never worth it, and it's extra painful because it doesn't plug in well with the rest of the standard library).
2) Using mutable objects to avoid the inefficiencies of allocation and copying with immutable patterns. Can lead to a brittle stateful design with hard to track down bugs.
3) Reusing serialization artifacts across the call stack to save copying/allocation, again like 2). Now you end up coupling the API interface to layers deep in the call stack making things harder to modify.
Also the best review advice I got and follow: be the first reviewer yourself! Looking at a nicely formatted diff really lets you see your code in a new light, and usually I end up doing several iterations before I run out of things to tweak. It catches things like forgotten debug log statements, commented-out code, or dev settings, as well as other easy but effective improvements.
> Any claims about speed, efficiency, or performance whether in comments or during the review must have accompanying microbenchmarks or they should be deleted.
That's way too strict. If I want to suggest moving a statement that's inside a loop to the outside, I don't want to waste my time writing microbenchmarks showing that it improves performance.
If the suggestion is complicated or non-obvious, and it's in a really critical part of the code, then sure, write a benchmark. Outside of that though, it's rarely a good use of developer time.
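For what it's worth, here is the kind of self-evident hoist being described, in illustrative Python; both versions return the same values, so the change needs no benchmark, only a glance:

```python
import math

def normalize_slow(values, scale):
    # math.sqrt(scale) and len(values) are loop-invariant,
    # yet get re-evaluated for every element
    return [v * math.sqrt(scale) / len(values) for v in values]

def normalize_fast(values, scale):
    # The invariant work is hoisted out of the loop
    factor = math.sqrt(scale) / len(values)
    return [v * factor for v in values]
```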
> When sending a changelist for review, always clear all automated warnings or errors before wasting the reviewers' time. Nobody wants to see code that doesn't build, breaks a bunch of tests, doesn't lint, etc.
Generally true, but not when prototyping. As a reviewer, I would like to see the general idea before spending time on cleaning up. If the grand design is wrong, time spent on cleaning up would be a waste.
Those are oddly specific and narrowly focused recommendations to add to a set of guidelines that are otherwise very general and apply to a broad range of circumstances.
> CL: Stands for “changelist,” which means one self-contained change that has been submitted to version control or which is undergoing code review. Other organizations often call this a “change” or a “patch.”
It's very interesting that so much of it is about the social aspects of engineering - a lot of it kind of reads like "don't be a jerk". In CS engineering classes, where presumably we would learn to become great engineers, I don't recall learning about any of this, and instead I remember the emphasis being on technical knowledge and accomplishment. I'd probably have been a better engineer in my early career if I'd understood how much the ability to exchange clear, constructive, and nonthreatening critique with peers actually mattered. It's reminiscent of how training in technical fields such as architecture and medicine often put the emphasis on technique over the service aspects of the job such as client interactions and bedside manner.
>In general, reviewers should favor approving a CL once it is in a state where it definitely improves the overall code health of the system being worked on, even if the CL isn’t perfect.
What is good design? What is good naming? What counts as over-engineering? How on earth can a group of people agree on all those points? Look around here on HN, at all these discussions: do we have consensus? No, and that is OK here, but it's not workable in a team doing code review.
There is only one way a group of people can have healthy code reviews: when they like each other enough to accept the things they don't agree on. But man, if one or more devs are not your best friend, you have a problem there. Because then it is not about who has the best code or ideas, it's about who fits best in the group, no?
But I totally agree code that comes into a code-base needs to be checked. And IMAO only by 1 developer, the LEAD who is responsible for the code-base. No democracy, no endless discussions, no 6 people spending 1 hour a day to do this.
Personally I prefer to do only gigs where there is a lead dev. I try to stay away from the popular self-managing teams. I've seen enough horror: a good developer having to leave the team because they were constantly frustrated, endless discussions about trivialities, and in the end it all became personal. Is it only me who sees this?
Here's my workflow, and it works very well to get everyone on the same level:
- Each night, I go over all the commits of the day and do a code review
- Each morning at 9:00, we go over all the comments on the commits, with everyone, talk about them and make sure everyone understands.
This allows me to explain more advanced or new concepts that one developer uses to make sure everyone understands them. It allows me to introduce improvements and explain why they are beneficial, and it allows me to signal when people don't follow the standards we impose; not only giving the person who did it wrong a refresher, but also making sure everyone is reminded of the way we should write code.
The feedback has been amazing and everyone loves it, because it helps everyone grow and they feel like they are allowed to focus on quality; coders tend to let standards slip under time pressure. When you focus on quality every day, it stays with them, because they know that if they don't deliver quality, it will come up in the daily review. So far, this has not hurt production speed at all, and quality has gone up a lot since we started doing this.
> if a reviewer makes it very difficult for any change to go in, then developers are disincentivized to make improvements in the future.
This is a problem I see far too often, but it's rarely talked about. Too often, engineers mistake "quality" code for "their" code. Code review turns from "what should we be doing here?" into "what would this reviewing engineer name this one unimportant variable here?"
There needs to be a happy medium between velocity and quality, and increasing velocity doesn’t necessarily mean decreasing quality.
I have never not seen code reviews end up being a massive social/cultural pain point for most people involved. Most people hate getting their code reviewed in depth. In situations where there isn't just one lead dev who is responsible for reviews, this leads to "Merge Request symbiosis", where two people uncritically approve each others requests so they don't have to deal with the third.
However, you can tell this was written by an engineer. The acronym CL is used about a hundred times. There is not a single page where the first usage of the acronym explains the full term.
in real companies that don't have G's resources, quality and velocity are tradeoffs, not things you can have both of
god bless them for trying though
in particular, the idea of having one or more reviewers doing multiple rounds in a day is pure fiction in strapped startups. Rejecting a code review for readability isn't always an option. Even getting a timely review from someone who understands this part of the codebase can be unrealistic -- sometimes there is no such person. Sometimes nobody is left in the company who understands the affected module and even the PR author is taking confession before merging it.
Love the phrase 'unless it's an emergency'. In resource-constrained cos everything is an emergency because to get any non-emergency work done requires emergency level urgency.
G's way of life requires an even power balance between eng & other silos. And also requires a baseline level of technical involvement by PMs which isn't present outside big tech cos. Most companies just aren't hiring PMs with a programming background. They could never justify the spend.
Major missing section in this is 'what shouldn't you review' -- various kinds of nits and out-of-scope changes that derail the process, waste expensive CI cycles, and make people take the rest of the day off from frustration.
When I receive a PR that has actual problems, I usually make a branch from the code in question, and submit a PR to that branch with tests highlighting the problem, or the beginnings of a refactor that I'm asking for. I often don't have time to fully flesh out the problems, but helping to write the software with the requestor makes for much faster and effective feedback than plain text comments.
A bit of useful flavor of how this looks in real life:
On my current team a typical code review of about 300 lines will involve about 3 to 4 reviewers, 20 comments, and maybe 10 changes, usually over the course of about 48 hours. Because this is a culture shock to people who join the company, we explicitly train every new team member ahead of time that code review suggestions are not a reflection on your code quality or your coding ability. Reviewers are expected to err on the side of raising a comment when in doubt, even (and especially) when the issue is about the approach being taken rather than the correctness of the code itself. Not every suggestion needs to be accepted, but every concern should be addressed. While not an explicit requirement, the expectation (and typical outcome) is that the committed code represents the combined expertise of the author and all the reviewers, and everyone is happy to go on the record as having been "responsible" for the code that got submitted.
A zero-question LGTM isn't an indication of good code, it's a red flag for an incomplete review.
> There is some Google-internal terminology used in some of these documents, which we clarify here for external readers:
>
> CL: Stands for “changelist,” which means one self-contained change that has been submitted to version control or which is undergoing code review. Other organizations often call this a “change” or a “patch.”
>
> LGTM: Means “Looks Good to Me.” It is what a code reviewer says when approving a CL.
Nit: under "Every Line", the term "scan" is used where "skim" would be a better term.
"Scan" can mean either "to investigate thoroughly by checking point by point and often repeatedly" or "to glance from point to point of often hastily, casually, or in search of a particular item". "Skim" leaves no room for doubt: "to read, study, or examine superficially and rapidly".
I love how the first-mentioned "hardest thing in computer science," naming things, is one of the shortest sections:
> Did the developer pick good names for everything? A good name is long enough to fully communicate what the item is or does, without being so long that it becomes hard to read.
They mention several times a "Nit:" or other prefix for nonblocking comments. I wonder if that is purposeful and/or reflects the experience of the company.
I also like a lot of the points made in the CL Author's Guide [0]. I've recently finished a similar document at work on how to form good Git commits. I generally like to be able to review a GitHub PR by stepping through a sequence of well-formed commits one by one, each of which sounds like what Google is describing in this document. Regardless of how it's sliced up, by commits or PRs or whatever, in general I just wish I could get more folks on board with this part of writing code.
[1] https://arxiv.org/pdf/1702.01715.pdf
[+] [-] CalChris|6 years ago|reply
[+] [-] aviraldg|6 years ago|reply
> If you see something nice in the CL, tell the developer, especially when they addressed one of your comments in a great way. Code reviews often just focus on mistakes, but they should offer encouragement and appreciation for good practices, as well. It’s sometimes even more valuable, in terms of mentoring, to tell a developer what they did right than to tell them what they did wrong.
I've seen cases where people got hundreds of comments (many of them minor, nitpicky) from more experienced developers and were discouraged by the sheer number of them. That most new developers naturally suffer from imposter syndrome is not helped at all by 100% critical code reviews.
[+] [-] ereyes01|6 years ago|reply
The basics of how they accomplished this was:
- The obvious- no personal / destructive attacks or insults, no cussing, no comments on any person's abilities.
- Having your code picked apart is a badge of honor, and you were expected / required to do the same to the most senior of your teammates when they submit code.
- Collective ownership- it's never "your" code, it's the team's code.
- Constant acknowledgement that our system is hard and complex, and it is really important that it works as expected / promised.
That last part was an ingredient I never found anywhere else. The opposite seems to be more common- every other team I've worked on tended to underestimate or even trivialize the difficulty and complexity of the systems they work on. By acknowledging that this work is hard for everyone involved, and that despite this, it must be well-done and function properly, the code review and the ensuing discussions became a very welcome and encouraged part of this process. It also helped defeat newcomers' imposter syndrome because this mentality was effective at making everyone feel like they had an important role to play, and that even the most senior folks often felt like a noob when they screwed up.
[+] [-] siquick|6 years ago|reply
This, combined with a large portion of developers lacking social empathy, poor communication skills, and (unfortunately) a desire to appear to the the smartest person in the room, can lead to severe demotivation and stress for both experienced and new team members.
[+] [-] kyrra|6 years ago|reply
Automated tools here have also drastically reduced the number of comments I get on CLs at Google, as people don't bicker about the little things as much anymore.
[+] [-] civicsquid|6 years ago|reply
I'm usually okay with preemptively accepting code with only nit comments too just to signal that those comments are not too big of an issue (if at all).
[+] [-] matwood|6 years ago|reply
Unfortunately many people stink at communication. The best thing a reviewer can do during a code review is ask questions. A reviewer who comes in and just says 'this is wrong' misses an opportunity to either learn themselves or actually teach the submitter. A much better approach is to ask the submitter why they did something a certain way. What were their thoughts, and what did they see?
IME, asking questions leads to much better outcomes. Sometimes the submitter will just admit they were not thinking about anything and see their own errors. Upside, they found the issue themselves. Other times they'll explain some complexity or use case only they could see while deep in the code. Upside, I just learned something.
In general, the best teachers ask questions. It doesn't put people's egos on the defensive, and tends to help people learn more effectively (the goal of any review, IMO). I've seen this work on me in other scenarios like Jiu-Jitsu. My teacher will ask mid-roll or afterwards why I did something, no matter how stupid it was. Explaining my thought process helps me to better recognize the error, and I'm not immediately on the defensive. I'm then much more receptive when the teacher goes "that wasn't a terrible idea, but if you did this...".
[+] [-] patthebunny|6 years ago|reply
I got 21 comments to remove blank lines and none about the code itself. Thanks. Really useful. This was from the team of "experts" on the code base.
Then a principal engineer reviewed some other code and, oh look, actual useful comments.
Nitpicking can be worse than useless: it obfuscates or ignores real issues in the code. It usually seems to be a symptom of the reviewer not being competent enough to actually review the code properly.
[+] [-] bt848|6 years ago|reply
Review code in this order: protocol buffers, unit tests, headers, implementation. It's common for a new employee to be an expert in C++ or Java or whatever language, but it's very uncommon to meet anyone who knows how to define a decent protocol message. The tests should give the reviewer a very clear idea of what's happening in the code (and this should be congruent with the description of the change); if not, send it back to the author at this point. The headers should contain clear interfaces, types, and comments, and should not contain anything that's not part of the API (when this is technically possible). Finally, look in the CC file; at this point the reviewer should see things they were already expecting and no funny business.
Any claims about speed, efficiency, or performance whether in comments or during the review must have accompanying microbenchmarks or they should be deleted. Wrong ideas about software performance abound, and even correct beliefs become incorrect with time, in which case the microbenchmarks are critical to evaluating the continued value of the code.
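To illustrate the kind of microbenchmark being asked for, here is a minimal sketch in Python using the standard `timeit` module (the functions and the string-building claim are invented examples, not from the comment above):

```python
import timeit

# Claim under review: "building the string with ''.join is faster than +=".
# Instead of leaving that assertion in a code comment, back it with a
# microbenchmark that can be re-run when the claim goes stale.

def concat_plus(n):
    s = ""
    for i in range(n):
        s += str(i)
    return s

def concat_join(n):
    return "".join(str(i) for i in range(n))

if __name__ == "__main__":
    for name, fn in [("+=", concat_plus), ("join", concat_join)]:
        # Time 100 runs of building a 10,000-piece string.
        t = timeit.timeit(lambda: fn(10_000), number=100)
        print(f"{name:>4}: {t:.3f}s")
```

The point is not the specific result but that the benchmark lives alongside the claim, so a future reviewer can re-evaluate it instead of trusting a comment.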
When sending a changelist for review, always clear all automated warnings or errors before wasting the reviewers' time. Nobody wants to see code that doesn't build, breaks a bunch of tests, doesn't lint, etc.
[+] [-] foobarian|6 years ago|reply
1) Decision to use some kind of 3rd party library that purportedly speeds up handling primitive types (Trove specifically. Almost never worth it, and it's extra painful because it doesn't plug in well into the rest of the standard library).
2) Using mutable objects to avoid the inefficiencies of allocation and copying with immutable patterns. Can lead to a brittle, stateful design with hard-to-track-down bugs.
3) Reusing serialization artifacts across the call stack to save copying/allocation, again like 2). Now you end up coupling the API interface to layers deep in the call stack, making things harder to modify.
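A toy sketch of pitfall 2), in Python rather than the Java the comment describes (the `Config`/`tune_for_batch` names are made up for illustration): sharing a mutable object to save copies means one layer's "optimization" silently changes state another layer still holds.

```python
from dataclasses import dataclass, replace

# Mutable sharing (the pattern warned against): the callee mutates the
# caller's object in place, so the "original" changes behind its back.
def tune_for_batch(cfg):
    cfg["retries"] = 10   # mutates the shared dict, not a copy
    return cfg

shared = {"retries": 3, "timeout_s": 1.0}
batch_cfg = tune_for_batch(shared)
assert shared["retries"] == 10  # surprise: the caller's config changed too

# Immutable alternative: copy-with-change keeps each layer's view stable,
# at the cost of an allocation.
@dataclass(frozen=True)
class Config:
    retries: int
    timeout_s: float

base = Config(retries=3, timeout_s=1.0)
batch = replace(base, retries=10)
assert base.retries == 3 and batch.retries == 10
```

The immutable version pays for the extra allocation with locality of reasoning, which is usually the right trade until a benchmark says otherwise.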
Also best review advice I got and follow: be the first reviewer yourself! Looking at a nicely formatted diff really lets you see your code in a new light, and usually I end up doing several iterations before I run out of things to tweak. Things like forgetting debug log statements or commented out code, or dev settings, as well as other easy but effective improvements.
[+] [-] polishTar|6 years ago|reply
That's way too strict. If I want to suggest moving a statement that's inside a loop to the outside, I don't want to waste my time writing microbenchmarks showing that it improves performance.
If the suggestion is complicated or non-obvious, and it's in a really critical part of the code, then sure, write a benchmark. Outside of that though, it's rarely a good use of developer time.
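For what it's worth, the loop example above is the obvious kind: hoisting loop-invariant work needs no benchmark to justify. A hypothetical sketch in Python (all names invented):

```python
import re

# Before: the pattern is recompiled on every iteration,
# even though it never changes inside the loop.
def count_matches_slow(words):
    n = 0
    for w in words:
        pattern = re.compile(r"a.*a")  # loop-invariant work inside the loop
        if pattern.search(w):
            n += 1
    return n

# After: the invariant statement is hoisted out of the loop.
# Behavior is identical; the reviewer can suggest this without a benchmark.
def count_matches_fast(words):
    pattern = re.compile(r"a.*a")
    n = 0
    for w in words:
        if pattern.search(w):
            n += 1
    return n

assert count_matches_slow(["alpha", "beta"]) == count_matches_fast(["alpha", "beta"])
```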
[+] [-] teacpde|6 years ago|reply
Generally true, but not when prototyping. As a reviewer, I would like to see the general idea before spending time on cleaning up. If the grand design is wrong, time spent on cleaning up would be a waste.
[+] [-] atsaloli|6 years ago|reply
CL: Stands for “changelist,” which means one self-contained change that has been submitted to version control or which is undergoing code review. Other organizations often call this a “change” or a “patch.”
[+] [-] stackzero|6 years ago|reply
This is a great rule of thumb
[+] [-] aledalgrande|6 years ago|reply
> It is the end of the day on a Friday and it would just be great to get this CL in before the developer leaves for the weekend.
I laughed out loud because it reminded me of so many times I have seen it happen, and then someone had to fix it over the weekend.
Who shares the same experience?
[+] [-] siempreb|6 years ago|reply
There is only one way a group of people can have healthy code reviews: when they like each other enough to accept what they don't agree on. But man, if one or more devs are not your best friend, you have a problem there. Because it is not about who has the best code or ideas, it's about who fits best in the group, no?
But I totally agree that code coming into a code base needs to be checked. And IMAO only by one developer: the LEAD, who is responsible for the code base. No democracy, no endless discussions, no six people spending an hour a day on this.
Personally I prefer to do only gigs where there is a lead dev. I try to stay away from the popular self-managing teams. I've seen enough horror, where a good developer had to leave the team because they were constantly frustrated, there were endless discussions about futilities, and in the end it all became personal. Is it only me who sees this?
[+] [-] NKCSS|6 years ago|reply
- Each night, I go over all the commits of the day and do a code review.
- Each morning at 9:00, we go over all the comments on the commits with everyone, talk about them, and make sure everyone understands.
This allows me to explain more advanced or new concepts that one developer uses to make sure everyone understands. It allows me to introduce improvements and explain why they are beneficial, and it allows me to signal when people don't follow the standards we impose; giving not only the person who did it wrong a refresher, but it makes sure everyone is reminded of the way we should write code, etc.
The feedback has been amazing and everyone loves it, because it helps everyone grow and they feel like they are allowed to focus on quality; under time pressure, coders tend to let standards slip. When you focus on quality every day, it stays with them, because they know that if they don't deliver quality, it will come up in the daily review. So far this has not hurt production speed at all, and quality has gone up a lot since we started doing this.
[+] [-] shakezula|6 years ago|reply
This is a problem I see far too often but it’s rarely talked about. Too often, engineers mistake “quality” code for “their” code. Code review turns from “what should we be doing here?” into “what would this reviewing engineer name this one unimportant variable?”
There needs to be a happy medium between velocity and quality, and increasing velocity doesn’t necessarily mean decreasing quality.
[+] [-] esolyt|6 years ago|reply
However, you can tell this was written by an engineer. The acronym CL is used about a hundred times. There is not a single page where the first usage of the acronym explains the full term.
[+] [-] awinter-py|6 years ago|reply
god bless them for trying though
in particular, the idea of having one or more reviewers doing multiple rounds in a day is pure fiction in strapped startups. Rejecting a code review for readability isn't always an option. Even getting a timely review from someone who understands this part of the codebase can be unrealistic -- sometimes there is no such person. Sometimes nobody is left in the company who understands the affected module and even the PR author is taking confession before merging it.
Love the phrase 'unless it's an emergency'. In resource-constrained cos everything is an emergency, because getting any non-emergency work done requires emergency-level urgency.
G's way of life requires an even power balance between eng & other silos. And also requires a baseline level of technical involvement by PMs which isn't present outside big tech cos. Most companies just aren't hiring PMs with a programming background. They could never justify the spend.
Major missing section in this is 'what shouldn't you review' -- various kinds of nits and out-of-scope changes that derail the process, waste expensive CI cycles, and make people take the rest of the day off from frustration.
[+] [-] tylerl|6 years ago|reply
On my current team a typical code review of about 300 lines will involve about 3 to 4 reviewers, 20 comments, and maybe 10 changes, usually over the course of about 48 hours. Because this is a culture shock to people who join the company, we explicitly train every new team member ahead of time that code review suggestions are not a reflection on your code quality or your coding ability. Reviewers are expected to err on the side of raising a comment when in doubt, even (and especially) when the issue is about the approach being taken rather than the correctness of the code itself. Not every suggestion needs to be accepted, but every concern should be addressed. While not an explicit requirement, the expectation (and typical outcome) is that the committed code represents the combined expertise of the author and all the reviewers, and everyone is happy to go on the record as having been "responsible" for the code that got submitted.
A zero-question LGTM isn't an indication of good code, it's a red flag for an incomplete review.
[+] [-] Noxmiles|6 years ago|reply
There is some Google-internal terminology used in some of these documents, which we clarify here for external readers:
CL: Stands for “changelist,” which means one self-contained change that has been submitted to version control or which is undergoing code review. Other organizations often call this a “change” or a “patch.”
LGTM: Means “Looks Good to Me.” It is what a code reviewer says when approving a CL.
quote from: https://google.github.io/eng-practices/#terminology
[+] [-] jdhzzz|6 years ago|reply
Scan can mean either "to investigate thoroughly by checking point by point and often repeatedly" or "to glance from point to point of often hastily, casually, or in search of a particular item". Skim leaves no room for doubt: "to read, study, or examine superficially and rapidly"
[+] [-] sixstringtheory|6 years ago|reply
> Did the developer pick good names for everything? A good name is long enough to fully communicate what the item is or does, without being so long that it becomes hard to read.
They mention several times a "Nit:" or other prefix for nonblocking comments. I wonder if that is purposeful and/or reflects the experience of the company.
I also like a lot of the points made in the CL Author's Guide [0]. I've recently finished a similar document at work on how to form good Git commits. I generally like to be able to review a GitHub PR by stepping through a sequence of well-formed commits one by one, each of which sounds like what Google is describing in this document. Regardless of how it's sliced up, by commits or PRs or whatever, in general I just wish I could get more folks on board with this part of writing code.
[0]: https://google.github.io/eng-practices/review/developer/