I have found that teams who allow larger PRs without complaint can get a lot done a lot faster. Which is to say, 50-line PRs are not universally ideal; they sit somewhere on the spectrum of acceptable, but they come with definite tradeoffs. Reviewing and merging 10x60 line PRs is, in my experience, more time intensive than reviewing one 600 line PR.
The tests for any new functionality alone in our PRs well exceed 50 lines. Indeed we have 2-3x more test code than actual code. So should we be writing fewer tests? Merging the code first and later the tests? Merging the broken, codeless tests first and then the code?
It’s all ridiculous. Just make PRs that represent a unit of change, whatever that means to you. The unit of functionality is independent of lines of code. Sometimes that is 12 lines, sometimes it’s 800. Yeah large PRs are harder to understand but that’s why you have tests. Also XXL PRs aren’t usually happening every day. If they are, you have a very productive team and maybe you should count yourself lucky.
If it's because those PR's are getting reviewed more thoroughly, that's likely a great use of time.
If it's because the small PR's are getting bikeshedded, it's not.
If it's because of tooling, then the OP at graphite.dev has something to sell you. I'd be buying but we're a GitLab shop.
> Indeed we have 2-3x more test code than actual code. So should we be writing fewer tests? Merging the code first and later the tests? Merging the broken, codeless tests first and then the code?
I find that test code is rarely reviewed properly. If you want it reviewed properly, then create MR's that are just tests. You can create an MR with the change and happy path code, and then subsequent MR's with each of the rest of the tests.
If you're fine with tests not getting the same level of attention (and that's likely OK), then tests don't count as part of the 50 lines, IMO.
I couldn't disagree more. My current team has a history of large PRs and everything was a dumpster fire until we started really hammering down on PR size. Prod broke all the time, reverting code was a nightmare because of how big the "units" were, code review was not thorough, problems problems problems.
> large PRs are harder to understand but that’s why you have tests
I'd rather have comprehensible, well-reviewed code than test coverage (I'd rather have both, of course). We've got a mission critical module that is a crusty piece of junk that everyone is scared to change because of how much of a mess it is, even with its amazing test coverage (frankly the code is so bad that I assume the tests are just as bad anyway).
> XXL PRs aren’t usually happening every day. If they are, you have a very productive team and maybe you should count yourself lucky
I've never seen an XXL PR that improved productivity. They pile up tech debt, bugs, and down time. I don't care how fast the devs are moving if they bring prod down every week. That is absolutely unacceptable.
On a team of superstar, perfect developers that all understand the code base perfectly, I'd probably agree with you. If that's your situation then I envy you :) But I've never been on such a team.
Measuring the total size of the PR seems to me to be missing a dimension, namely commit size. What if a 600-line PR consists of 10 reviewable 60-line commits? Which, if not for GitHub’s broken review UI, is what I would hope the goal is for such a thing. Then you have a reasonable amount of code to look at per commit, all in the context of the larger overall change.
Refactorings and redesigns can easily span 1k lines even in small projects. Yes, you can almost always make them incremental, but the total line count increases and it requires careful planning beforehand. Too much fuss.
> Reviewing and merging 10x60 line PRs is, in my experience, more time intensive than reviewing one 600 line PR.
The article disagrees, and the article has data:
> Some folks might wonder if writing small PRs leads to less code output in total. We all want to be high velocity, but there are times when you need to be high volume. Consistently writing sub-20 line PRs will have a significant impact on your net coding ability - but interestingly, so will writing PRs greater than 100+ lines. The highest volume coders and repositories have a median change size of only 40-80 lines. I suspect this is because the volume of code is change-size * speed of changes. Too small of changes, and the faster merge time doesn't make up for it. Too large, and you start getting weighted down by slower review and merge cycles.
I agree that you shouldn't be dogmatic about PR size - there's an art to engineering as much as it's a science. But part of that art might also be recognizing that "unit change, whatever that means to you" is - as you suggest - a flexible concept. The takeaway for me here is, given what the data shows, when you end up with a 600-line unit it might be worthwhile to try to, if possible, find a way to break that unit down.
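The quoted throughput claim (volume = change-size * speed of changes) can be sketched as a toy model. Everything below is an assumption for illustration: the fixed per-PR overhead, the superlinear review-cost exponent, and the candidate sizes are invented, not the article's data; only the shape of the curve is the point.

```python
# Toy model of "volume = change-size * speed of changes".
# All constants are made up; only the qualitative shape matters.
def throughput(pr_size, fixed_overhead=230.0, review_rate=1.0, exponent=1.5):
    """Lines merged per unit of process time for PRs of a given size."""
    # Each PR pays a fixed merge/CI overhead, plus a review cost that
    # grows superlinearly with the size of the change.
    cost = fixed_overhead + review_rate * pr_size ** exponent
    return pr_size / cost

sizes = [10, 20, 40, 60, 80, 160, 320, 640]
best = max(sizes, key=throughput)
print(best)  # → 60: tiny PRs lose to overhead, huge ones to review cost
```

With these invented constants the optimum lands mid-range, echoing the article's 40-80 line sweet spot; different overheads would move it.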
I'm probably in the minority here, but personally I'd much rather review a 300 line PR instead of 6 50-line ones if the change is a single context.
I briefly worked with a hard line-count-limit for PRs and I thought it made everything much worse. It is fine for changes that are actually small, but when you need to go back and re-open 4, 5 merged PRs in different tabs to get the full context again, the time to review increases tenfold with tiny PRs that don't really make complete sense by themselves.
I have worked with co-workers that have the complete opposite preference, though, and anything over a set amount of lines wouldn't even be reviewed.
Interesting to see the numbers in the article, however. My anecdotal experience would make me guess the opposite: I feel like work slows to a crawl once PRs are artificially limited and broken up like that, especially in fast-moving companies and startups.
Line count (or count of any units in the PR) is completely meaningless.
I feel that in the last decade somehow there has been a loss of context of what is the purpose of version control.
Why do we bother with version control? Sure, there are many reasons.
But a primary reason is that if we discover behavior X turns out to cause regressions (or is otherwise a bad idea), we can easily revert it by reverting its commit.
That's why a single commit should contain a single behavior change, and the entirety of that change.
If the change is split among multiple commits, it'll be a pain to revert. If the change is contained in a commit that also changes other things, it'll be an even larger pain to revert.
So that's my rule: a commit is a single behavior change, all of it, and nothing else.
You can't express that in lines. It might be 1 line; it might be lots.
> but when you need to go back and re-open 4, 5 merged PRs in different tabs to get the full context again, the time to review increases tenfold with tiny PRs
You can't really say much about the first 4 PRs, because if you don't know how that code is going to be used, it's hard to give meaningful feedback. By PR 5, it's too late to give feedback on PRs 1-4.
I also have worked with people that prefer that but I never understood it. Code without context isn't reviewable imo
My bar: if it's more than 20 files, or it breaks the version control UI (which I've seen in Bitbucket some years ago), then your PR is a little too much. And if it's a major automated refactor of some sort, it should not contain any code changes beyond what's necessary to make that specific change work as intended; other changes should be their own discrete follow-up PRs.
I just try to be reasonable about PRs. Everyone will have their preference, I just think as long as you can go back and figure out what happened, why the change was made, and do it quickly enough, then you're golden.
Lines with actual code changes aren't a problem for me, it's the automated IDE indent/spacing/bracketing that really drives me up a wall and fatigues the hell out of me.
But this might only be a problem with those of us working on legacy codebases. The kind of PRs I see in OSS projects I could review 1000 lines at a time - it's so clean!
For my part, ever since I was a wee lad learning to program for the first time, I've been of the opinion that line counts themselves are a largely worthless metric for anything.
How many lines a piece of code takes up is too variable between languages, too variable within the same language, too dependent on irrelevancies like formatting, etc.
What actually matters is logical and conceptual complexity, not line counts.
How reliable is reversion as a metric? In my experience, the problem with 50 line PRs (that aren't just making a quick adjustment to something, but actually trying to build new functionality in sets of 50 line PRs) is that each change is so isolated, that when everything has to be combined into an actual working feature, the small changes aren't coordinated with each other. The PR that adds the DB migration doesn't add the appropriate columns needed to actually store the information sent from the frontend, etc. All the small things that can't be planned for in advance and are only discovered during implementation.
Those small PRs might not be reverted, but there are probably a fair number of 50 line PR follow ups that end up modifying the original implementation, and using up more total time and effort than if it was just a larger PR implementing the feature in its entirety.
I wonder how many of the large PRs they say take longer to merge and get less comments are actually feature branches into which the smaller PRs are merged, and which we expect to hang open for long periods of time with little interaction.
This analysis seems to take a set of data without much investigation into what the data actually represents and makes an immense effort to find some conclusions about the context-less data.
Reversion is an unreliable metric, because some PRs will be large enough that they NEVER get reverted, even if they should because it's easier to just fix the broken PR with another PR (about 50 lines should do it).
I’m reading this article on a break from reviewing a PR with more than 300 lines changed over 20+ files and I wonder if some people simply live in some sort of alternative wonderland universe.
The “ideals” are almost never a thing I see in my daily job, and I’ve worked for countless companies as a contractor.
I file this under the category of "needless rules created without sufficient context". The author's experience is not sufficient to generalize to the diversity of development environments where pull requests are utilized. Hopefully, OP is a good listener and can compromise when their strong opinions meet resistance from other developers at their organization.
For me personally, it shouldn't be about line count. A PR should have the scope of a single feature or piece of functionality; if it's a PR for a bug fix, 1 bug fix per PR. Whether it's 1 line, 2 lines, 50 lines, or 500 lines, I don't care. Each PR should address one single problem.
If I need to make a new thing - for example, introduce an end-to-end benchmark suite for a service - it may take something like 2000 lines. If I split that up into 50-line PRs, it's 40 PRs. I'll also need to write a pre-read document explaining the overall design, because PR descriptions for each change individually won't be enough context for a reviewer to understand.
If CI takes 15 minutes and I create PRs one by one, I'll need to wait a total of 10 hours just for CI, which is impossible, so I'll need to use stacked PRs. If I'm using stacked PRs, that means I do all the work up front, and then try to painstakingly split my final code into little 50-line chunks that build incrementally. I'm sure someone will say "oh just write incremental small commits to begin with", but that never seems viable to me: code and design evolve as I build, in ways that would make review of the first 50 lines I commit pointless, since none of them were retained in the final product. So not only do I have to build this feature, write 40 PR descriptions, and write an overall meta-PR-description for the change in aggregate, I also need to cut my finished code up into puzzle pieces for reviewers to mentally re-assemble.
Anyways, maybe for maintaining mostly-done features 50 lines is nice, but I’d much rather a new feature or larger change be a handful (1-3) of larger (500-2000 line) PRs possibly accompanied by pair-review/walkthrough discussion.
The correct PR is the one that solves the problem best. The correct PR's size has nothing to do with its line count. Tracking time-to-merge is not that meaningful, because merging code does not in itself say anything about the value of that code in production. That is the type of metric that may become a target for a scrum master or project manager despite being fairly divorced from the reality of moving the right code into production.
Lots of commenters seem to be arguing against straw men. The article doesn't say "you need to follow this exact rule on all your PRs"; it doesn't frame any of this as a hard limit or a "best practice", it's just saying that small PRs are easier to understand and review. This doesn't seem terribly controversial to me. The fact that the article actually includes data in support of this thesis is icing on the cake.
Now, yes, it can be a bit more work to split up big PRs into smaller standalone changes that can be merged in isolation, but it seems plausible that the benefits might outweigh the extra work in many cases. The whole idea of a version control system is to create a useful history of changes over time; making each individual changeset simpler and easier to understand seems like an admirable goal.
There's no context here to say what kind of change is being measured.
I can make hundreds of 2-line PRs fixing typos, reformatting text, or making whatever other low-stakes change is just as small.
So, yeah, obviously those low-stakes changes that don't really do much will be quick to review and merge. Does that make them any better? Why are we optimizing for review speed alone in the first place?
It's just silly statistics being thrown around. There's no correlation between lines of code or review speed and value added.
A larger PR that actually does something and requires more thorough review will inevitably take longer. Does that mean it would be better to have 10 separate PRs with no connection between them, different reviewers and 10x the feedback loops? This for me is missing the forest for the trees.
This honestly reminds me of way back when Agile/Scrum was the new hot thing, and my company implemented it where we ended up having 10x the stories taking twice the time.
Local optimization on these sorts of metrics is pretty much always going to lead to perverse outcomes.
Same rat-brain muscle that made me buy an extra 200,000 dollars of stuff I didn't really need for my home, while I spent 6 days at Home Depot fighting over which shade of mustard to paint the guest bathroom.
> 50-line code changes are reviewed and merged ~40% faster than 250-line changes.
Reviewing a 50-line code change in 60% of the time it takes to review a 250-line code change means the shorter review takes _three times_ as long per line.
Also splitting the 250-line change into 50-line parts will probably involve more total lines - to ensure it can be done piece-by-piece then fit together later.
So not only is it more time per line, it's also more lines.
Basically the only benefit is PRs per day, which is... not useful in itself.
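Taking the quoted "~40% faster" figure at face value, the per-line arithmetic can be checked directly (times normalized to the larger review; the numbers are only the article's headline figures, nothing measured):

```python
# Per-line review cost implied by "50-line changes are reviewed and
# merged ~40% faster than 250-line changes".
t_250 = 1.0          # time to review the 250-line change (normalized)
t_50 = 0.6 * t_250   # 40% faster for the 50-line change
ratio = (t_50 / 50) / (t_250 / 250)
print(round(ratio, 2))  # → 3.0: three times the review time per line
```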
There are two ways of looking at this, I think the author took the positive way, and you've taken the negative.
You could argue that shorter PRs are easier to review and therefore people are doing more review, and that should result in better changes (indeed the 15% fewer reverts supports this).
Or you could argue that shorter PRs are harder to review as they have less context, and therefore more time is taken but that a better output is not necessarily being reached.
I think both of these are entirely plausible, and the truth probably depends on other process factors.
To say it another way, you should not expect more code to be merged in the same amount of time as less code. 250 lines is 5x more code, so what is the purpose of a statistic showing 1/5 of the code gets merged faster? That's not a good or a bad thing.
Unless the metric is actually 40% faster per LOC but that's not specified.
My comments alone are 50 lines, but thanks. I'll never stop finding these "functions must be this long, classes must have that many properties, PRs should be so many lines" rules hilarious.
It cheers me up that people this naive exist.
Size is contextual to the thing the PR concerns. Maybe it's one char. Maybe it's 500 lines. Maybe it's 10k lines. Screw this.
Seems like a skill issue, honestly. I feel like many of these arbitrary rules end up being made to account for low skill developers to curb their behaviors which seems like the wrong way to tackle that problem.
If someone sends me a multi-thousand-line CR, I'll review it and give them impactful feedback. I may miss some details, but I guarantee I'd miss close to all of the details if it were split over 40 CRs. I wouldn't want to work with someone who has time to review like that, unless it's a very niche domain.
We merged a PR a few days ago that was 40,000 lines added and about 13k lines removed. Tbf, there was quite a bit of config and generated code in there, but the actual changes were at least half of that.
It was tested in smaller chunks in a feature branch, then the larger feature was merged. There was some testing done on the huge feature branch, but not everything that was present in the smaller PRs.
In our case, this new "feature" was a complete overhaul of our business logic engine. Over time, we had essentially developed 3 or 4 different API versions for various business tasks, with a lot of logic being duplicated between them. We want to bring it all under one umbrella, and after about 8 years, we think that we have a good idea of what sort of abstraction can work for the next 8 years.
Now that the engine is merged, we need to start the grueling task of actually moving logic from the old APIs into it and removing them from our codebase. We spent about 6 months (between 2 devs) writing the engine and I suspect it will take 2 years to move all of the logic into the new engine.
Idk where I was going with this comment anymore, but needless to say, I don't often see 50 line PRs at work!
This is one of those cases of where we're focusing on the wrong metric for a very generic problem.
There's too many facets to what makes a PR good that boiling it down to line count is silly.
E.g., small code changes are fine when the solution requires them: padding a 5-line change out to 50 lines would be a wildly bad idea. On the other hand, a 500-line PR that solves one problem, suddenly broken down into 10 PRs, can make the review far harder depending on the context.
We went through a relatively controversial process w.r.t. enforcing this at my last co. What it really came down to was intent vs. impact: the intent of the change was to enforce better development practices, especially across junior engineers who had shipped monster PRs containing multiple bugs that were extremely hard to catch ("the easiest way to ship a one line change is in a thousand line pr").
This worked reasonably well in terms of forcing junior engineers to reason harder about what an atomic change should look like, but it also impacted senior folks who were making atomic changes that encompassed broad areas of the codebase. We eventually reverted the change and relied on CODEOWNERS to let senior devs request changes when PRs got out of hand.
There's definitely nuance to enforcement of best practices across an entire organization; haven't really seen it done well although I've primarily worked at startups.
PR line count is such a silly metric. For example, my recent PR was +600/-900 lines. What can you deduce from that? Is it complex?
The changes were quite simple. The 600 common lines are about rearranging a big module into multiple smaller modules. The 300 removed lines could technically have been a separate PR, but they're cleanup related to the rearrangement.
If you change enough at once GitHub gives up and displays "infinity files changed". Likewise for lines. I've gotten such a PR merged in a finite amount of time. Thus, by GitHub math, infinitely large PRs merge faster per line changed than any other size.
PRs that are <50 lines are more often than not documentation, typos and small fixes in my experience.
Most real value add PRs end up bigger due to tests, documentation changes, and the actual change.
This for me is the typical claim that'll become an example of Goodhart's law when implemented. Random people that are not engineers will use this to claim "your PR should be 50 lines long" as some sort of absolute truth, when in reality the number of lines of code is absolutely irrelevant most of the time.
Context and details matter. Sometimes multiple small PRs are better. Sometimes a larger PR is better. It always depends. These absolutist claims that "this is better" are what lead to atrocious management practices that always end up getting summarized into a number, losing all meaning in the process.
Hadn't we already moved on from lines-of-code metrics decades ago? :facepalm:
For a large PR, I usually regroup my changes into smaller unit commits which can be reviewed commit by commit.
For an even larger (or more complex) PR, I'll make a set of stacked PRs which can be reviewed in pieces, each with a good description of what/why/how stuff is happening in that piece. Once all PRs have been approved, the lot of them can be deployed together.
I've at times had 3 PRs for a total of 3 line changes, because they were for a single table's schema migration picking new indexes and each had many factors to consider. The PR descriptions were of course much longer.
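The stacked-PR shape described above can be sketched in a throwaway repo; the branch names and commit messages are hypothetical, and each branch stands in for one reviewable slice that builds on the previous one:

```shell
set -e
cd "$(mktemp -d)"                       # throwaway repo
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "initial"
git checkout -qb step-1-schema          # slice 1: the schema migration
git commit -q --allow-empty -m "PR 1: add new table"
git checkout -qb step-2-api             # slice 2: targets step-1-schema
git commit -q --allow-empty -m "PR 2: endpoint using the new table"
git rev-list --count HEAD               # → 3: base plus two stacked slices
```

Each slice gets its own review; once all are approved, merging the tip lands the whole stack.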
This all depends on whether you believe in merging unfinished work to master, or whether you believe features should be completed and tested before merging. Personally I'm in the latter camp, but some of the big tech companies seem to have a style of merging commits representing partial progress toward a feature.
I personally mostly care about changes per commit. But Bitbucket is terrible about showing commit context to reviewers. And apparently GitHub is not any better.
Showing some context per commit makes all the difference when I decide to do a first commit which fixes all formatting in the file that I’m visiting (for the real/substantive change).
I'm seeing some pretty surprising claims in this thread about, "I prefer reviewing larger PRs," and, "Teams that make larger PRs go more quickly." That all seems like rubbish to me. I've seen data indicating smaller PRs having lower error rates and such and then we have this article with data recommending a 50 line PR size. I've *never* seen data show large PRs are better except for some developers saying, "I prefer it that way."
Anyways, I don't so much care about the size of a PR as much as I care about how understandable it is. That generally ends up being a few rules of thumb like:
1. If you change a common function name, change it everywhere but please don't do anything else in the same commit.
2. If you need to make a change in code you are depending on to accomplish a goal in your own code, try to break the change to the dependency into another commit. There is a little bit of an art here to both understand how to break things up and to not break them up too small.
3. Try to make your history meaningful rather than just having a bunch of random commits like "fix", "forgotten file", etc.
Yeah. Last team I joined I started doing reviews and immediately I was dealing with, like, 500 line PRs. They’d do things like add three completely new API endpoints (one of which is broken), or set up an entirely new service with all sorts of associated infrastructure.
At first I faced a lot of pushback from team members when I wanted them to break up the commits. So I caved. I just tried to give the feedback that I could give, and approved the PRs when they were “good enough, I guess”. They kept breaking in production and it would take a week to fix them. It was hard to tell what, in the commits, was broken. They were just so large.
And it’s not like these PRs were getting written quickly. People would spend two weeks or four weeks working on one PR, because they wanted to make one massive change that did everything and closed out a ticket.
At this point, I’ve made inroads and the team is sending out much smaller PRs, much more frequently. The PRs are getting reviewed, merged, and tested within an hour, and one developer can submit three or four PRs if they’re being productive that day.
I think the problem is that the cognitive load of making large changes is superlinear. A 2x larger change is not 2x as hard, but maybe 2.5x or 3x as hard. With large PRs, you spend way too much time just trying to understand what you are doing, as an author, or what you are looking at, as a reviewer.
I too am very surprised by what I'm reading. For contrast, here's a recent thread of people explaining how beneficial it is to work in small commits: https://news.ycombinator.com/item?id=39187487
I always try to care about the reviewers. Not too large PRs so they don't LGTM it instantly. Not too small PRs so they don't have to switch contexts multiple times a day or search through many smaller PRs to get context why new PR changes specific lines.
Had the same thought reading this, maybe it’s because I work in OSS and write tests with changes and fix tests for fixes that weren’t caught. That adds 20-40 lines very fast alone. Guessing that is not common.
This has never been, and probably never will be, anything more than a theory. In my experience this is only true if:
1. The "unit of work" is small enough to decompose into a 50-line PR. This is possible in larger codebases and less verbose languages. It can also enforce over-DRY code, which often leads to a lower universal ability to understand what is going on.
2. The team responsible for reviewing is responsive. I have never worked at a company where PR reviews happened quickly. Everyone is busy, everyone is overworked, and often times PRs are a chore that can be put off until later. Of course you can make some labyrinthine chain of small PRs all merging into one big PR but the problem still exists. I'd personally rather review a 300 line complete PR written well than 6 PRs that I have to constantly recall context on.
This advice follows the same logic as the asinine cyclomatic complexity rules. Yes, in theory they are sound. In practice, it's more of a flexible benchmark rather than a hard rule. But since someone inevitably writes YASL (yet another stupid linter) that enforces these types of things, an enterprising work-focused engineer will end up spending either more time fixing code to please the linter or more time exploiting loopholes in the rules.
Just write Good Code (TM) - whatever that looks like.
Once PRs start to exceed 10k lines of code, they seem to become slightly “safer.” I suspect this is because the extreme end of PR sizes includes refactors, which maybe start including less functionality change and therefore have a slightly lower chance of breaking. Alternatively, engineers may become progressively more reluctant to revert PRs after 10k lines because of emotional anchoring and merge conflicts.
Big massive refactoring PRs have killed many projects. Sometimes you can't, but if you can split a big refactoring PR into multiple commits, it usually works better. The first PR introduces a feature flag that lets you pick between the old and new code.
So if we fix a > that should have been a >=, should we add 49 lines of comments explaining it? Or perhaps the PR is doomed to never be ideal because it's short.
I think this might just highlight that simple changes are indeed simpler, rather than showing that slicing one big PR into several smaller ones is actually good.
> If what you care about is maximizing the amount of engagement and feedback on your code over the long run, you’re best off writing as small of PRs as possible.
> The highest volume coders and repositories have a median change size of only 40-80 lines.
This claim is rational under the Facebook axioms, so to speak.
It's times like these when I question the quality of HN. How can such shit get so many upvotes?
> We flew down weekly to meet with IBM, but they thought the way to measure software was the amount of code we wrote, when really the better the software, the fewer lines of code.
— Bill Gates
The response to seeing 10k lines of code in a PR is not to skim it, that is harming one's company. The response is to request the author to justify its size, and/or reject the PR and request that it be delivered in smaller sizes. If the author cannot, then the author needs their responsibilities reduced and to receive additional mentorship.
Skimming a PR is pure cognitive dissonance; don't do this and then complain when your app turns into a dumpster fire.
I would kill myself if I worked somewhere that actually did this type of analysis. I cannot wait for AI to replace me and I become homeless but free from a horrible choice to get involved with computers many years ago, damning me to these types of asinine bullshit conversations and posts. I wish I had made different choices.
Just use your brain to PR a feature or some sub-piece of a feature instead of this arbitrary shit.
Who wants to spend hours bothering with hardline process - you’re the programmer, you’re going to be the guy answering any questions on your pull; therefore you’re going to have to look at it yourself.
These types of arbitrary rules are why projects end up depending on deprecated packages. Nobody can do it so everyone whistles around a slowly smoldering dumpster fire.
Correlation is not causation. What if changes that are inherently more complex — which take longer to review, are more likely to have bugs, etc — typically require 250 lines or more?
danielovichdk|2 years ago
Sure.
How many times the code is touched after the PR has been merged is a better question.
Faster is not better. Better is not faster.
herpdyderp|2 years ago
> large PRs are harder to understand but that’s why you have tests
I'd rather have comprehensible, well-reviewed code than test coverage (I'd rather have both, of course). We've got a mission critical module that is a crusty piece of junk that everyone is scared to change because of how much of a mess it is, even with its amazing test coverage (frankly the code is so bad that I assume the tests are just as bad anyway).
> XXL PRs aren’t usually happening every day. If they are, you have a very productive team and maybe you should count yourself lucky
I've never seen an XXL PR that improved productivity. They pile up tech debt, bugs, and down time. I don't care how fast the devs are moving if they bring prod down every week. That is absolutely unacceptable.
On a team of superstar, perfect developers that all understand the code base perfectly, I'd probably agree with you. If that's your situation then I envy you :) But I've never been on such a team.
wk_end|2 years ago
> Reviewing and merging 10x60 line PRs is, in my experience, more time intensive than reviewing one 600 line PR.
The article disagrees, and the article has data:
> Some folks might wonder if writing small PRs leads to less code output in total. We all want to be high velocity, but there are times when you need to be high volume. Consistently writing sub-20 line PRs will have a significant impact on your net coding ability - but interestingly, so will writing PRs greater than 100+ lines. The highest volume coders and repositories have a median change size of only 40-80 lines. I suspect this is because the volume of code is change-size * speed of changes. Too small of changes, and the faster merge time doesn't make up for it. Too large, and you start getting weighted down by slower review and merge cycles.
I agree that you shouldn't be dogmatic about PR size - there's an art to engineering as much as it's a science. But part of that art might also be recognizing that "unit change, whatever that means to you" is - as you suggest - a flexible concept. The takeaway for me here is, given what the data shows, when you end up with a 600-line unit it might be worthwhile to look for a way, if possible, to break that unit down.
hugocbp|2 years ago
I briefly worked with a hard line-count-limit for PRs and I thought it made everything much worse. It is fine for changes that are actually small, but when you need to go back and re-open 4, 5 merged PRs in different tabs to get the full context again, the time to review increases tenfold with tiny PRs that don't really make complete sense by themselves.
I have worked with co-workers that have the complete opposite preference, though, and anything over a set amount of lines wouldn't even be reviewed.
Interesting to see the numbers in the article, however. My anecdotal experience would make me guess the opposite. I feel like work slows to a crawl once the PRs are artificially limited and broken up like that, especially in fast-moving companies and startups.
jjav|2 years ago
I feel that in the last decade somehow there has been a loss of context of what is the purpose of version control.
Why do we bother with version control? Sure, there are many reasons.
But a primary reason is that if we discover behavior X turns out to cause regressions (or is otherwise a bad idea), we can easily revert it by reverting its commit.
That's why a single commit should contain a single behavior change, and contain the entirety of that single behavior change.
If the change is split among multiple commits, it'll be a pain to revert it. If the change is contained within a commit that changes other things, it'll be an even larger pain to revert it.
So that's my rule. A commit is single behavior change, all of it, and nothing else.
You can't express that in lines. Might be 1 line might be lots.
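As a toy sketch of why this rule pays off (file and commit names hypothetical, assuming `git` is on PATH): when each commit is one complete behavior change, undoing one behavior is a single clean revert.

```python
# Toy demo: two independent behavior changes, one commit each, then a clean
# revert of just the first. Names are made up for illustration.
import os, subprocess, tempfile

def git(*args, cwd):
    subprocess.run(
        ["git", "-c", "user.name=demo", "-c", "user.email=demo@example.com", *args],
        cwd=cwd, check=True, capture_output=True,
    )

repo = tempfile.mkdtemp()
git("init", cwd=repo)

for name in ("feature_a", "feature_b"):  # one behavior per commit
    with open(os.path.join(repo, f"{name}.py"), "w") as f:
        f.write(f"# implements {name}\n")
    git("add", f"{name}.py", cwd=repo)
    git("commit", "-m", f"Add {name}", cwd=repo)

# feature_a turns out to cause a regression: revert only its commit.
git("revert", "--no-edit", "HEAD~1", cwd=repo)

print(sorted(f for f in os.listdir(repo) if f.endswith(".py")))  # → ['feature_b.py']
```

The revert applies cleanly precisely because nothing else ever touched feature_a's commit; had the two features shared a commit, the same operation would drag feature_b out with it.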
jvans|2 years ago
You can't really say much about the first 4 PRs because if you don't know how that code is going to be used, it's kind of hard to give meaningful feedback. By PR 5, it's too late to give feedback on PRs 1-4.
I also have worked with people that prefer that but I never understood it. Code without context isn't reviewable imo
giancarlostoro|2 years ago
I just try to be reasonable about PRs. Everyone will have their preference, I just think as long as you can go back and figure out what happened, why the change was made, and do it quickly enough, then you're golden.
degenerate|2 years ago
But this might only be a problem with those of us working on legacy codebases. The kind of PRs I see in OSS projects I could review 1000 lines at a time - it's so clean!
JohnFen|2 years ago
For my part, ever since I was a wee lad learning to program for the first time, I've been of the opinion that line counts themselves are a largely worthless metric for anything.
How many lines code takes up is too variable between languages, too variable within the same language, too dependent on irrelevancies like formatting, etc.
What actually matters is logical and conceptual complexity, not line counts.
ysavir|2 years ago
Those small PRs might not be reverted, but there are probably a fair number of 50 line PR follow ups that end up modifying the original implementation, and using up more total time and effort than if it was just a larger PR implementing the feature in its entirety.
I wonder how many of the large PRs they say take longer to merge and get less comments are actually feature branches into which the smaller PRs are merged, and which we expect to hang open for long periods of time with little interaction.
This analysis seems to take a set of data without much investigation into what the data actually represents and makes an immense effort to find some conclusions about the context-less data.
Zealotux|2 years ago
The “ideals” are almost never a thing I see in my daily job, and I’ve worked for countless companies as a contractor.
xbar|2 years ago
For me, the difficulty of performing an effective review feels like it grows slightly faster than n log n but slower than n·√n.
300 is a bit beyond my favorite limit of 200.
jitl|2 years ago
If CI takes 15 minutes and I create PRs one by one, I’ll need to wait a total of 10 hours just for CI, which is impossible, I’ll need to use stacked PRs. If I’m using stacked PRs, that means I do all the work up front, and then try to painstakingly split up my final code into little 50 line chunks that build incrementally. I’m sure someone will say “oh just write incremental small commits to begin with”, but that never seems viable to me. Code and design evolve as I build in ways that would make the review of the first 50 lines I commit on the project pointless since none were retained in the final product. So not only do I have to build this feature, and write 40 PR descriptions, and write an overall meta-PR-description for the change in aggregate, I also need to cut my finished code up into puzzle pieces for reviewers to mentally re-assemble.
Anyways, maybe for maintaining mostly-done features 50 lines is nice, but I’d much rather a new feature or larger change be a handful (1-3) of larger (500-2000 line) PRs possibly accompanied by pair-review/walkthrough discussion.
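For readers unfamiliar with the workflow: a stacked chain is just branches based on each other, so each PR's diff is only its own slice (branch names here are hypothetical; tools like Graphite automate the rebasing when an earlier branch changes).

```python
# Toy sketch of a stacked-PR branch chain: part-2 is branched off part-1,
# so a "PR" from part-2 against part-1 shows only part-2's own commits.
import subprocess, tempfile

def git(*args, cwd):
    return subprocess.run(
        ["git", "-c", "user.name=demo", "-c", "user.email=demo@example.com", *args],
        cwd=cwd, check=True, capture_output=True, text=True,
    ).stdout

repo = tempfile.mkdtemp()
git("init", cwd=repo)
git("commit", "--allow-empty", "-m", "base", cwd=repo)

for part in ("part-1", "part-2"):  # each branch is based on the previous one
    git("checkout", "-b", part, cwd=repo)
    git("commit", "--allow-empty", "-m", f"{part}: one reviewable slice", cwd=repo)

# The part-2 "PR" (diffed against part-1) contains only part-2's commits:
print(git("log", "--format=%s", "part-1..part-2", cwd=repo).strip())  # → part-2: one reviewable slice
```

The catch the comment describes is real: every amend to part-1 forces a rebase of everything stacked on top of it, which is exactly the bookkeeping the tooling vendors are selling relief from.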
NickM|2 years ago
Now, yes, it can be a bit more work to split up big PRs into smaller standalone changes that can be merged in isolation, but it seems plausible that the benefits might outweigh the extra work in many cases. The whole idea of a version control system is to create a useful history of changes over time; making each individual changeset simpler and easier to understand seems like an admirable goal.
Draiken|2 years ago
I can make hundreds of 2-line PRs fixing typos, reformatting text, or whatever other low-stakes changes are that small.
So, yeah, obviously those low-stake changes that don't really do much will be quick to review and merge. Does that make them any better? Why are we optimizing for review speed alone in the first place?
It's just silly statistics being thrown around. There's no correlation between lines of code or review speed and value added.
A larger PR that actually does something and requires more thorough review will inevitably take longer. Does that mean it would be better to have 10 separate PRs with no connection between them, different reviewers and 10x the feedback loops? This for me is missing the forest for the trees.
This honestly reminds me of way back when Agile/Scrum was the new hot thing, and my company implemented it where we ended up having 10x the stories taking twice the time.
Local optimization on these sorts of metrics is pretty much always going to lead to perverse outcomes.
digitalsushi|2 years ago
Same rat-brain muscle that made me buy an extra 200,000 dollars of stuff I didn't really need on my home, but I spent 6 days at Home Depot fighting over which color of mustard to paint the guest bathroom.
TomSwirly|2 years ago
Reviewing a 50-line code change in 60% of the time it takes to review a 250-line code change means the shorter code review takes _three times_ as long to review per line.
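The arithmetic behind that per-line figure, taking the stated numbers at face value (a 50-line change reviewed in 60% of the time of a 250-line change):

```python
# Relative review time per line, small PR vs. large PR.
small_per_line = 0.60 / 50    # 50 lines reviewed in 60% of the (normalized) time
large_per_line = 1.00 / 250   # 250 lines reviewed in 100% of the time
print(small_per_line / large_per_line)  # → 3.0, i.e. ~3x the review time per line
```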
HPsquared|2 years ago
So not only is it more time per line, it's also more lines.
Basically the only benefit is PRs per day, which is... not useful in itself.
danpalmer|2 years ago
You could argue that shorter PRs are easier to review and therefore people are doing more review, and that should result in better changes (indeed the 15% fewer reverts supports this).
Or you could argue that shorter PRs are harder to review as they have less context, and therefore more time is taken but that a better output is not necessarily being reached.
I think both of these are entirely plausible, and the truth probably depends on other process factors.
tmjdev|2 years ago
Unless the metric is actually 40% faster per LOC but that's not specified.
3cats-in-a-coat|2 years ago
It cheers me up that people so naive exist.
Size is contextual to the thing the PR regards. Maybe it's one char. Maybe it's 500 lines. Maybe it's 10k lines. Screw this.
yurishimo|2 years ago
It was tested in smaller chunks in a feature branch, then the larger feature was merged. There was some testing done on the huge feature branch, but not everything that was present in the smaller PRs.
In our case, this new "feature" was a complete overhaul of our business logic engine. Over time, we had essentially developed 3 or 4 different API versions for various business tasks, with a lot of logic duplicated between them. We wanted to bring it all under one umbrella, and after about 8 years, we think we have a good idea of what sort of abstraction can work for the next 8 years.
Now that the engine is merged, we need to start the grueling task of actually moving logic from the old APIs into it and removing them from our codebase. We spent about 6 months (between 2 devs) writing the engine and I suspect it will take 2 years to move all of the logic into the new engine.
Idk where I was going with this comment anymore, but needless to say, I don't often see 50 line PRs at work!
binsquare|2 years ago
There are too many facets to what makes a PR good; boiling it down to line count is silly.
E.g. small code changes are fine when that's all the solution requires - forcing a 5-line change into 50 lines would be a wildly bad idea. On the other hand, a 500-line PR solving one problem suddenly broken down into 10 PRs can make the review far harder, depending on the context.
the1024|2 years ago
This worked reasonably well in terms of forcing junior engineers to reason harder about what an atomic change should look like, but it also impacted senior folks who were making atomic changes that encompassed broad areas of the codebase. We eventually reverted the change and relied on CODEOWNERS to let senior devs request changes when PRs got out of hand.
There's definitely nuance to enforcement of best practices across an entire organization; haven't really seen it done well although I've primarily worked at startups.
bvrmn|2 years ago
The changes were quite simple. The 600 common lines are about rearranging a big module into multiple smaller modules. The 300 removed lines could technically be in a separate PR, but they're cleanup aftermath related to the rearrangement.
Draiken|2 years ago
PRs that are <50 lines are more often than not documentation, typos and small fixes in my experience.
Most real value add PRs end up bigger due to tests, documentation changes, and the actual change.
This for me is the typical claim that'll become an example of Goodhart's law when implemented. Random people that are not engineers will use this to claim "your PR should be 50 lines long" as some sort of absolute truth, when in reality the number of lines of code is absolutely irrelevant most of the time.
Context and details matter. Sometimes multiple small PRs are better. Sometimes a larger PR is better. It always depends. These absolutist claims that "this is better" are what lead to atrocious management practices that always end up getting summarized to a number, losing all meaning in the process.
Hadn't we already moved on from lines-of-code metrics decades ago? :facepalm:
karmakaze|2 years ago
For a large PR, I usually regroup my changes into smaller unit commits which can be reviewed commit by commit.
For an even larger (or more complex) PR, I'll make a set of stacked PRs which can be reviewed in pieces, each with a good description of what/why/how stuff is happening in that piece. Once all PRs have been approved, the lot of them can be deployed together.
I've at times had 3 PRs for a total of 3 line changes, because they were for a single table's schema migration picking new indexes and each had many factors to consider. The PR descriptions were of course much longer.
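A minimal sketch of that regrouping (file names hypothetical, assuming `git` is on PATH): stage and commit each concern separately so reviewers can step through commit by commit; `git add -p` does the same thing at hunk granularity when both concerns touch the same file.

```python
# One big uncommitted change split into two reviewable commits by staging
# each concern on its own. Names are made up for illustration.
import subprocess, tempfile
from pathlib import Path

def git(*args, cwd):
    return subprocess.run(
        ["git", "-c", "user.name=demo", "-c", "user.email=demo@example.com", *args],
        cwd=cwd, check=True, capture_output=True, text=True,
    ).stdout

repo = Path(tempfile.mkdtemp())
git("init", cwd=repo)

# Two unrelated concerns sitting together in the working tree:
(repo / "refactor.py").write_text("# extracted helper\n")
(repo / "feature.py").write_text("# new rate limiting\n")

git("add", "refactor.py", cwd=repo)
git("commit", "-m", "Extract helper module (no behavior change)", cwd=repo)
git("add", "feature.py", cwd=repo)
git("commit", "-m", "Add rate limiting", cwd=repo)

# History now reads as two self-contained, reviewable steps:
print(git("log", "--reverse", "--format=%s", cwd=repo).strip())
```

Each commit now has its own subject line explaining the what/why, which is exactly what makes a commit-by-commit review tractable.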
abroadwin|2 years ago
Not saying the claim is necessarily wrong, but it's also exactly what I'd expect given the source.
avgcorrection|2 years ago
Showing some context per commit makes all the difference when I decide to do a first commit which fixes all formatting in the file that I’m visiting (for the real/substantive change).
tyleo|2 years ago
Anyways, I don't so much care about the size of a PR as much as I care about how understandable it is. That generally ends up being a few rules of thumb like:
1. If you change a common function name, change it everywhere but please don't do anything else in the same commit.
2. If you need to make a change in code you are depending on to accomplish a goal in your own code, try to break the change to the dependency into another commit. There is a little bit of an art here to both understand how to break things up and to not break them up too small.
3. Try to make your history meaningful rather than just having a bunch of random commits like "fix", "forgotten file", etc.
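Rule 1 in practice, as a sketch (identifiers and file names hypothetical): the rename sweep is mechanical and lands as its own commit, so the diff is trivially verifiable even when it touches many files.

```python
# A rename-only commit: sweep every file for the old name, commit nothing else.
import subprocess, tempfile
from pathlib import Path

def git(*args, cwd):
    subprocess.run(
        ["git", "-c", "user.name=demo", "-c", "user.email=demo@example.com", *args],
        cwd=cwd, check=True, capture_output=True,
    )

repo = Path(tempfile.mkdtemp())
git("init", cwd=repo)
for name in ("client.py", "jobs.py"):  # two call sites of the old name
    (repo / name).write_text("result = fetch_data()\n")
git("add", "-A", cwd=repo)
git("commit", "-m", "baseline", cwd=repo)

# Mechanical sweep: rename fetch_data -> load_records everywhere, and only that.
for p in repo.glob("*.py"):
    p.write_text(p.read_text().replace("fetch_data", "load_records"))
git("commit", "-am", "Rename fetch_data to load_records (mechanical)", cwd=repo)

print((repo / "client.py").read_text().strip())  # → result = load_records()
```

Because the commit message promises "mechanical, no behavior change", a reviewer can skim it in seconds and save their attention for the substantive commits.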
klodolph|2 years ago
At first I faced a lot of pushback from team members when I wanted them to break up the commits. So I caved. I just tried to give the feedback that I could give, and approved the PRs when they were “good enough, I guess”. They kept breaking in production and it would take a week to fix them. It was hard to tell what, in the commits, was broken. They were just so large.
And it’s not like these PRs were getting written quickly. People would spend two weeks or four weeks working on one PR, because they wanted to make one massive change that did everything and closed out a ticket.
At this point, I’ve made inroads and the team is sending out much smaller PRs, much more frequently. The PRs are getting reviewed, merged, and tested within an hour, and one developer can submit three or four PRs if they’re being productive that day.
I think the problem is that the cognitive load of making large changes is superlinear. A 2x larger change is not 2x as hard, but maybe 2.5x or 3x as hard. With large PRs, you spend way too much time just trying to understand what you are doing, as an author, or what you are looking at, as a reviewer.
zer8k|2 years ago
1. The "unit of work" is small enough to decompose into a 50-line PR. This is possible in larger codebases and less verbose languages, but it can enforce over-DRY code, which often lowers everyone's ability to understand what is going on.
2. The team responsible for reviewing is responsive. I have never worked at a company where PR reviews happened quickly. Everyone is busy, everyone is overworked, and often times PRs are a chore that can be put off until later. Of course you can make some labyrinthine chain of small PRs all merging into one big PR but the problem still exists. I'd personally rather review a 300 line complete PR written well than 6 PRs that I have to constantly recall context on.
This advice follows the same logic as the asinine cyclomatic complexity rules. Yes, in theory they are sound. In practice, it's more of a flexible benchmark rather than a hard rule. But since someone inevitably writes YASL (yet another stupid linter) that enforces these types of things, an enterprising work-focused engineer will end up spending either more time fixing code to please the linter or more time exploiting loopholes in the rules.
Just write Good Code (TM) - whatever that looks like.
wredue|2 years ago
The same goes for functions
Please stop listening to this garbage advice.
xkcd-sucks|2 years ago
> The highest volume coders and repositories have a median change size of only 40-80 lines.
This claim is rational under the Facebook axioms, so to speak.
siva7|2 years ago
> “We flew down weekly to meet with IBM, but they thought the way to measure software was the amount of code we wrote, when really the better the software, the fewer lines of code.” — Bill Gates
stratigos|2 years ago
Skimming a PR is pure cognitive dissonance; don't do this and then complain when your app turns into a dumpster fire.
armatav|2 years ago
Who wants to spend hours bothering with hardline process - you’re the programmer, you’re going to be the guy answering any questions on your pull; therefore you’re going to have to look at it yourself.