I've usually kept a rule that you should avoid stacking, and if you must, stack only one level deep. Needing to stack in the first place typically suggests that PRs aren't being merged fast enough. In my experience, stacking usually leads to merge-conflict hell as changes and PR suggestions get merged underneath you.
arxanas|3 years ago
Stacking PRs is like pipelining for CPUs. It's efficient under the hypothesis that there aren't too many invalidations/stalls. The linked tooling `git-branchless` (I'm the author) is aimed at reducing the impact of such an invalidation by significantly improving the conflict resolution workflows.
isoos|3 years ago
Depends on the team and the product. My personal approach is to have 2-3 larger things to work on, so while I wait for reviews on one, I can switch and work on another. This usually means a minimum of 1-2 weeks of planned work, sometimes even more, without being blocked on reviews. If everything is blocked, then it's time for some code-health cleanup: refactoring, fixing those TODOs that are just lingering around, and also nudging the reviewers...
JD557|3 years ago
Mostly because stacked PRs are usually not ready for review until the base PR is reviewed, and are probably going to get rebased and require some refactors. I've had cases where the base PR had so many changes that I just started over on a new branch.
I know that GitHub now has draft PRs, but I still think that (unless you really want someone to take a look at the draft while the base PR is not merged), it might be better to not make people waste time looking at code that might suffer heavy changes.
smaudet|3 years ago
You can also change the definition of done - something I try to do when a task gets too large, spin off new sub-tasks, etc. The first feature doesn't quite work right, but the merge differential is usually smaller for tweaks or bug-fixes than it is for major initial features.
You can spend a lot of time planning how best to break up your work, and sometimes you may be able to do that successfully, but often you are constrained not by git but by your tracking tool (Jira, Trac), which imposes somewhat artificial sizes on code tasks....
And really, that's the whole thing stacked PRs or commits are trying to solve: minimizing the merge differential.
ignormies|3 years ago
1. Deprecate old feature + add opt-in support for replacement
2. Make replacement default with opt-out for the old pattern
3. Completely remove old feature and the opt-out functionality
I can write the entire stack of diffs upfront, have them individually reviewed but still linked, and ensure they're merged in the correct order.
The bottleneck for merging isn't in the review process, but in the deprecation. It wouldn't make sense to land all three of these changes as fast as review/merge would allow; that would skip the deprecation period.
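The three-stage deprecation stack above can be sketched with a feature flag. A minimal illustration; all the names here (`serialize`, `serialize_v2`, `USE_NEW_SERIALIZER`) are hypothetical, not from the comment:

```python
import warnings

# Stage 1 of the stack: old behavior is the default, the replacement
# is opt-in, and the old path warns.
USE_NEW_SERIALIZER = False   # stage 2's diff flips this to True;
                             # stage 3's diff deletes it entirely

def serialize_v1(data):
    return repr(data)            # old behavior, slated for removal

def serialize_v2(data):
    return str(sorted(data))     # replacement behavior

def serialize(data, use_new=None):
    """Serialize `data`; the new code path is opt-in during deprecation."""
    if use_new is None:
        use_new = USE_NEW_SERIALIZER
    if use_new:
        return serialize_v2(data)
    warnings.warn("the v1 serializer is deprecated; opt in to v2",
                  DeprecationWarning, stacklevel=2)
    return serialize_v1(data)
```

Each diff in the stack stays tiny and individually reviewable, but they only make sense landed in order, with a deprecation period between stages 1 and 2.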
wpietri|3 years ago
kqr|3 years ago
wpietri|3 years ago
With software it can be harder to notice because you don't have to make room for it. But in essence it's the same deal; it's anything we have paid to create that isn't yet delivering value to the people we serve. Plans, design docs, and any unshipped code.
There are a lot of reasons to avoid inventory, but a big one is that until something is in use, we haven't closed a feedback loop. Software inventory embodies untested hypotheses. E.g., a product manager thinks users will use X. A designer thinks Y will improve an interface for new users. A developer thinks Z will make for cleaner, faster code.
Both large pull requests and stacked pull requests increase inventory. In the case of incorrect hypotheses, they also increase rework. I could believe that for a well-performing team stacked PRs are better than equally-sized single big PRs, in that they could reduce inventory and cycle time. But like you, I think I'd rather just go for frequent, unstacked, small PRs.
[1] e.g. https://kanbanize.com/lean-management/value-waste/7-wastes-o...
vlovich123|3 years ago
Small PRs don't need it, of course, but complex features benefit from shaking things out earlier. Commits over ~100 lines are really hard to review (there's plenty of anecdotal and empirical research on this). If you're not reviewing small commit by small commit, reviews easily miss things. A single 800-line PR adds review time even when you go commit by commit. And if you can merge the non-objectionable stuff early, the author gets a sense of forward progress and fewer merge conflicts (e.g., someone lands a refactor before your simple change, versus your simple change landing first and the conflicts becoming the refactorer's problem, where they belong).
bosie|3 years ago
I guess this is not true anymore post-COVID? Pretty sure a lot of companies would kill to have an inventory of their raw materials right now...
chrisseaton|3 years ago
Unless PRs are merged instantly, I'm always going to be waiting after one PR is opened and before I can start on the next, unless I stack, aren't I?
Is your definition of 'fast enough' instantly? If not, how does this work?
majormajor|3 years ago
* you're working on the same part of the code
* you aren't working on the same part of the code
The second scenario is common but also trivial here, since you can just have parallel branches going, so I'm going to assume the first: working on something that builds on top of what you just put up for review.
Let's say the review is done in 2 hours. If you're already done with the followup, IMO you may be erring too far on the side of "small PRs." If you aren't, you just rebase on top of whatever changes had to be made, if any, to the first one, once it lands on the primary branch.
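The "rebase the follow-up on whatever landed" step can be sketched in a throwaway repo (a hedged illustration; branch names are made up, and `git init -b` assumes git >= 2.28):

```shell
set -eu
repo=$(mktemp -d) && cd "$repo"
export GIT_AUTHOR_NAME=dev GIT_AUTHOR_EMAIL=dev@example.com
export GIT_COMMITTER_NAME=dev GIT_COMMITTER_EMAIL=dev@example.com
git init -q -b main
echo base > file.txt && git add file.txt && git commit -qm "base"

git checkout -qb pr-1                       # first PR, under review
echo one >> file.txt && git commit -qam "pr-1: first change"

git checkout -qb pr-2                       # follow-up stacked on pr-1
echo two > other.txt && git add other.txt && git commit -qm "pr-2: follow-up"

git checkout -q main                        # review tweaks pr-1; the amended
echo one-reviewed >> file.txt               # version is what lands on main
git commit -qam "pr-1 as merged"

# Replay only pr-2's own commits onto the merged result
git rebase -q --onto main pr-1 pr-2
```

After the `--onto` rebase, `pr-2` contains just its own commit, based directly on the reviewed code that actually landed.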
If, on the other hand, the review isn't done for 2 days... then that's a PR turnaround time problem for sure.
But I strongly disagree with the people saying "multiple dependent PRs suggests the work wasn't split up properly". There's nothing worse than a mega-PR of thousands of lines for the sake of doing a "single feature" all in one shot, versus occasionally having to pause and rebase after review. It's even more painful when that mega-PR requires fundamental changes that could have been caught earlier but now take longer to adjust, and then it stays open for a while and likely accumulates merge conflicts as a result.
perrygeo|3 years ago
I've seen no solutions, only tradeoffs but I'm curious if anyone has a tried and true way to avoid this traffic jam scenario.
nightpool|3 years ago
delecti|3 years ago
californical|3 years ago
Also, trying to scope units of work so that they don't need to overlap like that can be useful too
wowokay|3 years ago
wpietri|3 years ago
Another way is that you next work on something different enough that it doesn't need to stack. E.g., reviewing pull requests.
gregmac|3 years ago
This often means first doing a code cleanup that doesn't change any functionality yet but makes the later changes simpler, e.g., removing dead code, removing unnecessary abstractions/interfaces/layers, or upgrading external packages.
Sometimes I recognize this early, and will specifically make a branch for this. If there's changes from the review I'll rebase my next branch(es) on top.
Often I don't see this will be needed until later (or rather: I'm focused on the change itself and not the PR experience) and so I'll interactive rebase to put all the refactor commits first, and make a PR for those.
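The "refactor commits first" reordering can be done non-interactively by pointing `GIT_SEQUENCE_EDITOR` at a script that rewrites the rebase todo list. A hedged sketch in a throwaway repo (the two commits touch different files, so the swap is conflict-free; `git init -b` assumes git >= 2.28):

```shell
set -eu
repo=$(mktemp -d) && cd "$repo"
export GIT_AUTHOR_NAME=dev GIT_AUTHOR_EMAIL=dev@example.com
export GIT_COMMITTER_NAME=dev GIT_COMMITTER_EMAIL=dev@example.com
git init -q -b main
git commit -q --allow-empty -m "base"

git checkout -qb work
echo feat > feature.txt && git add feature.txt && git commit -qm "feature"
echo tidy > cleanup.txt && git add cleanup.txt && git commit -qm "refactor: cleanup"

# Stand-in "editor" that reverses the todo list, putting the refactor
# commit first (comment lines in the todo are ignored wherever they end up).
cat > reorder.sh <<'EOF'
#!/bin/sh
awk '{ line[NR] = $0 } END { for (i = NR; i >= 1; i--) print line[i] }' "$1" > "$1.new"
mv "$1.new" "$1"
EOF
chmod +x reorder.sh

GIT_SEQUENCE_EDITOR="$PWD/reorder.sh" git rebase -q -i main
```

After this, the `work` branch still has both commits, but "refactor: cleanup" is now the older one, so it can be split off into its own PR first.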
Both cases mean I already have a branch built off an unmerged PR. Sometimes even before the PR is published, actually. I don't see any particular problem with this. My development style and speed is independent of the speed PRs get merged; the ability to rebase makes this a total non-issue.
jillesvangurp|3 years ago
Like the Linux kernel for example. You don't get to push half finished work with projects like that. Actually, you don't get to push at all. There is only pull. Some branches exist for many months or even years and there could be entire teams collaborating on them. The one absolute certainty you have on such branches is that there is an absolutely insane amount of upstream changes all the time. The only way to stay on top of that is to merge those changes often.
You can think of the Linux kernel git as a decentralized network of stacked branches that have a few central people pulling changes from branches they have reviewed into their own branches with the one that Linus Torvalds maintains as the ultimate branch on which releases get tagged. The vast majority of changes land to his branch via multiple layers of other people, each with their own branches and each adding their own reviews. Effectively all changes Linus Torvalds integrates are stacked. And he doesn't integrate them unless they are stacked properly with nice clean histories.
You could say git was explicitly designed to do stacked branches at scale. So it's kind of ironic that people are re-discovering this as a thing. It was always intended to be used like this, and it has been used like this since the very beginning.
ramraj07|3 years ago
computronus|3 years ago
The work of breaking up a big, inspired chunk of work into small pieces helps you learn more about it, and the perspective can reveal improvements that weren't obvious in the initial effort. You might notice those yourself, or reviewers may. The final outcome will end up overall better for it, so spending that time is worthwhile.
pizza234|3 years ago
sodapopcan|3 years ago
dan-robertson|3 years ago
1. Introduce new test exhibiting bug
2. Introduce bug fix and update the test
If 1 and 2 are reviewed together you have less evidence that the test actually shows the bug being fixed.
dtech|3 years ago
We actually have a rule that bugfix PRs must contain one commit that just introduces a test which fails on CI.
YZF|3 years ago
EDIT: at least in rebase workflows which is what I'm used to. I guess in merge workflows this works.
cornel_io|3 years ago
BerislavLopac|3 years ago
zmj|3 years ago
barrkel|3 years ago
mrkeen|3 years ago
Yes. Whenever you inspect a proposed solution, you should hopefully find the problem that it solves.