The main driver of success in either model is the tooling and practices an organization invests in to make it work. Google is successful with their monorepo because they have invested in building (Blaze), source control (Piper, Code Search), and a commitment to always developing at HEAD. Multirepo is currently easier for most companies because most public tooling (git, package managers) is built around multirepos. One place I see multirepos fall over is dependency management, both internally and in open source: many dependencies quickly become outdated and are not updated on a regular cadence, slowing down both authors and consumers. Better tooling can help here, but an organization needs real discipline to stay on top of things.
When I started at Google the tooling was not very good, and monorepo was pretty painful. They used perforce, and it simply couldn't keep up. Commits could take minutes. Code review was also unbearably slow. Blaze didn't exist yet; just before I started they had tools that generated million+ line makefiles that everyone hated. So yeah, you need good tooling, but even Google didn't have it for the first ~10 years of its existence.
I wonder why nobody has made a good public monorepo offering similar to what Google has internally. It would probably be a hit at many companies, since it fixes so many issues related to working in very large teams.
Monorepo shortcomings 1 and 2 seem like bullshit to me. Perforce, the monorepo VCS of choice at most companies I've worked at, supports access control. Monorepos also do not prevent you from segmenting your code into modules and pushing binary/source packages into source control so that builds can avoid compiling everything (TiVo used to do this, and it worked well once you got the hang of it).
I feel like these debates are often fueled by false arguments. Either way you go, you're going to want to build support tools and processes to tailor your VCS to your local needs.
VCS access control is the wrong tool for solving the "people use code they shouldn't" complaint.
First, VCS ACLs will massively reduce the benefits you're supposed to get from a monorepo. How will you do global refactors in that kind of a situation? How does a maintainer of a library figure out how the clients are actually using it? (The clients must have visibility into the library, but the opposite is unlikely to be true.)
Second, let's say that I maintain a library with a supported public interface that's implemented in terms of an internal interface that nobody's supposed to use. How will VCS ACLs allow me to hide the implementation but not the interface? When clients kick off a build, the compiler needs to be able to read the implementation parts to actually build the library. It can't be that the clients have access to read the headers but then link against a pre-built binary blob. At that point you don't have a monorepo, you've got multirepos stored in a monorepo.
The actual solution is build-system ACLs. Not ACLs for people, but ACLs for projects. Anyone can read the code, but you can say "only source files in directory X can include this header" or "only build files in directory Y can link against this object file".
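Bazel's target visibility is one existing implementation of this idea; a minimal sketch, with hypothetical package and target names:

```python
# library/BUILD
# Public interface target: anyone in the repo may depend on it.
cc_library(
    name = "foo",
    hdrs = ["foo.h"],
    deps = [":foo_impl"],
    visibility = ["//visibility:public"],
)

# Internal implementation: only targets under //library may depend on it.
# The compiler can still read foo_impl.* when building :foo, but a build file
# elsewhere that lists :foo_impl in its deps fails at analysis time.
cc_library(
    name = "foo_impl",
    srcs = ["foo_impl.cc"],
    hdrs = ["foo_impl.h"],
    visibility = ["//library:__subpackages__"],
)
```

Note the enforcement happens per build target, not per person: everyone can read the source, but the dependency edge itself is rejected.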
> Monorepo shortcomings 1 and 2 seem like bullshit to me.
It's a blog post, and the author didn't try to build a total and exhaustive formal system. These shortcomings aren't absolute truths, but in practice they hold.
I've seen this multiple times: a small project evolves over years into a monster. Engineers add new components and reuse whatever other components they need, creating horizontal links. At some point they feel like they've lost their productivity, and they blame the monorepo because it's easy to create horizontal links in a typical monorepo. So they try to build a multirepo flow, and they spend a lot of effort, time, and money trying to make it work. At some point they realize their productivity is even worse than before, because now they need to orchestrate everything, so they merge it all back.
Same applies not only to VCS flows, but to system design as well.
When we discuss monolith/microservices controversy all the monorepo/multirepo arguments may be isomorphically translated to that domain. What is better, monolithic app or a bunch of microservices? A role-based app of course: https://github.com/7mind/slides/blob/master/02-roles/target/...
All of the supposed flaws of a monorepo in this article are actually flaws of git. This is a very common phenomenon. I often joke there are two kinds of developers: those who prefer monorepos and those who have never used perforce.
Can you elaborate on "monorepos do not prevent you from checking packages into source control" and how that helps avoid recompiling everything? Why would you check a package into source control anyway? Surely source control is for source code? And I lean toward monorepos, btw, but there are still lots of obstacles, and monorepo proponents don't tend to acknowledge them or offer clear suggestions for how to solve or work around them.
I think that whether to use mono/multi repo depends on whether you're willing to dump money into updating everything at once, or not. If not, monorepos are really a big hindrance. It's better to split on project boundaries (things that may have different development paces) and use git worktree to keep different versions of libraries checked out for building/bundling.
It works fairly nicely with meson, as you can simply checkout a worktree of a library into a subprojects directory, and let individual projects move at their own paces even if you don't do releases for the libraries/common code.
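A sketch of that worktree setup, assuming a hypothetical library `libfoo` with tags `v1.2` and `v2.0` (URL and paths are made up):

```shell
# one real clone of the shared library
git clone git@example.com:org/libfoo.git ~/src/libfoo

# project A pins v1.2: check the tag out as a worktree under its meson subprojects dir
git -C ~/src/libfoo worktree add ~/src/project-a/subprojects/libfoo v1.2

# project B builds against v2.0 from the same clone; no duplicated object store
git -C ~/src/libfoo worktree add ~/src/project-b/subprojects/libfoo v2.0
```

Each project's `subprojects/` directory then contains a full checkout at its own pinned version, while the library's history lives in one place.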
It's not really clear why having to update every consumer in sync with library changes is beneficial. Some consumers might have been just experiments, or one off projects, that don't have that much ongoing value to constantly port them to new versions of the common code. But you may still want to get back to them in the future, and want to be able to build them at any time.
It's just easier to manage all this with individual repos.
One of the things I've done at a couple companies now is flatten multi into mono - it just simplifies everything, it's all deployed as one unit, so easier to track and do changes across different parts of the code base in unison.
I have typically left mobile iOS/Android in separate repos however - they have a different deployment cadence, so you need to manage breaking changes differently anyway.
There's a lot of people on here defending their current workflow, whatever that is.
I for one find it refreshing that people are willing to think about different workflows, even if they are different.
It feels like what is described is a cross between a good language package manager and git submodules. It's an interesting space to explore, because a lot of nice things come out of submodules, but it's not a proper package manager.
A proper dependency manager that puts code in a workspace and manages it as you work on it, in a non-clunky way, is not something we have right now, and it could be a game changer. Thanks to the authors for sharing.
I'm curious: how would most people here define monorepo vs multirepo?
On the surface, most people seem to think of a monorepo as a source control management system that exposes all source code as if it's a traditional filesystem accessed through a single point of entry. Multirepo, in contrast, seems to be about multiple points of entry.
But that's a superficial and uninteresting distinction. All the hard parts of managing code remain for both and, for a sufficiently large organization, you'll still need multiple dedicated teams to build tooling to make either work at scale. All the pros listed in the article need a team to make them work for either approach, and all the cons are a sign that you need a team to make up for that deficiency in either approach.
Aesthetically a single point of entry appeals to me, in that it allows for a more consistent interface to code. But I'd go for good tooling above that in a heartbeat.
I've shifted to focusing on repo == team. If your organizational structure is to have many little teams that are independent from each other, then you build your source code management to reflect that.
I built my engineering staff to focus on any of the initiatives that my boss hands to me (changes week/week) - so we went monorepo so we could move between those projects/apps/programs quickly.
We knew that we didn't want to pay the maintenance cost just because microservices/multirepo was a buzzword AND we wanted future ventures to get faster (example: we solved identity for authn/authz once and now every app that needs it after can leverage it and we can upgrade identity and all of its consumers in one pull request).
It's easy to use a monorepo in a way that feels like a multirepo, and vice versa. I'm inclined to say that the defining difference is around versioning. To put it another way, can you choose to ignore that your dependencies have upgraded?
In a monorepo your builds are at the same point in time horizontally across all of your dependencies. You build together or not at all (though not necessarily at HEAD). In a multirepo you have the option to build against any (or some subset of) point-in-time snapshots of your dependencies on a dependency-by-dependency basis.
If you have a single monorepo that all of the code is in, but your build system allows you to specify what commit to build your dependency build targets at instead of forcing you to use the same commit as your changes, you actually have a multirepo. If you have a bunch of repos but you build them all together in a CI/CD pipeline that builds each at its most recently released version, then you actually have a monorepo.
Why not both? I've been using https://github.com/mateodelnorte/meta and having a great time so far, it's just that GitHub (and others) don't have a simple way to bundle multi-repo commits in pull requests.
I agree with Christian... Why not both? Lots of teams I interact with have great reasons for a monorepo, which they admit requires some work in tooling and processes, and claim they're successfully releasing software faster with less effort than if their code lived in disparate repos. I believe teams must choose the appropriate patterns that work best for their architectures and situations.
What's the current state of git submodules? It seems like you could get some of the benefits of monorepos, in that you can reference dependency projects directly like a monorepo. You can, in theory, treat many projects like a single code base.
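Submodules do give you that: the superproject records an exact commit of each dependency, and a recursive clone materializes everything. A sketch (URLs and tag names hypothetical):

```shell
# add a dependency as a submodule; the superproject records an exact SHA, not a branch
git submodule add https://github.com/example/libfoo.git third_party/libfoo
git -C third_party/libfoo checkout v1.4          # move the pin to a tag
git add third_party/libfoo .gitmodules
git commit -m "Pin libfoo at v1.4"

# a fresh checkout that materializes every pinned dependency
git clone --recurse-submodules https://github.com/example/app.git
```

The usual pain points are that the pin must be bumped by hand (or by CI) and that contributors forget `--recurse-submodules`, which is part of why package managers usually win for this.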
In one of my first jobs like 15 years ago at a large software company we had just moved to a monorepo.
It was introduced to counterbalance what many saw as a big mess. The result was a lot of process being introduced, which slowed everything down, but that was probably necessary at that stage. To my knowledge the company keeps switching back and forth, but new projects that need to move fast are typically still done independently.
I would expect you need really good training in place to make it work. e.g. Microsoft uses a git monorepo for the Windows codebase; obviously that is not something you could just come in on and do a "git clone" as you might on a small project.
I bet you could address this with a third approach: metarepo. The metarepo is a repo that uses sub modules to combine your multi repo ecosystem into a simulated monorepo. The metarepo is what ultimately gets built and deployed—no versioned dependencies to manage. Local development usually happens at the multirepo level, and the metarepo is managed mostly via CI.
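A sketch of the CI side of such a metarepo, assuming each submodule tracks a branch and the metarepo commit is the deployable unit (names hypothetical):

```shell
# CI job: advance every submodule to the tip of its tracked branch,
# then record the new set of pins as a single metarepo commit
git clone --recurse-submodules git@example.com:org/metarepo.git
cd metarepo
git submodule update --remote        # fetch and check out each tracked branch's tip
git commit -am "Sync submodule pins"
git push
```

Every metarepo commit is then a reproducible snapshot of the whole multirepo ecosystem, which is what gets built and deployed.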
So, in a monorepo world, isn't it often that you have to deploy components together, rather than "it's easy to"? How are services deployed only when there has been a change affecting said service? Presumably monorepo orgs aren't redeploying their entire infrastructure each time there's a commit? Are we talking about writing scripts that trigger further pipelines if they detect a change in a path or its dependencies? How about versioning: does monorepo work with semver? Does it break git tags, given you have to tag everything?
So many questions, but they're all about identifying change and only deploying change...
Each service has its own code directory, and there's one big "shared code" directory. When you build one service, you copy the shared code directory and the service-specific directory, move to the service-specific folder, run your build process. The artifact that results is that one service. Tagging becomes "<service>-<semver>" instead of just "<semver>". You may start out with deploying all the services every time (actually hugely simplifies building, testing, and deploying), but then later you break out the infra into separate services the same way as the builds.
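A minimal sketch of that per-service build, assuming a hypothetical layout with a `shared/` directory, a `services/billing` directory, and a `build.sh` per service:

```shell
svc=billing
build_dir=$(mktemp -d)

# the build input is the shared code plus this one service's code, nothing else
cp -r shared "$build_dir/shared"
cp -r "services/$svc" "$build_dir/$svc"
(cd "$build_dir/$svc" && ./build.sh)     # whatever this service's build process is

# per-service tags instead of repo-wide ones
git tag "$svc-1.4.2"
```

The artifact is scoped to one service, and `git tag -l 'billing-*'` lists that service's release history independently of the others.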
> Are we talking about writing scripts that trigger further pipelines if they detect a change in a path or its dependencies
Unless one enforces perfect one-to-one match between repo boundaries and deployments, this is also an issue with multirepos.
In practice, it's straightforward to write a short script that deploys a portion of a repo and have it trigger if its source subtree changes and then run it in your CI/CD environment.
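A sketch of such a trigger script, assuming a hypothetical per-service tag `deployed-billing` that marks the last deployed commit and a `deploy.sh` entry point:

```shell
svc=services/billing

# compare the service's subtree (and the shared code it depends on)
# against the commit we last deployed; diff --quiet exits non-zero on changes
last=$(git rev-parse deployed-billing)
if ! git diff --quiet "$last" HEAD -- "$svc" shared/; then
  ./deploy.sh "$svc"                     # hypothetical deploy entry point
  git tag -f deployed-billing HEAD       # advance the marker
fi
```

Run once per service in CI, this redeploys only the services whose inputs actually changed.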
I worked at a big bank in the UK using a monorepo "cuz Google uses it." Mistake number 1: you're not Google.
The clones were gigantic, Jenkins would timeout cloning the whole project when all it needed was a bunch of files.
Merge conflicts all over the place, but the best part, we had scripts on our pipeline literally removing folders after cloning the repo to avoid automatic inclusions of libs etc.
In my opinion, separation of boundaries is one of those things that shouldn't be messed with.
It depends on which VCS you use. Git, for example, doesn't have any native support for hiding or protecting code in particular folders within the repository.
We do multi-repo. It makes things a little slower, because we have to get commits into our common-libs repos (there are two) before we can update app/product repos. Using the environment's package manager (composer, npm, yarn) rather than git submodules helps a lot.
Amazon does multi-repo. I don't see what the problem or debate over this is. We seem to be handling it pretty fine despite a massive-scale SOA.
The multi-repo pattern certainly meshes well with Amazon's team structure, and of course integrates well with the build system and deployment system, given that they were created around it. But "handling it pretty fine" seems like a stretch.
When last I was there things were finally beginning to burst at the seams. Platform architecture migrations were failing or being abandoned over too many untracked dependencies on specific versions of platform-provided libraries. (RHEL5, anyone?) Third-party had become a jungle of unmaintained libraries with dozens of versions that nobody ever upgraded, that may or may not have security vulnerabilities or known bugs, and many teams hadn't released new versions of their clients/libraries into Live for years in fear of breakage. The Builder Tools team was talking about giving up and abandoning both Brazil and Live as unsalvageable. Framework teams (Coral) were throwing their hands up in the air about how Coral-dependent services would not be able to upgrade to Java 11 without fixing a bunch of breaking changes that they would never agree to fix. The solutions being proposed to these problems by the Builder Tools team looked a lot like moving toward a monorepo, at least conceptually.
When I was there, they were migrating away from perforce because they could no longer scale perforce fast enough to meet demand. I've not seen this talked about much outside of Amazon.
It was also a huge day-to-day quality of life improvement for the users (the developers.) There are UX problems with git, but they pale in comparison to the UX problems with perforce which is truly unpleasant software.
CI and CD are more workflows than tools. It doesn't really matter what your repo setup is, you just adapt your workflows to it. On one project I work on we use a monorepo for a handful of microservices. We use standard GitHub flow, no special repo consideration for the CI.
For CD, we have scripts that ask what service you want to build, and they specifically package that service using the set of files & processes dedicated to that service. The build generates a versioned artifact. After that, repo doesn't matter at all, we're just moving service artifacts around.
The cons of multi-repo are all anti-patterns for microservices anyway. If you're doing microservices, you shouldn't have build dependencies on other projects. They should only call each other at the network level.
Calling each other at the network level is still a dependency.
(And even a build dependency if you use something like protobuf or other protocol description files.)
That’s the part about monorepos I can’t quite wrap my mind around - yes deploying a single large change to many different systems simultaneously is cool in theory, but how does it actually pan out? Deployment is never instant, so any system-to-system breaking changes would cause a short downtime while everything deploys. In the world I operate in, that’s absolutely not acceptable.
Not that you can’t still make your changes backwards compatible with themselves. But if I’m going to have to deploy everything in two steps anyway, what’s the point?
I am a big fan of monorepos. I've worked on a few open source projects that have used multi-repos and at some places that used a hybrid approach. I agree with some of the ideas this article has put into writing, but I wanted to provide some pointers from my experience.
Some background: at my current place of employment I have 28 services (should be 30 in the next few days), so I think my current use case is very representative of a small-to-medium monorepo. At my last job, right before this one, we had sort of a monorepo strung together with git submodules, although each project was developed independently with its own git repo + CI.
> Isolation: monorepo does not prevent engineers from using the code they should not use.
Your version control software does not prevent or allow your developers from using code they should not use. It is trivial to check in code that does something like this:
import "~/company/other-repo/source-file.lang" as not_mine;
Or even worse in something like golang:
import "github.com/company/internal-tool/..."
Because of this, it is my opinion that it is impossible to rely solely on your source control to hide internal packages/sources/deps from external consumers. That responsibility, preventing people from touching deps they shouldn't, has to be pushed up the stack to developers or tooling.
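One cheap way to push that responsibility into tooling is a pre-merge check; a minimal sketch, with hypothetical path patterns:

```shell
# fail the build if anything outside internal-tool's own tree imports it;
# the ':!' pathspec excludes internal-tool's directory from the search
if git grep -n 'company/internal-tool' -- 'projects/*' ':!projects/internal-tool'; then
  echo "error: internal-tool is imported outside its own tree" >&2
  exit 1
fi
```

It's a blunt instrument compared to real build-system visibility rules, but it runs anywhere git does and catches the import-by-path cases above.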
> So, big projects in a monorepo have a tendency to degrade and become unmaintainable over time. It’s possible to enforce a strict code review and artifact layouting preventing such degradation but it’s not easy and it’s time consuming,
I think my above example demonstrates this is not unique to monorepos. The level of abstraction that VCSs operate at is not ideal for code-level dependency concepts.
> Build time
Most build systems support caching. Some even do it transparently. Docker's implementation of build caching has, in my experience, been lovely to work with.
---- Multi repo section ----
> In case your release flow involves several components - it’s always a real pain.
This is doubly or triply true for monorepos, because the barrier to cross-service refactors is so low. Due to a lack of good rollout tooling, most people with monorepos release everything together; I know my CI essentially does `kubectl apply -f`. Unfortunately, due to the nature of distributed compute, you have no guarantee that new versions of your application won't be seen by old versions (especially with zero-downtime deployments like blue-green/red-black/canary). Because of this you constantly need to be vigilant about backwards compatibility: version N of your internal protocol must be N-1 compatible to support zero-downtime deployments. This is something new members of a monorepo team have huge difficulty with.
> It allows people to quickly build independent components,
To start building a new component, all one must do is `mkdir projects/<product area>/<project name>`. This is far lower overhead than most multi-repo situations. You can even `rm -r projects/<product area>/<thing you are replacing>` to completely kill off legacy components so they don't distract you while you work. The rollout of the new tool went poorly? Just revert to the commit beforehand and redeploy, and your old project's directories, configs, etc. are all in the repo. Git history preserves that state, so it inherently can never be lost, if you want a source tree that is green and deployable at any commit hash.
--- Their solution ---
I accomplish the same tasks with a directory structure. As mentioned before, if you just put your code into a `projects/<product area>/<project>` structure, you get the same effect they are going for by minimizing the directory layout in your IDE's file view. The performance hit from having the entire code base checked out is very much a non-issue for >99% of us. Very few of us have code bases larger than the Linux mainline, and git works fine for their use cases.
Also, any monorepo build tool like Bazel, Buck, Pants, and Please.build will perform adequately for the most common repo sizes and will provide you hermetic, cached, and correct builds. These tools also already exist and have a community around them.
Kidding aside, my point is Google recognizes obvious boundaries between e.g. their web stuff and android, and organizes their code accordingly.
Sure you could just use a manyrepo style of dependency tracking in a monorepo but I think that's not exactly what the author is exploring.
Even though it's one project.
Even though they refuse to allow a release of a single component - it must all be released together without forwards/backwards compatibility.
I think most of the time, the mono/multi debate is spoiled by people who feel they can have their cake and eat it too.
I don't see it used very often though. Why not?
Can you have two metarepos, each with its own set of checked-out branches of the same original submodules?
https://docs.gocd.org/current/advanced_usage/fan_in.html