top | item 17605371

Why Google Stores Billions of Lines of Code in a Single Repository (2016)

478 points | bwag | 7 years ago | cacm.acm.org

281 comments

hobls|7 years ago

I feel terrible for anyone who sees this and thinks, “ah! I should move to a monorepo!” I’ve seen it several times, and the thing they all seem to overlook is that Google has THOUSANDS of hours of effort put into the tooling for their monorepo. Slapping lots of projects into a single git repo without investing in tooling will not be a pleasant experience.

wirrbel|7 years ago

Same line of thinking, just different conclusions.

I feel terrible for anyone trying to run a company with open-source-style independent repos. On a popular GitHub project, you have MANY potential contributors who will tell you if a PR or a release candidate breaks API compatibility, etc. There are thousands of hours in open source dedicated to fixing integration issues due to the (unavoidable) poly-repo situation.

Monorepos in companies are relatively simple. You need to dedicate some effort to your CI and CD infrastructure, but you'll win by orders of magnitude by avoiding integration issues. Enough tooling is out there already to make it easy on you.

Monorepos' biggest problem in an org is funding: integration work is often deprioritized by management, and "we spend 10k per year on monorepo engineering" is for some reason a tough sell to orgs, who seem to prefer to "spend 5k for each of the 5 teams so that they maintain their own CD and struggle with integration, which incurs another 20k that just isn't explicitly labeled as such".

Developer team dynamics also play a role. I have observed the pattern now multiple times (N=3):

* Developers have a monolithic repo that has accumulated a few odd corners over time.

* The feeling builds up that this monolithic repo needs to be modularized.

* It is split up into libraries (or microservices); this is kind of painful, but feels liberating at first (now John finally doesn't break my builds anymore).

* Folks realize: John doesn't break my builds anymore, but now I need to wait for integration on the test system to learn whether he broke my code, and sometimes I only learn it in production.

* People start posting blog posts on monorepos.

That pattern takes 2-3 years to play out, but I have seen it at every job I've worked.

shados|7 years ago

Both monorepos and "micro repos" end up falling apart at scale without some devops work. Either will work if you only have a few dozen projects. Neither will work once you hit tens of millions of lines of code.

But people seem to forget that it wasn't that long ago that git didn't exist, creating multiple repos was a pain in the butt, and managing them locally was hell. Monorepos were the norm.

Then, as the state of version control ramped up and making repos became easy, and as having so much code in one repo ran into performance issues (overnight CVS/SourceSafe/SVN pull on your first day at work, anyone? Branches that took hours to create?), people started making one repo per project. The microservice fad made that a no-brainer.

Now, for companies like Facebook and Google, or really any company that wrote code before the modern days and has a non-trivial amount of it, switching was not exactly a simple matter. So they just poured their energy into making the monorepo work. They're not the only ones to do it either (though not everyone has to do it at Google, Facebook, or Microsoft scale, obviously, so it's a bit easier for most). And so it works. And then people forget how to make distributed repos work and claim things like "omg, I have to make 1 PR per repo when making breaking changes!", as if it were a big deal or wasn't a solved problem.

adamrt|7 years ago

We moved to a monorepo about 2 years ago and it has been nothing but success for us.

We have quite a few projects but only 4 major applications. It may be that a few of our projects intertwine a bit, so making spanning changes across separate repositories was a pain: separate PRs, etc. Now changes are more atomic. Our entire infrastructure can be brought up in development with a single docker-compose file, and all development apps communicate with each other. I don't think we've had any issues that I can recall.
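The single-file setup described above can be sketched with a compose file at the repo root; the service names and directory layout here are made up for illustration:

```yaml
# docker-compose.yml at the monorepo root (hypothetical layout):
# each application builds from its own subdirectory, and all services
# can reach each other on the default compose network.
services:
  api:
    build: ./api
    ports:
      - "8000:8000"
  web:
    build: ./web
    depends_on:
      - api
```

With everything in one repo, `docker compose up` from the root brings up every app against the same checked-out revision.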

We are a reasonably small team though, so maybe that is part of it.

mr_tristan|7 years ago

I sense that Google invests more in its infrastructure than most companies make in revenue.

I've worked with monorepos, and I'd be loath to recommend them as well; the combination of culture shift and tooling it takes to keep a monorepo system running makes most CD processes you see today look like child's play.

There is a lot of very good free software that supports most of the open source approach to CD these days; but very, very little freely available monorepo tooling. Just check out https://github.com/korfuri/awesome-monorepo - it's a quick read. I haven't found many other notably superior compilations. Compared with available OSS workflows and tooling, it's rather sparse, filled with bespoke approaches everywhere.

013a|7 years ago

I call this "Google Imposter Syndrome". Because Google (insert Facebook, Apple, Amazon, etc) has success with Monorepos (insert gRPC, Go, Kubernetes, React/Native, etc), it must be a great idea, we should do it. You see this everywhere. Also known as an Appeal to Authority.

My personal opinion: very few companies will hit a point where sheer volume of code or code changes makes a monorepo unwieldy. Code volume is a Google-problem. But every company will have problems with Github/Gitlab/whatever tooling with multiple repos; coordinating merges/deploys across multiple projects, managing issues, context switches between them, etc. And every company will also have problems with CI/CD in a monorepo.

Point being... there are problems with both, and there are benefits to both. I don't think one is right or wrong. I personally feel that solving the problems inherent to monorepos, at average scale, is easier than solving the problems inherent to distributed repos. The monorepo problems are generally internal and technical, whereas the distributed-repo problems are generally people-related, with tooling outside of your control.

pcwalton|7 years ago

There's also the fact that monorepos have issues when you don't have one organization responsible for all the code. The Linux kernel and NetHack don't live in the same repository for good reason.

mnm1|7 years ago

Slapping a whole bunch of projects into multiple repos with dependencies isn't a pleasant experience either. What is the solution then? I certainly don't want to host my own npm/composer/maven/clojars repos or even use those dependency managers to manage my own code which constantly changes and relies on multiple libraries both on the backend and frontend. I've tried this and, at least with a small team of two, it's not a pleasant experience at all. So how can I solve this problem? Cause the monorepo is very enticing after dealing with multiple repos and multiple dependencies pulled through dependency managers that clearly do not do well with dependencies that are constantly in flux.

EnderMB|7 years ago

I've seen this a few times in the .NET world, mainly as a carry-over from Subversion when we had moved to Mercurial and git.

Some mad genius in a company will write a fuck-ton of helper classes and utilities that take the heavy lifting out of everything remotely hard, to the point where you almost never need to touch a third-party API for a CMS, email send service, or cloud-hosting provider. Instead of supplying these as private NuGet packages to be installed into an application, they sit in solutions in their entirety, in case they are needed. That application then goes to a new developer team, and they have zero idea why there are millions of lines of code and dozens of projects for a basic website that doesn't really seem to do anything.

It's a nice idea, but it has resulted in some very tightly coupled applications. I remember one time when a new developer changed some code in one of the utilities that handled multi-language support, and for some reason our logs reported that the emails were broken.

rco8786|7 years ago

Are you suggesting that there’s a solution to managing large amounts of code that doesn’t involve large amounts of tooling?

hobs|7 years ago

Heh, thousands? It's probably at least an OOM greater, if not two.

nine_k|7 years ago

Most folks who consider a monorepo don't have billions of lines of code, and often not even millions.

The Linux kernel is a monorepo.

acomjean|7 years ago

But what are the alternatives to the monorepo in git?

All the ways of splitting code up and deploying multiple git repos for one project seem terrible.

keerthiko|7 years ago

What I can advise against is partitioning repos prematurely. I have been on multiple teams that thought "oh, this will be a common library for all our projects" or "this is a sample project" or "this is the Android version and this is the iOS version" and split projects up into different repos, only to wind up with crazy dependencies between repos that have fallen out of sync or require another repo to be on a specific branch/hash to work correctly, causing all kinds of chaos. Split your repos by dependencies, and only once your system architecture is more or less fleshed out. Just use branches on the same repo until then.

maxpert|7 years ago

Spot on! I've seen org-wide monorepos at Microsoft, and they had custom tooling and build systems built on top of SourceDepot.

georgewfraser|7 years ago

I kinda disagree: we're a dev team of 30, 3.5 years in, with 150k lines of code, and we've always had a monorepo. We had to switch from Maven to Bazel after about 2 years because test times got out of control; Bazel has been about 50% more annoying than Maven, but the incremental builds work perfectly.

ma2rten|7 years ago

Google used Perforce for a very long time before they built their own version control system.

w_t_payne|7 years ago

Tooling is required for coordinating configuration management on multiple repositories too.

Also, why isn't such tooling available as open source? I'm trying to do my bit, but we could do with more effort being put into this, somehow.

foota|7 years ago

Really probably closer to millions of hours.

baybal2|7 years ago

Dumping all code in a single repo, even for a 30-man development shop, was really tough. Doing so for a company of a few thousand must be truly crazy.

I advise Google to replace the person in their internal IT who came up with that idea.

flukus|7 years ago

> and the thing they all seem to overlook is that Google has THOUSANDS of hours of effort put into the tooling for their monorepo

That's a huge understatement. They haven't just slapped a few scripts on top of git/svn, they've created their own proprietary scm to manage all of this. They've thrown more at this beast than most companies will throw at their actual product.

I'm also not convinced they haven't reinvented individual repositories inside this monorepo. It sounds like you can create "branches" of just your code and share them with other people without committing to the trunk; this is essentially an individual repository that will be auto-deployed when you merge to master.

jgibson|7 years ago

Is it just me, or are a lot of people here conflating source control management and dependency management? The two don't have to be combined. For example, if you have Python Project X that depends on Python Project Y, you can either have them A) in different scm repos, with a requirements.txt link to a server that hosts the wheel artifact, B) have them in the same repo and refer to each other from source, or C) have them in the same repository, but still have Project X list its dependency of project Y in a requirements.txt file at a particular version. With the last option, you get the benefit of mono-repo tooling (easier search, versioning, etc) but you can control your own dependencies if you want.
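Option C above can be sketched with a plain requirements file; the project names and layout here are hypothetical:

```text
# monorepo/project_x/requirements.txt (hypothetical layout)
# Project Y lives in the same repo, but Project X still pins a version
# published to an artifact server:
project-y==1.4.2
# ...or, to develop against the in-repo source directly (option B style):
# -e ../project_y
```

The pinned line gives you controlled upgrades; the editable line gives you live-at-head development. A monorepo lets a team choose either per project.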

edit: I do have one question though: does Google's internal tool handle permissions on a granular basis?

edejong|7 years ago

The key here is reverse dependency management. "If I change X, what would this change affect?"

This can be achieved better with a single repo than with multi-repo, due to the completeness of the (dependency) graph.

Too|7 years ago

This is my biggest gripe in discussions like this as well: dependency management and source control are two completely different things. It should be convenient to use one to find the other, but they should not necessarily be coupled 1-1 with each other.

1. A single repo should be able to produce multiple artifacts.

2. It should be possible to use multiple repos to produce one artifact.

3. It should be possible to have revisions in your source control that don't build.

4. It should be possible to produce artifacts that depend on things not even stored in a repo -- think build environment or cryptographic keys, etc. An increase in version number could simply be an exchange of the keys.

justicezyx|7 years ago

Single repo is one design that coherently addresses source control management and dependency management.

The key is to let the repo be a single comprehensive source of data for building arbitrary artifacts.

paulddraper|7 years ago

> The two don't have to be combined.

They do have to be combined in some way, at least to be reproducible. Your requirements.txt example is one way of combining version control + dependencies: give code an explicit version and depend on it elsewhere by that version.

Google has chosen to combine them in a different way, where every commit of a library implicitly produces a new version, and all downstream projects use that.

> googles internal tool handle permissions on a granular basis?

Not sure what you mean... its build tool handles package visibility (https://docs.bazel.build/versions/master/be/common-definitio...). Its version control tool handles edit permissions (https://github.com/bkeepers/OWNERS).
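The package visibility mentioned above looks roughly like this in a Bazel BUILD file; the package and target names are made up for illustration:

```python
# BUILD file sketch (Starlark): only targets under //server/... may
# depend on :auth; everyone else gets a visibility error at build time.
java_library(
    name = "auth",
    srcs = glob(["*.java"]),
    visibility = ["//server:__subpackages__"],
)
```

That gives per-target read/depend control inside the monorepo, independent of who may edit the files (which OWNERS handles).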

fps_doug|7 years ago

It is very tempting to believe that a monorepo will solve all your dependency issues. If you have a project that's, say, pure Python, consisting of a client app, a server app, and a dozen libs, that might actually be true, since you force everyone to always have the latest version of everything and always be running the latest version. Given a somewhat sane code base and a smart IDE, refactoring is really easy and updates everything atomically.

In reality you often have different components, some written in different languages; at a certain size, not everyone has the whole build environment set up, and some people may be working with older binaries, so it's just as easy to have version mismatches, structural incompatibilities, etc. So you need strong tooling and an integration process to go along with your monorepo. The repo alone doesn't solve all your problems.

bananarepdev|7 years ago

Maybe this is a reflection of modern tools using the version control system to store built artifacts, like npm and "go get" do. Anyway, depending on the programming language, you can have a monorepo and still bind your modules with artifact dependencies, not necessarily depending on the code itself.

senozhatsky|7 years ago

Well, it's not so uncommon. For instance, the OpenBSD and NetBSD repos are sort of monolithic. And, believe it or not, there are some advantages. For instance, let's take a look at the OpenBSD 5.5 [0] release notes:

> OpenBSD is year 2038 ready and will run well

> beyond Tue Jan 19 03:14:07 2038 UTC

OpenBSD 5.5 was released on May 1, 2014, while Linux is still "not quite there yet" y2038-wise. y2038 is a very complex issue, even though it may look simple -- time_t and clock_t should just become 64-bit. This requires changes both on the kernel side -- new syscall interfaces [stat()], new structure layouts [struct stat], new sizeof()s, etc. -- and on the user space side. This, basically, means ABI breakage: newer kernels will not be able to run older user space binaries. So how did OpenBSD handle that? The reason the y2038 problem looked so simple to OpenBSD was the "monolithic repository". It's a self-contained system, with the kernel and user space built together out of a single repository. OpenBSD folks changed both user space and kernel in "one shot".
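The boundary in those release notes is just the largest value a signed 32-bit time_t can hold; with GNU date (an assumption about your platform -- BSD date uses different flags) you can see it directly:

```shell
# 2^31 - 1 seconds after the Unix epoch: the last second a signed
# 32-bit time_t can represent. One second later it wraps negative,
# back to December 1901.
date -u -d @2147483647 +'%Y-%m-%d %H:%M:%S'
# 2038-01-19 03:14:07
```

Widening time_t to 64 bits changes the size of every structure that embeds it, which is exactly the kernel/userspace ABI break the comment describes.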

IOW, a monolithic repository makes some things easier:

a) make a dramatic change to A

b) rebuild the world

c) see what's broken, patch it

d) while there are regressions or build breakages, goto (b)

e) commit everything

[0] http://www.openbsd.org/55.html?hn

[UPDATE: fixed spelling errors... umm, some of them]

-ss

glandium|7 years ago

The reason why y2038 problem looked so simple to OpenBSD has little to do with "monolithic repository" and everything to do with "happy to break kernel ABI compatibility". You're saying as much yourself.

Monolithic repository might have been a tool that helped enforce it, but that's not what made it happen. It's the decision that ABI could be broken that did.

And that's also why it hasn't happened in Linux yet. Even if there was a monorepo containing all the open source and free software in the world (or at least, say, that you can find in common distros), the fact that there's a contract to never break the ABI makes it simply hard to do.

perlgeek|7 years ago

... and all the third-party software that was compiled for older versions of OpenBSD is now also broken by default.

The problem is that this approach only works if it is really a self-contained system. But OpenBSD isn't: it's a basis for running software, potentially third-party software. It can't be a closed universe and still be useful at the same time.

ChrisCinelli|7 years ago

Managing dependencies and versions across repos is a pain. Refactoring is quite hard when your code is spread across repos, considering the tree of dependencies.

Unfortunately, Git checks out all the code, including history, at once, and that does not scale to big codebases.

The approach that Facebook chose with Mercurial seems a good compromise ( https://code.fb.com/core-data/scaling-mercurial-at-facebook/ )

jsolson|7 years ago

As mentioned in the post (which is from 2016), Google has also been experimenting with Mercurial as a frontend (in collaboration with "contributors from other companies that value the monolithic source model"). As an avid user of that experiment at Google, it seems to be going very well.

bluedino|7 years ago

>> Unfortunately Git checkout all the code, including history, at once and it does not scale to big codebases

A shallow clone can be helpful in cases like this
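Concretely, a shallow clone caps how much history is fetched. The throwaway demo below (repo names made up) builds a local two-commit repo and shows that only the tip comes down:

```shell
# Build a tiny repo with two empty commits...
git init -q source
git -C source -c user.email=a@b.c -c user.name=demo commit -q --allow-empty -m first
git -C source -c user.email=a@b.c -c user.name=demo commit -q --allow-empty -m second

# ...then shallow-clone it: --depth 1 fetches only the most recent commit.
# (The file:// URL form is needed for --depth to apply to a local clone.)
git clone -q --depth 1 "file://$PWD/source" shallow
git -C shallow rev-list --count HEAD   # prints 1, not 2
```

Newer git also offers partial clones (`git clone --filter=blob:none`), which keep full history but download file contents lazily -- often a better fit for big monorepos than truncating history.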

antt|7 years ago

Git works very well when the code is distributed. Which funnily enough is in the name. That we are using git as a centralized repository is a case of "Why do I need a screwdriver when I have a hammer?".

whack|7 years ago

Maybe I'm not cool enough to understand this, but I don't see the draw for monorepos. Imagine if you're a tool owner, and you want to make a change that presents significant improvements for 99.9% of people, but causes significant problems for 0.1% of your users. In a versioned world, you can release your change as a new version, and allow your users to self-select if/when/how they want to migrate to the new version. But in a monorepo, you have to either trample over the 0.1%, or let the 0.1% hold everyone else hostage.

Conversely, imagine if you're using some tools developed by a far off team within the company. Every time the tooling team decides to make a change, it will immediately and irrevocably propagate into your stack, whether you like it or not.

If you were at a startup and had a production critical project, would you hardcode specific versions for all your dependencies, and carefully test everything before moving to newer versions? Or would you just set everything to LATEST and hope that none of your dependencies decide to break you the next day? Working with a monorepo is essentially like the latter.

anyfoo|7 years ago

I've worked both at Google (only as an intern, though) and at other very very big companies with gargantuan code bases. At that scale, with software that is constantly in flux, pretty much the last thing you want is having to keep compatibility between several versions of a component. It's bad enough if you have to do it for external reasons, but if the only reason is so that "others in the company have a choice" then... no, just no.

You might think this ought to be trivial by having clear API contracts, but that's a) not how things work in practice if all code is effectively owned by the same, overarching entity and, more importantly, b) now you have an enormous effort to transition between incompatible API revisions instead of just being able to do lockstep changes, for no real gain.

Even if you manage to pull that off (again, for what benefit?), it will bite you that 1.324.2564 behaves subtly differently from 1.324.5234, even though the intent was just to add a new option and they otherwise ought to have no extensional changes in behavior.

crazygringo|7 years ago

> But in a monorepo, you have to either trample over the 0.1%, or let the 0.1% hold everyone else hostage.

Nope. In a monorepo (like at Google), you're responsible for not breaking anyone else's code, as evidenced by their tests still passing.

So you never trample over the 0.1%. Instead you fix your code, or you fix their code for them -- which was probably due to your own bugs or undefined behavior in the first place. Or else you don't push.

And if you break their code because they didn't have tests? That's their problem, and better for them to learn their lesson sooner rather than later, because they're breaking engineering standards they've been taught since the day they joined. A monorepo depends, fundamentally, on all code having complete test coverage.

perfunctory|7 years ago

> Working with a monorepo is essentially like the latter.

Not really. In the dependencies analogy, the author of the dependency has no way to test the dependents. With a monorepo this is exactly what you do: "the tooling team" will "carefully test everything" before it "propagates into your stack" (and it doesn't have to be irrevocable).

perfunctory|7 years ago

>In a versioned world, you can release your change as a new version, and allow your users to self-select

Repeat this process multiple times and you end up with configuration/settings hell. Been there, done that. It's not black and white, but "trampling over the 0.1%" can be a sensible business/architectural decision. For example, how do you imagine Google Maps users selecting when/how to migrate?

andrewfong|7 years ago

Not saying this is how Google does it, but a monorepo doesn't prevent you from having multiple versions of the same dependency. Ideally, with a monorepo, you could update 99% of your sub-packages to the latest version while leaving that one alone.

IMAYousaf|7 years ago

Hello.

What sort of tooling differences would one expect for a monorepo vs. multiple repos?

Is that a factor of something intrinsic about having one big repo, or is that a factor of the scale of the type of organization that Google is?

Thanks.

makecheck|7 years ago

This is clearly detrimental to external projects such as Go packaging, since their own developers will never be looking at dependency problems in the same way as outside groups.

Monorepos also bug me because there will always be some external package you need, and invariably it's almost impossible to integrate, due to years of colleagues making internal-only things that assume everything imaginable about the structure and behavior of the monorepo. There will be problems not handled, etc., and it leads to a lot of NIH development because in the end it's almost easier.

Also, it just feels risky from an engineering perspective: if your repository or tools have any upper limits, it seems like you will inevitably find them with a humongous repo. And that will be Break The Company Day, because your entire process is essentially set up for the monorepo and no one will have any idea how to work without it.

topspin|7 years ago

> This is clearly detrimental to external projects such as Go packaging

Indeed. Google's monorepo means the largest cohort of Go programmers in the world are mostly indifferent to composing packages in the usual (cpan/maven/composer/npm/nuget/cargo/swift/pip/rubygems/bower/etc) manner. Non-Google Go programmers have been left to schlep around with marginal solutions for years, although in the last few months we begin to see progress here[1]. This was the #1 discouragement I experienced when experimenting with Go.

Google's monorepo may be wonderful from Google's perspective but I don't think it's been a win for Go.

* yes I know some of these are also build systems and provide many other capabilities, some of which are arguably detrimental. Versioned, packaged, signed dependencies and thus repeatable build artifacts is the point.

[1] https://github.com/golang/go/issues/24301

robaato|7 years ago

What about Android and its 800-1,000 git repos?!

I have seen the pain of trying to manage that across larger teams (e.g. thousands of devs) -- and no, the "repo" tool is not sufficient.

tzhenghao|7 years ago

Having worked at different companies adopting both the monorepo and the multiple-repos approach, I find the monorepo a better normalizer at scale in consolidating all the "software" that runs the company.

Just like many commenters here have mentioned, the monorepo approach is a forcing function that keeps compatibility issues at bay.

What you don't want is to end up in a situation where teams reinvent their own wheels instead of building on top of existing code, and at scale, I think the multiple-repo approach tends to breed that kind of codebase smell. [1] I'm sure 8000 repos is living hell for most organizations.

[1] - https://www.youtube.com/watch?v=kb-m2fasdDY

shiift|7 years ago

I really liked that talk! Lots of relevant information, and I can definitely relate, working at Amazon. I wouldn't say we are hurt by all of the same problems (we have solutions that work very well for some of them), but we are definitely aware of them.

mlthoughts2018|7 years ago

One of my former managers had worked a long time at Google and was present for the advent of Google’s in-house tooling developed around their monorepo.

His account was that it was basically accidental, at first resulting from short term fire drills, and then creating a snowball effect where the momentum of keeping things in the Perforce monorepo and building tooling around it just happened to be the local optimum, and nobody was interested in slowing down or assessing a better way.

He personally thought working with the monorepo was horrible, and in the company where I worked with him, we had dozens of isolated project repos in Git, and used packaging to deploy dependencies. His view, at least, was that the development experience and reliability of this approach was vastly better than Google’s approach, which practically required hiring amazing candidates just to have a hope of a smooth development experience for everyone else.

I laugh cynically to myself about this any time I ever hear anyone comment as if Google’s monorepo or tooling are models of success. It was an accidental, path-dependent kludge on top of Perforce, and there is really no reason to believe it’s a good idea, certainly not the mere fact that Google uses this approach.

gefh|7 years ago

Do you wonder whether he is a reliable narrator?

haglin|7 years ago

Google's handling of their source code makes me wanna work there.

I don't like distributed version control systems with hundreds of repositories spread out. It makes management more complicated. I understand this is a minority view, but that is my experience. It was easier to work in a single Perforce repository than hundreds of Git or Mercurial repos.

djur|7 years ago

Distributed vs. centralized VCS has very little directly to do with many vs. monolithic repos. After all, git was originally developed for a project with a large monolithic repo. Distributed VCS and many small repos got popular around the same time, but that's partly coincidental (microservice architectures getting popular, npm community preferring extremely small libraries) and partly because of GitHub making it very cheap in money/time to have many git repos.

a-dub|7 years ago

It should be noted that the monolithic model is somewhat encouraged by the client mapping system in Perforce, which was Google's first version control system, so it is unclear to me whether this was deliberate or just a side effect of the best VCS of the time.

I also still have doubts around the value of a monorepo, in the article they claim it's valuable because you get:

Unified versioning, one source of truth;

Extensive code sharing and reuse;

Simplified dependency management;

Atomic changes;

Large-scale refactoring;

Collaboration across teams;

Flexible team boundaries and code ownership; and

Code visibility and clear tree structure providing implicit team namespacing.

With the exception of the niceness of atomic changes for large scale refactoring, I don't really see how the rest are better supported by throwing everything into one, rather than having a bunch of little repos and a little custom tooling to keep them in sync.

malkia|7 years ago

A monotonically increasing CL number is also useful. You can mark quite a lot of things with it -- not only binary releases, but other deliverables too (configuration files, etc.). In the end your binary "version" comprises a main base CL plus cherry-picked individual CLs, rather than a branch with those fixes. I guess one could encode this with git/hg too, by using SHA hashes, but that becomes much bigger in terms of information and harder for the human handling it.

Not a very strong point, I guess, but using CL numbers (I'm working mostly with Perforce these days) makes things easier. And having one CL number monotonically increasing over all your source code is even better -- you can reference things more easily: just type cl/123456 and your browser can turn it into a link. Among many other not-so-obvious benefits...

ridiculous_fish|7 years ago

> Google's monolithic software repository, which is used by 95% of its software developers worldwide, meets the definition of an ultra-large-scale [4] system, providing evidence the single-source repository model can be scaled successfully

This 95% number is the most surprising part of the article. That implies that the sum of engineers working on Android + Chrome + ChromeOS + all the Google X stuff + long tail of smaller non-google3 projects (Chromecast, etc) constitute only 5% of their engineers. Is e.g. Android really that small?

dlubarov|7 years ago

They must have meant that 95% of Google engineers use the monorepo in some capacity, even if the majority of their work is done in a different repo.

hyperpape|7 years ago

I don’t know how to parse the number, but 5% of a billion still leaves 50 million lines of code, or three Linux kernels worth.

dlp211|7 years ago

I think your interpretation is incorrect. A better way to think of this is that those 5% of people work exclusively on those projects. I'd be very surprised to learn that only 5% of Google engineers work on those projects.

Too|7 years ago

That 95% is most likely more figurative than factual.

stevesimmons|7 years ago

My company has a 50m LOC Python codebase in a monorepo. It works really well, given the rate of change of thousands of developers globally. That is only possible because of the significant investment in devtools, testing and the deployment infrastructure.

Here is "Python at Massive Scale", my talk about it at PyData London earlier this year:

https://youtu.be/ZYD9yyMh9Hk

timkrueger|7 years ago

We have been working with a monorepo since September 2017. I wrote about the migration:

https://timkrueger.me/a-maven-git-monorepo/

Our developers like it, because they can use 'mkdir' to create a new component, search through the complete codebase with 'grep', and navigate with 'cd'.

wrayjustin|7 years ago

> includes approximately one billion files

...

> including approximately two billion lines of code

_also_

> in nine million unique source files

I should insert a joke about how well the system would do if each source file contained more than two lines of code.

But seriously, this summary could use some work.

rpcastagna|7 years ago

Binary files (arbitrary example: images used for golden screenshots in tests) have no line counts and are likely skewing the numbers here -- in the way you're (logically) looking to interpret them at least.

From a system design perspective, being able to handle a large number of files regardless of type is an interesting challenge, as is being able to handle a large number of highly indexed text files. All three of those statistics seem potentially interesting for different audiences that might read this paper.

tsycho|7 years ago

It's not just devops that you need to pull off a large monorepo; the other big thing is a strong testing culture. You have to be able to rely on unit tests from across the code base being a sufficient indicator of whether your commit is good. AND a presubmit process that can compute which parts of the monorepo get affected by your diff, and run tests against them automatically before committing your diff.

Google not only has the above but also has a strong pre-submission code review process which catches large classes of bugs in advance.

vbezhenar|7 years ago

I've used a monorepo for a few small related projects and it worked just fine for me. It's much easier to make related changes across several projects.

joe_fishfish|7 years ago

This is probably a stupid question, but I couldn't find an answer. Does this mean Google keeps all of its different products in all their different languages and environments in one repo? So like, Android lives in the same repo as Gmail, which is the same repo as all the Waymo code and the Google search engine code as well? That seems insane to me.

krackers|7 years ago

Android & chromium are kept outside the monorepo

p-schultz|7 years ago

Yes, exactly. The self-driving car is in there too.

growse|7 years ago

Why does that seem insane?

paulddraper|7 years ago

Version controlled repositories are like business offices.

You can have your entire company in one location, or the entire company in separate locations. The most important thing is the logical rather than physical organization: team structure, executive leadership, inter-org dependencies, etc. You can achieve autonomy and good structure with or without separate locations.

A single location reduces barriers, but at some point multiple locations can solve physical and logistical challenges. The general rule of thumb is to own and operate office space in as few locations as possible, but at some point you have to take drastic measures one way or another.

(Notice that Google had to invent their own proprietary version control system just for their monorepo. And not even Google actually uses a single repo as the source of truth: e.g. Chromium and Android.)

paulie_a|7 years ago

I'm sure it's okay when properly organized, but from what I've seen it's mediocre at best; with legacy code and technical debt it's a huge mistake.

Start breaking that repo apart, because it probably isn't well organized, depending on how much debt exists.

hayleox|7 years ago

One of the big advantages of the monorepo is actually that it prevents technical debt from accumulating. If a change somewhere else breaks your code, you can't put off dealing with it -- you are forced to fix the issue immediately.

alexeiz|7 years ago

> Trunk-based development. ... is beneficial in part because it avoids the painful merges that often occur when it is time to reconcile long-lived branches. Development on branches is unusual and not well supported at Google, though branches are typically used for releases.

This sounds like the SVN model to me where branches are cumbersome and therefore they are very rare. After getting used to the Git branching model where branches are free and merges are painless, it would be very hard to go back to the old development model without branches.

jbergknoff|7 years ago

How does CI work with a monorepo? Do you always have to run all the tests and build all the artifacts? Or are there nice ways to say "just build this part of the repo"?

dekhn|7 years ago

You specify targets. Just like using bazel: bazel build //tensorflow/blah/....

I maintain a small part of the monorepo, and it's really nice to be able say "Run every test that transitively depends on numpy with my uncommitted changes", so you can know if your changes break anybody who uses numpy when you update the version.

Personally I think it would be neat if there was an external "virtual monorepo" that integrated as-close-to-head of all software projects (starting at the root, that's things like absl and icu, with the tails being complex projects like tensorflow), and constantly ran CI to update the base versions of things. Every time I move to the open source world, I basically have to recompile the world from scratch and it's a ton of work.

dlubarov|7 years ago

It's flexible; presubmit tests can be configured per-directory. There's also an option to run all tests of packages that could be affected by a change based on the Blaze dependency graph.

If you're making changes to a package with tons of dependencies such as Guava, for a risky change you might want to run all affected tests, but for a minor change you might want to run just the standard unit tests. As a compromise, there's also an option to run a random sample of affected tests.

FartyMcFarter|7 years ago

For safe-looking changes, it's OK to only run a subset of the tests (usually including the tests that directly test the changed library).

For changes that are more likely to break distant code, you can run all tests (perhaps bundling together several changes in order not to overload the system).

Alternatively you can take the risk of breaking tests post-submit... this is not very good citizenship, but in some cases it might be reasonable (when the risk is small).

nicodjimenez|7 years ago

I have slight experience with both monorepos and smaller repos, and I think both can work. The advantage of smaller repos is that they force different components to expose well-designed APIs. Bigger repos make sense for products and embedded software; smaller repos make sense for platforms built up of small services communicating over the internet.

djur|7 years ago

Smaller repos force different components to expose APIs, but I don't think it forces or even encourages the APIs to be well designed. In some cases, having work spread across multiple repos can impede iterative development, meaning that you risk half-assed or, uh, two-and-a-half-assed implementations.

Also, when someone's asking for review for a change that encompasses, say, a change to a service, a change to a client library for that service, and a change to 2-3 other services that use that client library, I know that I cringe a little when suggesting a change, knowing that to implement it is going to require a commit on all of these different repos, waiting for CI to run on each one, etc. I try to only use that impulse to counter the urge to bikeshed, but the temptation is there.

jorblumesea|7 years ago

Is this really relevant for anyone except for "google scale" companies? For most teams, managing 30-40 services backed by git repos isn't a huge task and doesn't cause many problems.

Is there mature tooling that helps teams manage this, or is this proprietary google magic tooling?

fastball|7 years ago

Most teams can probably get by with much fewer than 30-40 services. Unless you have 30-40 groups within your team.

testcross|7 years ago

I don't understand why gitlab/github/bitbucket don't provide better tools for monorepos. The topic is pretty trendy, but there are absolutely no tools helping with access control, good CI, ...

malkia|7 years ago

What's missing in these is cross-referencing, which is not possible without a somewhat established BUILD system (caps "pun intended") - e.g. bazel/BUILD - plus a source code indexer, etc.

This becomes very critical for doing reviews, since it allows you to "trace" things without running them, among many other uses - for example, large-scale refactorings that look for usages of functions.

Why can't github/gitlab/etc. do it? Because there could hardly be one all-encompassing BUILD system to generate this index correctly.
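A purely lexical toy to illustrate what a cross-reference index is. A real indexer (e.g. Google's Kythe) derives this from the build system and compilers, not from pattern matching; the function and file names here are invented:

```python
import re
from pathlib import Path

def build_xref(root, glob="*.py"):
    """Map each identifier to the set of files that mention it.

    Purely lexical - enough to show the idea, but useless for semantic
    queries like "all callers of this specific overload", which is why
    the build system has to be involved at scale."""
    index = {}
    for path in Path(root).rglob(glob):
        for ident in set(re.findall(r"[A-Za-z_]\w+", path.read_text())):
            index.setdefault(ident, set()).add(path.name)
    return index
```

With even a crude index like this, a review tool can turn every mention of a symbol into a link to its other usages; the hard part at monorepo scale is keeping it fresh and semantically accurate.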

axaxs|7 years ago

Sorry, but as someone who has been in orgs that do both, the monorepo is a mistake: the constant need to pull unrelated changes before pushing, pipelines having to grab the whole repo for dependencies, etc. I understand the arguments for a monorepo, but I don't think they outweigh the cons.

robaato|7 years ago

Well those are issues around having a git mono repo - where the repo is the unit of change - you get it or you don't.

With mono repos such as SVN or Perforce you just work on whatever subset you want.

prepend|7 years ago

I love these articles. Is there a wiki or collection of detailed descriptions of large companies' tech practices that isn't marketing blargh?

I read years ago about Google's data ingest and locator process but neglected to bookmark it, so now I can't find the reference.

mlinksva|7 years ago

Me too. I don't know of a collection, but others can be found at https://ai.google/research/pubs/ https://research.fb.com/publications/ https://www.microsoft.com/en-us/research/search/?q&content-t... and similar (though only a small fraction give hints about at scale practices, and those would be neat to collect in one place).

Closely related to this post: just noticed a 2018 case study on Advantages and Disadvantages of a Monolithic Repository https://ai.google/research/pubs/pub47040

gervase|7 years ago

Should probably have a [2016] tag.

guessmyname|7 years ago

Indeed, but to be fair, the information in the article is based on several research papers from 2011 [1].

And I am 100% sure the idea of having a monolithic project is several years older than that.

I am grateful that the article is re-posted on multiple websites, because just the other day I was in an interview and, while doing my coding challenge, overheard the conversation of a young computer science graduate and another interviewer. The interviewer asked him to explain what a monolithic repository was and what its benefits were. This guy had no idea what the interviewer was talking about, and right there I realized that what many of us take for granted terminology-wise in the IT world will certainly be a foreign language to young students who are just entering the workforce.

[1] http://info.perforce.com/rs/perforce/images/GoogleWhitePaper...

tflinton|7 years ago

A repo including configuration and data.

How about we stop considering google an engineering leader and just a search leader?

curtis|7 years ago

I think monorepos make a lot of sense when you're talking about millions of lines of code. I'm not at all sure they make sense when you're talking about billions.

gravypod|7 years ago

I don't think the number of lines matters; the interconnection of your code matters. If you have two sets of services that are completely uncoupled, then having two monorepos for those two deployments makes sense. If you can guarantee atomic changes across all services that interconnect, you have the benefits monorepos give you.

jldugger|7 years ago

Well, this particular monorepo has two billion LoC. But it's not a git monorepo, which matters significantly.

fizixer|7 years ago

I don't care about that. For me this is incomprehensible:

Why the eff does Google have billions of lines of code in their repo?

I hope they are not counting revisions (e.g., if a single 1-million-line project has 100 revisions, that's 1 million lines, not 100 million).

I have heard that they do count generated code (so it's not all handwritten code). In that case again, I have two things to say:

- That's a bad metric. I could generate a billion lines of code overnight, with each line a printf of number_to_word of the numbers from 1 to a billion. If they want to measure the size of the repo, they should tell us gigabytes, terabytes, etc. When it's lines of code, it's cheesy and childish to blow up the measure by including generated code.

- But more importantly, I hope the generated code is 90% or more of that repository, because any less would mean that Google engineers have handwritten 100 million or more lines of code throughout the lifetime of the company, in which case I have to ask: what bloated mess do you have on your hands? I thought you guys were the top engineers of the world.