agounaris|5 years ago
Monorepos are not efficient. They are easier to manage when a team is small, but as the team grows and you have more and more deliverables with separate versioning, you end up introducing control structures into your automation. Complexity explodes!
Anyway, none of this matters if you don't make any profit :)
marcinzm|5 years ago
The complexity exists whether it's a monorepo or many separate repos. A monorepo lets you encode that complexity as versioned code in your build system. Separate repos encode it across people's heads, wikis and who knows what else. Hiding complexity doesn't mean it doesn't exist, just that it will bite you 10 times as hard eventually.
jeffbee|5 years ago
I like to look at Google's GitHub commit messages to get an idea of the pace of their revision history. Yesterday they committed something with a Piper revision of 311324901. A month ago it was 306514102, and a year ago it was 248381230. That's about 160k revision numbers per day.
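A rough check of that arithmetic, using the Piper revision numbers quoted above (the "a month ago" and "a year ago" windows are approximate, so treat these as ballpark figures):

    # Piper revision numbers from the comment above; windows are approximate.
    now=311324901
    month_ago=306514102
    year_ago=248381230
    per_day_month=$(( (now - month_ago) / 30 ))
    per_day_year=$(( (now - year_ago) / 365 ))
    echo "revisions/day over the last month: $per_day_month"   # ~160k
    echo "revisions/day over the last year:  $per_day_year"    # ~172k

Both windows land in the same 160k-170k/day range, which is consistent with the claim.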
tehlike|5 years ago
Disclaimer: ex-googler.
elteto|5 years ago
If you need to build N libraries and M executables then you are going to have N+M targets (assuming building for one arch only) whether you use a monorepo or not... Either way you are going to issue N+M build commands.
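The counting argument as a trivial sketch (N and M here are made-up numbers; the only point is that the sum does not depend on repo layout):

    # Hypothetical target counts. The total is the same whether the
    # targets live in one monorepo or are spread across many repos.
    N=40                    # libraries
    M=25                    # executables
    total=$(( N + M ))      # build targets (and commands) needed either way
    echo "total targets: $total"
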
Also, in bazel you can do

  bazel build //...

to do a full build of all targets in a workspace. If they are not doing this but are instead passing each target name individually, that probably means they are only building a subset of everything, and even that subset is too large to pass in a single command-line invocation. I'll grant you that it seems excessive to have that many targets, but again I don't see having so many targets as a problem specific to the monorepo.
jayd16|5 years ago