Last year I tried doing a side project with a talented ex-Google buddy who insisted we set up Bazel to replace my simple Makefile. Three weeks later it still wasn’t working on my Windows box. We had a mixed Python and C++ code base, and I like to use MinGW64 gcc on Windows. He blamed Windows and tried to get me to switch to Mac (no thanks, lol), and eventually he lost interest and gave up. The project went on to win an OpenCV-funded competition and became the basis of a startup — good job, GNU Make!
So the answer IMHO to “when to use Bazel” is “never” :)
In general, if someone wants to beef up the tooling in my project that I know is fine to begin with, I still say "I don't mind, go ahead, but I will continue using my old tooling until yours works with no regression." Half the time, they abandon it. I don't even know how Bazel works, because I've never felt enough pain with makefiles etc. to research an alternative.
Also, Google tooling knowledge doesn't transfer very well to outside projects. Everything is special on the inside there. They use an internal version of Bazel called Blaze that's totally integrated with everything, with entire teams dedicated to the tooling, and only a few officially supported programming languages, so of course it works smoothly.
My experience is that Windows is a constant source of headaches if you're supporting development and builds on multiple platforms and trying not to just write every build-related thing twice, Bazel or no Bazel. Likelihood of it being a PITA goes up fast the more complex the build & tools (probably why Make did better than Bazel).
I assume it's OK if it's your only platform—though I got my programming start on Windows, with open-source languages, and distinctly recall how managing the same tools and builds got way easier and more reliable when I switched to Linux. Luckily WSL2 is getting semi-decent and can call out to Windows tools, so it's getting easier to standardize on one or another unixy scripting language for glue, even if some of your tools on Windows are native.
Part of the trouble with it, though, is that there are multiple ways to end up with some kind of linux-ish environment on it, and that all of them introduce quirks or oddities. You end up having to wrestle with stupid questions like "which copy of OpenSSH is this tool using?" It's probably less-bad if you're in a position to strictly dictate what Windows dev machines have installed on them, I suppose. Git is installed, but only with options X, Y, and Z checked, no other configurations allowed; WSL2 is installed and has packages A, B, and C installed; and so on, crucially preventing the installation of other things that might try to pile on more weirdness and unpredictability.
Those kinds of messes are possible on macOS or Linux or FreeBSD or what have you, but generally don't happen. I think it happens on Windows because every tool's trying to vendor in various Linux-compat dependencies rather than forcing a particular system configuration on users or dealing with whatever maybe-not-actually-compatible similar tools the user has installed. So they vendor them in to reduce friction and the rate of spurious bug reports. Basically, the authors of these tools are seeing the same problems as anyone trying to configure cross-platform development with Windows in the mix, and are throwing up their hands and vendoring in their deps, which compounds the problem for anyone trying to coordinate several such tools, because everything tends to end up with weird configurations that don't play well together.
I mean, Python (moreso its ecosystem) is notoriously difficult to get working w/ Bazel idiomatically. Bazel is advertised as a language agnostic system (and to be fair, it's better at it than, say, Buck), but in practice it tends to work better w/ stacks that already have some degree of hermeticity (Go, Java), whereas YMMV very much with the more loosey-goosey stacks (Python, JS).
IMHO, Bazel is a classic example of Conway's law[0], and it falls squarely in the "big corp" class of software. You have to be running into issues like 6+ digit CI compute costs or latency SLOs in hundreds-of-teams monorepos before Bazel really starts to make sense as a potential technical solution.
[0] https://en.wikipedia.org/wiki/Conway%27s_law
I've used bazel at medium size (100+ engineers, multiple languages) and tried to use it at a small size (20 engineers, almost all go with a tiny bit of C) and I think the "never" and "unless there's no other way" answers from this interview are pretty good.
I mean, was there anything wrong with the makefile? Did your product need reproducible builds?
This is becoming more and more of a pet peeve of mine, bordering on an outright annoyance: adding or changing tools without anything actually improving, or the purported improvement not being relevant, let alone the most important thing to spend time on.
https://mcfunley.com/choose-boring-technology is a good starting point for moving away from this mindset.
I had a Go project up until earlier this year, I could set it up myself. I picked makefiles to build it (not bazel, that would be overkill; not mage, that would mean more coding), and it worked just fine. I never had a compelling reason to move away from it.
Actually it was more than fine, it was a relief; my previous experience with tools like that had been Maven (where everything is XML and you need a plugin to do basic things like move or remove a file) and the Javascript ecosystem, from before everything was merged into package.json / nodejs-style. Dependency management too.
I used bazel for my latest project, mainly as an excuse to learn it. I ended up spending waaay too much time debugging bazel instead of working on my code, and I still can’t properly support windows because dependencies don’t build.
I will never use bazel again. Not worth the effort required, and doesn’t work out of the box as advertised.
That's like saying you should never build a house with concrete foundations because they take so long to dig and pour and the first layer of bricks doesn't need them anyway!
Get back to me when you work for a company with a monorepo that has to build and test everything in CI for every change (taking something like 200 CPU hours) because they didn't have the foresight to use Bazel.
You are talking about a two-man project becoming a one-man project. You are not the target of a complex build system. At that size, you would be fine with basically anything.
I’m really glad that Meson and Ninja exist however. I hate GNU Make like few things in my toolbox. M4 really is an awful language.
I think it's more that for super small startups, the best move is always to keep things as simple as possible (like basic build/deploy scripts) and work more on the product.
Bazel is a fully reproducible and hermetic build system. A lot of painstaking work goes into it producing the exact same artifacts build after build. And that provides some interesting properties that you can leverage for artifact caching, deployments, and CICD.
We very happily run a polyglot monorepo w/ 5+ languages, multiple architectures, with fully reproducible artifacts and deployment manifests, all deployed to almost all AWS regions on every build. We update tens of thousands of resources in every environment for every build. The fact that Bazel creates reproducible artifacts allows us to manage this seamlessly and reliably. Clean builds take an hour+, but our GH self-hosted runners often complete commit-to-green-build for our devs in less than a minute.
The core concept of Bazel is very simple: explicitly declare the input you pass to a rule/tool and explicitly declare the output it creates. If that can click, you're half way there.
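That concept is easiest to see in a minimal BUILD file. This is only a sketch, and the file and target names here are invented, not from any repo discussed above:

```starlark
# Every action names its inputs and outputs; nothing else is visible to it.
genrule(
    name = "version_header",
    srcs = ["version.txt"],  # declared input
    outs = ["version.h"],    # declared output
    cmd = "echo \"#define VERSION \\\"$$(cat $<)\\\"\" > $@",
)

cc_library(
    name = "core",
    srcs = ["core.cc"],
    hdrs = ["core.h", ":version_header"],  # consumes the generated header
)
```

Because the action sees only what it declares, Bazel can sandbox it and cache its output keyed on the hashes of exactly those inputs.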
> Bazel is a fully reproducible and hermetic build system.
Yes, and it's very important to note that Bazel does nothing to solve the problem of having a reproducible and hermetic runtime. Even if you think you aren't linking against anything dynamically, you are probably linking against several system libraries, which must be present in the same versions to get reproducible and hermetic behavior.
This is solvable with Docker or exceptionally arcane Linux hackery, but it's completely missing from the Bazel messaging and it often leaves people thinking it provides more than it really does.
Having spent a great deal of time getting bazel set up as you describe, I feel you have given readers a misleading impression. Bazel does not come out of the box that way. It uses whatever toolchain is laying around on the host, by default. It builds with system headers and links with system libraries. It looks at your environment variables. You need to do a lot of surgery on the toolchain to make it hermetic and reproducible.
The concept of explicitly declared inputs and outputs is awesome. But the closed build environment and the necessity of defining builds down to the compiler level make Bazel complex and break some developer workflows.
For this reason we are building a Bazel competitor with a less restrictive build environment, which can also be applied to projects with fewer than 1M lines of code.
I've been using Bazel for side projects these days, including small retro game projects for old game consoles. The entry price was high, but it works so well I have a hard time imagining working without it.
For retro game projects, the core of your game might be written in C or C++, which you want cross-compiled. That's easily within reach of stuff like makefiles. But then I start adding a bunch of custom tooling--I want to write tools that process sprites, audio, or 3D models. These days I tend to write those tools in Go. I'm also working with other people, and I want to be able to cross-compile these tools for Windows, even though I'm developing on Linux or macOS.
My Bazel repository will download the ARM toolchain automatically and do everything it needs to build the target. I don't really need to set up my development environment at all--I just need to install Bazel and a C compiler, and Bazel will handle the rest. I don't need to install anything else on my system, and the C compiler is only needed because I'm using protocol buffers (Bazel downloads and compiles protoc automatically).
Author here. I wanted to get my head wrapped around when Bazel was an excellent build solution, so I interviewed six people with a lot of Bazel experience and picked their brains.
This somewhat long article hopes to answer questions about Bazel for future people in a position similar to mine. If you know a lot of Bazel then you might not learn much, but if you’ve vaguely heard of it and are not sure when it's the tool that should be reached for, I’m hoping this will help.
I particularly enjoyed the history aspect of it; these things give you the broad contours of where pools of experience exist and which specific organizations were responsible for driving and advancing things.
I lost a month learning Bazel last year. Never again. Here’s a challenge: create a Bazel-built Angular library using only the publicly available documentation. Here, I’ll save you some time: you can’t.
What documentation exists is flawed and what isn’t flawed has massive holes. It was so bad that I had to ask a few Google employees if there was secret internal documentation somewhere. There isn’t, and, they almost all hated using it as well.
I went back to Make and had the whole repo building in an afternoon.
Bazel is a great idea but like most other Google OSS projects it doesn’t have strong enough documentation to form a community or enough of a community to create good docs.
Even if I brute forced my way into using Bazel, I couldn’t ask an employee to learn it.
We use Bazel’s rules_docker as well, and I would caution anyone evaluating it with a note from our experience.
What Bazel does well (and to the extent Bazel fits your use-case), it does extremely well, and it is a reproducible joy to use.
But if you stray off that path even a tiny bit, you’re often in for a surprisingly inexplicable, unavoidable, far-reaching pain.
For example, rules_docker is amazing at laying down files in a known base image. Everything is timestamped to the 1970 unix epoch, for reproducibility, but hey, it’s a bit-perfect reproduction.
Need to run one teensy executable to do even the smallest thing that’s trivial with a Dockerfile? Bazel must create the image, transfer it to a docker daemon, run the command, and transfer it back... your 1 KB change just took 5 minutes and 36 GB of data being tarred, gzipped, and flung around the (hopefully-local) network.
It may not be a dealbreaker, and you may not care, but be forewarned that these little surprises crop up fairly often from unexpected quarters!
Edit: after 2-ish years of Bazel, I would say that for 99% of developers and organizations, the most likely answer is "never".
I worked at Coinbase for four years, where Bazel is the build system of choice.
It's so so much worse than the homegrown software it replaced.
Hermetic sounds nice in theory, but doesn't actually matter and comes with a massive cost.
1. Bazel is slow. Like really fucking slow compared to your native build tools. Startup time can be insane if you use multiple rules (languages) in your repo. There's a great feeling when you get to work in the morning to build your go project but you have to install the latest python, nodejs, and ruby toolchains because someone updated them on master. The cache works well, except with 1000 devs something will always be invalidated on master.
2. The documentation sucks. It explains concepts and how things work, with no examples of how to actually complete the tasks you care about. Not that examples would help much anyway, because the bazel setup you're using is heavily customised.
3. Everyone on your team now has to learn yet another DSL, and a bunch of commands to run. I would easily spend 5 hours per week either waiting for, or debugging some issue in bazel.
All this for what benefit? Everyone's on the same version of a dependency? Not even sure this is a desirable property.
Also the monorepo is slow as jelly to work with and many tools or editors struggle to open it. Good luck getting code completion to work well too.
It's one of those ideas that are nice in theory, but awful in practice. It's possible that with enough manpower you may be able to use it effectively, but we certainly were not.
My take: avoid Bazel as long as you can. For most companies the codebase is not big enough to actually need distributed builds. If you've hit this problem, Bazel is probably the best thing you can do today, and if you're that big you can probably spare the few dozen headcount needed to make the Bazel experience in your company solid.
Bazel takes on dependency management, which is probably an improvement for a C++ codebase where there is no de-facto package manager. For modern languages like golang, where a package manager is widely adopted by the community, it's usually just a pain. E.g. Bazel's offering for golang relies on generating "Bazel configurations" for the repositories to fetch; this alternative definition of dependencies is not what the existing go tooling expects, so to keep the dev tooling working properly you end up generating one configuration from the other, leaving two sources of truth and pain whenever they somehow mismatch.
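For context, the two-sources-of-truth dance described above usually runs through Gazelle. A rough sketch (the module prefix here is hypothetical):

```starlark
# BUILD.bazel at the repo root. Gazelle generates/updates Go targets from
# the sources and go.mod, so go.mod remains what the `go` tooling reads
# while Bazel consumes the generated BUILD files.
load("@bazel_gazelle//:def.bzl", "gazelle")

# gazelle:prefix github.com/example/monorepo
gazelle(name = "gazelle")
```

Running `bazel run //:gazelle` after every import or dependency change is the step that keeps the two definitions in sync; skip it and they drift, which is exactly the mismatch pain described above.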
Bazel hermeticity is very nice in theory, in practice many of the existing toolchains used by companies that are using Bazel are non-hermetic, resulting in many companies stuck in the process of "migration to Bazel remote execution" forever.
Blaze works well in Google's monorepo where all the dependencies are checked in (vendored). The WORKSPACE file was an afterthought when Bazel was open-sourced, and in practice the whole process of fetching remote dependencies becomes a pain for big monorepos (I just want to build this small golang utility with `bazel build //simple:simple`, and I end up waiting for a whole bunch of python dependencies I don't need to be downloaded).
And this is all before talking about Javascript, if your JS codebase wasn't originally designed the way Bazel expects it you're probably up for some fun.
I find Neil Mitchell's categories of small[1], medium[2], and huge[3] build systems useful. Blaze is absolutely fantastic as a huge build system, as it can correctly specify the exact semantics of systems with lots of components and complexities such as cross-language bindings, autogenerated files, etc. If you are building a huge system, then a small or medium build tool just won't be up to the task. At that kind of scale, somebody is going to be responsible for the build system working, possibly as a full time job, and it makes sense to tap somebody who understands how Bazel works and knows how to put those ideas into practice.
Conversely, as many comments here observe, it is terrible as a small build system. There, you want to be able to get started quickly and pull in dependencies without thinking too hard. A simple but easy to understand approach (even one based on make) might work. This is just a different problem than what Bazel solves.
My personal opinion (and I should emphasize, I am absolutely not speaking for anyone here, least of all Google) is that the best way forward is to take the ideas of Bazel (hermetic and deterministic builds) and package them as a good small build system, perhaps even compatible with Bazel so you don't have to rewrite build rules all the time. I also think compilers and tools can and should evolve to become good citizens in such a world. But I have no idea how things will go, it's equally plausible the entire space of build systems will continue to suck as they have for decades.
I completely agree with this, having spent an awful lot of time with both Bazel and Make. There is a much tighter, cleaner, simpler build system within Bazel struggling to get out. A judicious second take at it with a minimal focus, while taking some of the best ideas, could be wildly successful I think.
It's on my list of things that I will inevitably never get to.
> the best way forward is to take the ideas of Bazel (hermetic and deterministic builds) and package them as a good small build system, perhaps even compatible with Bazel so you don't have to rewrite build rules all the time.
Personally, I've found Bazel's tooling and dependency management to be extremely aggressive at pushing you to online-only development as your project scales in size. A company I worked for that used it lost multiple person-days for every engineer they had when Covid hit and the VPN went to crap.
It's great at being able to offload work to a remote server, but in my opinion that should never be the only way you can get work done. Local development should always be the default, with remote execution being an _option_ when available.
I tried using Bazel a while back but immediately ran into a few issues.
The existing codebase I was working with did not lay out its dependencies and files in the manner expected by Bazel so dealing with dependency/include hell was frustrating.
Then there was a large portion of the project that depended on generated code for its interfaces (i.e. similar to protobuf but slightly different), and trying to dive into the Bazel rules and toolchain with no other Bazel experience was not fun. I attempted to build off of the protobuf implementation but kept finding additional layers to the onion that didn't exactly translate to the non-protobuf tooling. The project documentation seemed out of date in this area (i.e. major differences in the rules engine between 1.0 and subsequent versions) and I couldn't find many examples to look at other than overly simplified toy examples.
All in all a frustrating experience. I could not even get far enough along to compile the interfaces for the project.
My experience with Bazel at a startup (also used Blaze at Google):
The good:
- Amazing for Go backends. I can push a reproducible Docker image to Kubernetes with a single Bazel command. I can run our entire backend with a single command that will work on any developer's box.
- Amazing for testing. All of our backends tests use a fully independent Postgres database managed by Bazel. It's really nice not having to worry about shared state in databases across tests.
- We can skip Docker on macOS for development which provides on the order of a 10x speedup for tests.
- BuildBuddy provides a really nice CI experience with remote execution. Bazel tests are structured so I can see exactly which tests failed without trawling through thousands of lines of log output. I've heard good things about EngFlow but BuildBuddy was free to start.
- Really nice for schema driven codegen like protobufs.
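(For the curious, the single-command image push mentioned above looks roughly like this with rules_docker. This is a hedged sketch; the binary name, registry, and repository are all invented:)

```starlark
load("@io_bazel_rules_go//go:def.bzl", "go_binary")
load("@io_bazel_rules_docker//go:image.bzl", "go_image")
load("@io_bazel_rules_docker//container:container.bzl", "container_push")

go_binary(
    name = "server",
    srcs = ["main.go"],
)

# Wraps the binary in a small, reproducible base image.
go_image(
    name = "server_image",
    embed = [":server"],
)

# `bazel run //backend:push` builds the image and pushes it in one step.
container_push(
    name = "push",
    image = ":server_image",
    format = "Docker",
    registry = "registry.example.com",
    repository = "backend/server",
    tag = "latest",
)
```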
The bad:
- Bazel is much too hard for TypeScript and JavaScript. We don't use Bazel for our frontend. New bundlers like Vite are much faster and have a developer experience that's hard to replicate with Bazel. Aspect.dev is doing some work on this front. One large hurdle is there's no automatic BUILD file dependency updater like Gazelle for Go.
- Windows support is too hard largely because most third party dependencies don't work well with Windows.
- Third party dependencies are still painful. There's ongoing work with bzlmod but my impression is that it won't be usable for a couple of years.
- Getting started was incredibly painful. However, the ongoing maintenance burden is a few hours per month.
> Bazel is much too hard for TypeScript and JavaScript. We don't use Bazel for our frontend. New bundlers like Vite are much faster and have a developer experience that's hard to replicate with Bazel. Aspect.dev is doing some work on this front. One large hurdle is there's no automatic BUILD file dependency updater like Gazelle for Go.
I built this[0] for the Bazel + yarn setup that we use at Uber. We currently manage a 1000+ package monorepo with it.
> I can run our entire backend with a single command that will work on any developer's box.
Curious wouldn't `go run` give you the same? pure go code is supposed to be portable, unless you have cgo deps I guess?
> I can push a reproducible Docker image to Kubernetes with a single Bazel command.
That's definitely an upside over what would otherwise probably default to a combination of Dockerfiles and scripts/Makefiles. Is it worth bringing in the massive thing that is Bazel? Depends, I guess.
I'm curious: would you say your experience with golang IDEs / gopls is degraded? Did you do anything special to make it good? I often feel like development is clunkier, and I often just give up on the nice-to-haves of a language server. E.g. some dependencies in the IDE aren't properly indexed; I could probably get Bazel to do some fetching and reindex to get it working, but that takes 3-4 minutes, so I often choose to live with things appearing "broken" in the IDE and getting fewer IDE features.
I migrated a monorepo including C++, Kotlin, Java, TypeScript and Python to Bazel. It's no small feat, and the DX varies widely across languages and platforms, but it's absolutely worth it. `bazel test //...` from a fresh clone builds & tests everything including gRPC/protobuf code generation, custom code generation, downloading packages, toolchains and linters, dependencies between langs, test suites with coverage across multiple languages.
Integration testing is a breeze through data dependencies. The reproducibility guarantees mean we can reference container image SHAs in our Terraform, and if the image didn't change, the deploy is a no-op.
Bazel is an outstanding build system that handily solves a lot of practical problems in software engineering. Not just "at scale". Practical problems at any scale.
Regardless of whether you should use Bazel or not, my hope is that any future build systems attempt to adopt Bazel's remote execution protocol (or at least a protocol that is similar in spirit):
Tools like Earthly (and other BuildKit HLB frontend languages) will help more teams get some of the main benefits of Bazel with a lower bar of complexity. If you already know how to write Dockerfiles and Makefiles, you can write Earthfiles without much additional learning necessary. It provides a build that is repeatable, incremental, self-contained, never dirty, shared-cacheable, et cetera.
There’s an issue I reported (along with a proof-of-concept fix) over 4 years ago that has yet to be fixed: building a mixed-source project containing Go, C++, and C++ protocol buffers results in silently broken binaries, because rules_go will happily fail to forward the linker arguments that the C++ build targets (the protobuf ones, using the built-in C++ rules) declare.
Not very confidence inspiring when Google’s build system falls over when you combine three technologies that are used commonly throughout Google’s code base (two of which were created by Google).
If you’re Google, sure, use Bazel. Otherwise, I wouldn’t recommend it. Google will cater to their needs and their needs only — putting the code out in the open means you get the privilege of sharing in their tech debt, and if something isn’t working, you can contribute your labor to them for free.
Does protobuf rely on static initialization, or did you link the wrong bug? That would be extremely strange, because there’s generally a disallowance of any/all non-trivial static initialization in library code Google-wide, especially for protobufs.
Bazel is a great tool, but as mentioned in the article the support from Google is rather limited. If bazel ever got to the level Terraform is with providers it could really take off. Something I'd love to see is a .bazel_mod file where you could pull in other bazel workspaces (similar to go.mod) and reference packages within those workspaces.
The thing is, something like Bazel only pays dividends on a large complex project. When you're starting from scratch it's unlikely you'll have any problems that Bazel helps to solve, but you'll be spending a lot of time on the ceremony of setting it up.
One big gotcha that folks need to appreciate is that you don't get the full power of Bazel just by getting baseline Bazel to work and build things. You need to be really sensitive to your physical dependencies: don't lump code together just because it plays a similar role and is convenient to keep in one file, when that source could be sharded out into distinct files.
You need to partition source into all its relevant pieces and put no more functionality in each piece than its clients need. Bazel encourages the most minute, fine-grained resolution of dependencies, because if you do that and manage it right you unlock the power of the cache. Once you successfully partition all the bits of software into their segregated sections of the build graph, you're positioned to really leverage the test result cache and can start bypassing huge, expensive tests that are completely irrelevant to the body of code you're developing on.
Bazel is not just a build toolchain; it is a test toolchain as well. Running only the tests relevant and applicable to the changes being pushed is something many people have dreamed about, or approximated with some crude dependency heuristic. If my test did not print out `(cached)`, I know something in the dependency graph was disturbed in some way that may be completely non-obvious to me, especially in a gigantic repo.
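Concretely, the partitioning described above is the difference between one coarse target and several narrow ones. A sketch with invented file and target names:

```starlark
# Coarse: any change to parsing invalidates serialization tests too.
# cc_library(name = "util", srcs = ["parse.cc", "serialize.cc"], ...)

# Fine-grained: each test depends only on the slice it exercises, so
# `bazel test //...` reports `(cached)` for everything a change can't reach.
cc_library(name = "parse", srcs = ["parse.cc"], hdrs = ["parse.h"])
cc_library(name = "serialize", srcs = ["serialize.cc"], hdrs = ["serialize.h"])

cc_test(name = "parse_test", srcs = ["parse_test.cc"], deps = [":parse"])
cc_test(name = "serialize_test", srcs = ["serialize_test.cc"], deps = [":serialize"])
```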
Bazel query is also worth mentioning to visualize dependencies with its xml output and which can also enforce architectural barriers asserting empty `somepath` queries between two targets that should not meet in the middle somewhere.
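The architectural-barrier trick can even be wired into the build itself via `genquery`, which evaluates a query at build time (a sketch; the package and target names are invented):

```starlark
# Writes the somepath result to a file. An empty result means no dependency
# path exists between the two targets; a tiny test asserting the file is
# empty turns this query into a CI gate.
genquery(
    name = "no_ui_to_storage_path",
    expression = "somepath(//ui:ui, //storage:impl)",
    scope = [
        "//ui:ui",
        "//storage:impl",
    ],
)
```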
It would be neat one day if bazel integrates in some semantic logical level of dependency checking beyond the physical but that may be too expensive of an operation.
But where it does work well is when you have a large, complex codebase in a monorepo and you need reliable caching to keep build times down.
To take it a step further. I never left the "comfort" of bash-build and bash-deploy scripts.
In most of my projects (pro and personal) there is a deploy-aws-prod.sh and deploy-aws-dev.sh (or some variation)
None is longer than 10-20 bash commands.
These "projects" are webservices, ml-models, batch-processing (think distributed clusters).
It's ugly and perfect-enough at the same time.
YMMV
[+] [-] adastra22|3 years ago|reply
I will never use bazel again. Not worth the effort required, and doesn’t work out of the box as advertised.
[+] [-] IshKebab|3 years ago|reply
Get back to me when you work for a company with a monorepo that has to build and test everything in CI for every change (taking something like 200 CPU hours) because they didn't have the foresight to use Bazel.
[+] [-] WastingMyTime89|3 years ago|reply
I’m really glad that Meson and Ninja exist, however. I hate GNU Make like few things in my toolbox. M4 really is an awful language.
[+] [-] londons_explore|3 years ago|reply
Specifically, Bazel is really at home with Java/C++ and Linux. Sure, it kinda works elsewhere, but you should be considering other options.
[+] [-] strikelaserclaw|3 years ago|reply
[+] [-] jsw|3 years ago|reply
We very happily run a polyglot monorepo with 5+ languages and multiple architectures, with fully reproducible artifacts and deployment manifests, all deployed to almost all AWS regions on every build. We update tens of thousands of resources in every environment for every build. The fact that Bazel creates reproducible artifacts allows us to manage this seamlessly and reliably. Clean builds take an hour+, but our GitHub self-hosted runners often go from commit to green build for our devs in less than a minute.
The core concept of Bazel is very simple: explicitly declare the input you pass to a rule/tool and explicitly declare the output it creates. If that can click, you're half way there.
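A minimal BUILD-file sketch of that idea (the target and file names here are hypothetical): a `genrule` names its inputs in `srcs` and its outputs in `outs`, so Bazel can hash the inputs and decide whether cached outputs are still valid:

```python
# BUILD -- hypothetical example; every input and output is declared explicitly.
genrule(
    name = "embed_version",
    srcs = ["VERSION"],       # declared input
    outs = ["version.txt"],   # declared output
    # $(location VERSION) expands to the input path; $@ expands to the
    # single declared output. Anything not declared is invisible to the rule.
    cmd = "cat $(location VERSION) > $@",
)
```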
[+] [-] bobsomers|3 years ago|reply
Yes, and it's very important to note that Bazel does nothing to solve the problem of having a reproducible and hermetic runtime. Even if you think you aren't linking against anything dynamically, you are probably linking against several system libraries which must be present in the same versions to get reproducible and hermetic behavior.
This is solvable with Docker or exceptionally arcane Linux hackery, but it's completely missing from the Bazel messaging and it often leaves people thinking it provides more than it really does.
[+] [-] jeffbee|3 years ago|reply
[+] [-] matnosner|3 years ago|reply
For this reason we are building a Bazel competitor with a less restrictive build environment, which can also be applied to projects with fewer than 1M lines of code.
https://bob.build
[+] [-] unknown|3 years ago|reply
[deleted]
[+] [-] dietrichepp|3 years ago|reply
For retro game projects, the core of your game might be written in C or C++, which you want cross-compiled. That's easily within reach of stuff like makefiles. But then I start adding a bunch of custom tooling--I want to write tools that process sprites, audio, or 3D models. These days I tend to write those tools in Go. I'm also working with other people, and I want to be able to cross-compile these tools for Windows, even though I'm developing on Linux or macOS.
My Bazel repository will download the ARM toolchain automatically and do everything it needs to build the target. I don't really need to set up my development environment at all--I just need to install Bazel and a C compiler, and Bazel will handle the rest. I don't need to install anything else on my system, and the C compiler is only needed because I'm using protocol buffers (Bazel downloads and compiles protoc automatically).
[+] [-] synergy20|3 years ago|reply
[+] [-] robbintt|3 years ago|reply
[+] [-] adamgordonbell|3 years ago|reply
This somewhat long article hopes to answer questions about Bazel for future people in a similar position to me. If you know a lot of Bazel then you might not learn much, but if you’ve vaguely heard of it and are not sure when it’s the tool that should be reached for, I’m hoping this will help.
[+] [-] lostdog|3 years ago|reply
I'm also surprised at how little experience everyone has had with Bazel alternatives. I wonder if Buck or Pants is easier to work with.
[+] [-] jxramos|3 years ago|reply
[+] [-] miiiiiike|3 years ago|reply
What documentation exists is flawed and what isn’t flawed has massive holes. It was so bad that I had to ask a few Google employees if there was secret internal documentation somewhere. There isn’t, and, they almost all hated using it as well.
I went back to Make and had the whole repo building in an afternoon.
Bazel is a great idea but like most other Google OSS projects it doesn’t have strong enough documentation to form a community or enough of a community to create good docs.
Even if I brute forced my way into using Bazel, I couldn’t ask an employee to learn it.
I hear that Blaze is much easier to work with.
[+] [-] unknown|3 years ago|reply
[deleted]
[+] [-] an_d_rew|3 years ago|reply
We use Bazel’s rules_docker as well, and I would caution someone evaluating it with a note from our experience.
What Bazel does well (to the extent Bazel fits your use case), it does extremely well, and it is a reproducible joy to use.
But if you stray off that path even a tiny bit, you’re often in for a surprisingly inexplicable, unavoidable, far-reaching pain.
For example, rules_docker is amazing at laying down files in a known base image. Everything is timestamped to the 1970 unix epoch, for reproducibility, but hey, it’s a bit-perfect reproduction.
Need to run one teensy executable to do even the smallest thing that’s trivial with a Dockerfile? Bazel must create the image, transfer it to a Docker daemon, run the command, and transfer it back... your 1 KB change just took 5 minutes and 36 GB of data being tarred, gzipped, and flung around the (hopefully local) network.
It may not be a dealbreaker, and you may not care, but be forewarned that these little surprises creep up fairly often from unexpected quarters!
Edit: after 2-ish years of Bazel, I would say that for 99% of developers and organizations, the most likely answer is "never".
[+] [-] eckesicle|3 years ago|reply
It's so so much worse than the homegrown software it replaced.
Hermetic sounds nice in theory, but doesn't actually matter and comes with a massive cost.
1. Bazel is slow. Like really fucking slow compared to your native build tools. Startup time can be insane if you use multiple rules (languages) in your repo. There's a great feeling when you get to work in the morning to build your Go project but first have to install the latest Python, Node.js, and Ruby toolchains because someone updated them on master. The cache works well, except with 1000 devs something will always be invalidated on master.
2. The documentation sucks. It's written to explain concepts and how things work, with no examples of how to actually complete the tasks you care about. Of course, that wouldn't help either, because the Bazel setup you're using is heavily customised.
3. Everyone on your team now has to learn yet another DSL, and a bunch of commands to run. I would easily spend 5 hours per week either waiting for, or debugging some issue in bazel.
All this for what benefit? Everyone's on the same version of a dependency? Not even sure this is a desirable property.
Also the monorepo is slow as jelly to work with and many tools or editors struggle to open it. Good luck getting code completion to work well too.
It's one of those ideas that are nice in theory, but awful in practice. It's possible that with enough manpower you may be able to use it effectively, but we certainly were not.
[+] [-] badoongi|3 years ago|reply
Bazel takes on dependency management, which is probably an improvement for a C++ codebase where there is no de facto package manager. For modern languages like Go, where a package manager is already widely adopted by the community, it's usually just a pain. E.g. Bazel's offering for Go relies on generating "Bazel configurations" for the repositories to fetch. This alternative definition of dependencies is not what the existing Go tooling expects, so to get the dev tooling working properly you end up generating one configuration from the other, leaving two sources of truth and pain whenever there's somehow a mismatch.
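To illustrate the two-sources-of-truth problem (the module below is just an example): the same dependency ends up described once in `go.mod` and again in a Gazelle-generated `go_repository` rule, and the two must be kept in sync:

```python
# deps.bzl -- generated from go.mod by Gazelle's update-repos;
# a second, parallel description of what go.mod already says.
go_repository(
    name = "com_github_pkg_errors",
    importpath = "github.com/pkg/errors",
    sum = "h1:...",  # go.sum hash, elided here
    version = "v0.9.1",
)
```

If someone bumps the version in `go.mod` but forgets to regenerate this file (or vice versa), `go` tooling and Bazel silently build against different code.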
Bazel hermeticity is very nice in theory, in practice many of the existing toolchains used by companies that are using Bazel are non-hermetic, resulting in many companies stuck in the process of "migration to Bazel remote execution" forever.
Blaze works well in Google's monorepo where all the dependencies are checked in (vendored); the WORKSPACE file was an afterthought when Bazel was open-sourced, and the whole process of fetching remote dependencies becomes a pain in practice for big monorepos (I just want to build this small Go utility with `bazel build //simple:simple`, and you end up waiting for a whole bunch of Python dependencies you don't need to be downloaded).
And this is all before talking about Javascript, if your JS codebase wasn't originally designed the way Bazel expects it you're probably up for some fun.
[+] [-] raphlinus|3 years ago|reply
Conversely, as many comments here observe, it is terrible as a small build system. There, you want to be able to get started quickly and pull in dependencies without thinking too hard. A simple but easy to understand approach (even one based on make) might work. This is just a different problem than what Bazel solves.
My personal opinion (and I should emphasize, I am absolutely not speaking for anyone here, least of all Google) is that the best way forward is to take the ideas of Bazel (hermetic and deterministic builds) and package them as a good small build system, perhaps even compatible with Bazel so you don't have to rewrite build rules all the time. I also think compilers and tools can and should evolve to become good citizens in such a world. But I have no idea how things will go, it's equally plausible the entire space of build systems will continue to suck as they have for decades.
[1]: http://neilmitchell.blogspot.com/2021/09/small-project-build...
[2]: https://neilmitchell.blogspot.com/2021/09/reflecting-on-shak...
[3]: https://neilmitchell.blogspot.com/2021/09/huge-project-build...
[+] [-] bobsomers|3 years ago|reply
It's on my list of things that I will inevitably never get to.
[+] [-] pbiswal|3 years ago|reply
How does https://please.build measure up?
[+] [-] EdwardDiego|3 years ago|reply
[+] [-] mplewis9z|3 years ago|reply
It's great at being able to offload work to a remote server, but in my opinion that should never be the only way you can get work done. Local development should always be the default, with remote execution being an _option_ when available.
[+] [-] bmohlenhoff|3 years ago|reply
The existing codebase I was working with did not lay out its dependencies and files in the manner expected by Bazel so dealing with dependency/include hell was frustrating.
Then there was a large portion of the project that depended on generated code for its interfaces (i.e. similar to protobuf but slightly different), and trying to dive into the Bazel rules and toolchain with no other Bazel experience was not fun. I attempted to build off of the protobuf implementation but kept finding additional layers to the onion that didn't exactly translate to the non-protobuf tooling. The project documentation seemed out of date in this area (i.e. major differences in the rules engine between 1.0 and subsequent versions) and I couldn't find many examples to look at other than overly simplified toy examples.
All in all a frustrating experience. I could not even get far enough along to compile the interfaces for the project.
[+] [-] sa46|3 years ago|reply
The good:
- Amazing for Go backends. I can push a reproducible Docker image to Kubernetes with a single Bazel command. I can run our entire backend with a single command that will work on any developer's box.
- Amazing for testing. All of our backends tests use a fully independent Postgres database managed by Bazel. It's really nice not having to worry about shared state in databases across tests.
- We can skip Docker on macOS for development which provides on the order of a 10x speedup for tests.
- BuildBuddy provides a really nice CI experience with remote execution. Bazel tests are structured so I can see exactly which tests failed without trawling through thousands of lines of log output. I've heard good things about EngFlow but BuildBuddy was free to start.
- Really nice for schema driven codegen like protobufs.
The bad:
- Bazel is much too hard for TypeScript and JavaScript. We don't use Bazel for our frontend. New bundlers like Vite are much faster and have a developer experience that's hard to replicate with Bazel. Aspect.dev is doing some work on this front. One large hurdle is that there's no automatic BUILD-file dependency updater like Gazelle for Go.
- Windows support is too hard largely because most third party dependencies don't work well with Windows.
- Third party dependencies are still painful. There's ongoing work with bzlmod but my impression is that it won't be usable for a couple of years.
- Getting started was incredibly painful. However, the ongoing maintenance burden is a few hours per month.
[+] [-] lhorie|3 years ago|reply
I built this[0] for the Bazel + yarn setup that we use at Uber. We currently manage a 1000+ package monorepo with it.
[0] https://github.com/uber-web/jazelle
[+] [-] badoongi|3 years ago|reply
Curious: wouldn't `go run` give you the same? Pure Go code is supposed to be portable, unless you have cgo deps, I guess?
> I can push a reproducible Docker image to Kubernetes with a single Bazel command.
That's definitely an upside over what would otherwise probably default to a combination of Dockerfiles and scripts/Makefiles. Is it worth bringing in the massive thing that is Bazel? Depends, I guess.
I'm curious: would you say your experience with Go IDEs / gopls is degraded? Did you do anything special to make it good? I often feel like development is more clunky, and I often just give up on the nice-to-haves of a language server. E.g. often some dependencies in the IDE aren't properly indexed; I could probably get Bazel to do some fetching and reindexing to get it working, but that takes 3-4 minutes, so I often choose to live with things appearing "broken" in the IDE and getting fewer IDE features.
[+] [-] jesseschalken|3 years ago|reply
Integration testing is a breeze through data dependencies. The reproducibility guarantees mean we can reference container image SHAs in our Terraform, and if the image didn't change, the deploy is a no-op.
Bazel is an outstanding build system that handily solves a lot of practical problems in software engineering. Not just "at scale". Practical problems at any scale.
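A sketch of what that digest-pinning pattern can look like on the Terraform side (resource and variable names here are hypothetical, not from the comment):

```terraform
# Hypothetical ECS task definition pinning the Bazel-built image by digest.
# If the digest is unchanged between builds, `terraform plan` shows no diff,
# so the deploy is a no-op.
resource "aws_ecs_task_definition" "web" {
  family = "web"
  container_definitions = jsonencode([{
    name  = "web"
    image = "123456789012.dkr.ecr.us-east-1.amazonaws.com/web@sha256:${var.web_image_digest}"
  }])
}
```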
[+] [-] EdSchouten|3 years ago|reply
https://github.com/bazelbuild/remote-apis
In my opinion the protocol is fairly well designed.
[+] [-] oftenwrong|3 years ago|reply
https://sluongng.hashnode.dev/
Tools like Earthly (and other BuildKit HLB frontend languages) will help more teams get some of the main benefits of Bazel with a lower bar of complexity. If you already know how to write Dockerfiles and Makefiles, you can write Earthfiles without much additional learning necessary. It provides a build that is repeatable, incremental, self-contained, never dirty, shared-cacheable, et cetera.
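A rough sketch of that "Dockerfile plus Makefile" feel (target names and base images here are made up, and the exact syntax may differ by Earthly version):

```dockerfile
# Earthfile -- hypothetical example: Make-like targets, Dockerfile-like steps.
VERSION 0.6

build:
    FROM golang:1.19
    WORKDIR /src
    COPY . .
    RUN go build -o /out/app ./cmd/app
    SAVE ARTIFACT /out/app

docker:
    FROM alpine:3.16
    # Copy the artifact produced by the `build` target above.
    COPY +build/app /usr/local/bin/app
    ENTRYPOINT ["/usr/local/bin/app"]
    SAVE IMAGE myapp:latest
```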
[+] [-] cstrahan|3 years ago|reply
See https://github.com/bazelbuild/rules_go/issues/1486
Not very confidence inspiring when Google’s build system falls over when you combine three technologies that are used commonly throughout Google’s code base (two of which were created by Google).
If you’re Google, sure, use Bazel. Otherwise, I wouldn’t recommend it. Google will cater to their needs and their needs only — putting the code out in the open means you get the privilege of sharing in their tech debt, and if something isn’t working, you can contribute your labor to them for free.
No thanks :)
[+] [-] vlovich123|3 years ago|reply
[+] [-] mattboardman|3 years ago|reply
[+] [-] dimator|3 years ago|reply
https://bazel.build/build/bzlmod
[+] [-] n0us|3 years ago|reply
[+] [-] lxe|3 years ago|reply
[+] [-] steeve|3 years ago|reply
If possible, start fresh.
I don’t see myself working without it from now on.
[+] [-] tdeck|3 years ago|reply
[+] [-] jxramos|3 years ago|reply
Bazel is not just a build toolchain; it is a test toolchain as well. The problem it solves with tests, running only the tests that are relevant and applicable to the changes being pushed, is something many people have dreamed about before, or approximated with crude homegrown dependency tracking. If my test did not print out `(cached)`, I know something in the dependency graph was disturbed in some way that may be completely non-obvious to me, especially in a gigantic repo.
Bazel query is also worth mentioning: it can visualize dependencies via its XML output, and it can enforce architectural barriers by asserting that `somepath` queries between two targets that should not meet in the middle come back empty.
It would be neat one day if Bazel integrated some semantic, logical level of dependency checking beyond the physical, but that may be too expensive an operation.
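As a concrete sketch of those query-based checks (the target labels here are made up), run inside a Bazel workspace:

```shell
# Dump the dependency graph of a target as XML for visualization:
bazel query 'deps(//core:lib)' --output xml

# Enforce an architectural barrier: this query should print nothing.
# Any path it prints is a chain of dependency edges violating the rule
# that //core:lib must never reach //ui:widgets.
bazel query 'somepath(//core:lib, //ui:widgets)'
```

A CI job can fail the build whenever the second query produces non-empty output.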