(no title)
yata69420 | 3 years ago
Once you have Bazel, you can distribute the workload so that thousands of machines each produce artifacts to be shared with the other build machines.
Then you can set up your dev machine to rely on those caches, so your local builds either use everything directly from cache or instruct a remote builder to produce the artifact for you.
No matter what you change, because the dependencies are graphed precisely, you only need to rebuild a very tiny set of artifacts impacted by your change.
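The setup being described boils down to a couple of lines of config. As a rough sketch (the flag names are real Bazel options, but the endpoints are hypothetical):

```
# .bazelrc on a dev machine -- cache and executor addresses are made up
build --remote_cache=grpc://cache.example.com:9092
build --remote_executor=grpc://remote.example.com:8980
```

With that, a local `bazel build` first checks the shared cache for each action and only falls back to building (locally or on the remote executor) on a miss.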
Of course, this doesn't actually work in practice.
Your builds probably aren't deterministic, so the artifacts in your graph won't be stable either, causing lots of spurious rebuilds. It may also work great for something like Java, which produces class files, but provide no caching at all for Ruby.
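The non-determinism problem is easy to see if you sketch how a content-addressed cache keys its work. This is a toy model, not Bazel's actual implementation: an action's cache key is derived from its command and the digests of its inputs, so one timestamp stamped into an artifact changes every downstream key and cascades into rebuilds.

```python
import hashlib

def digest(artifact: bytes) -> str:
    """Content digest of a produced artifact."""
    return hashlib.sha256(artifact).hexdigest()

def cache_key(command: str, input_digests: list[str]) -> str:
    """Key an action by its command line plus the digests of its inputs,
    roughly how a content-addressed build cache identifies work."""
    h = hashlib.sha256()
    h.update(command.encode())
    for d in sorted(input_digests):
        h.update(d.encode())
    return h.hexdigest()

# A deterministic step produces byte-identical output, so the downstream
# action's cache key matches across builds: cache hit.
stable = digest(b"class Foo {}")
assert cache_key("javac Bar", [stable]) == cache_key("javac Bar", [stable])

# A step that stamps a timestamp into its output gets a new digest every
# build, so every consumer's key changes too: a cache-miss cascade.
build_1 = digest(b"class Foo {} // built 2022-01-01T00:00:01")
build_2 = digest(b"class Foo {} // built 2022-01-01T00:00:02")
assert cache_key("javac Bar", [build_1]) != cache_key("javac Bar", [build_2])
```

The fix in practice is scrubbing every source of non-determinism (timestamps, absolute paths, archive ordering) out of every rule, which is exactly the ongoing maintenance cost being described.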
Debugging and supporting it is a full-time job for a team of engineers, a cost that dramatically outweighs the cost of keeping your projects sensibly sized.
You might think that as the project gets larger, it's totally worth using Bazel for that sweet caching. But in reality the graph construction and querying become so bloated that just figuring out which targets need to be rebuilt becomes a full-time engineering effort, one that breaks constantly with tooling upgrades.
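For a sense of scale, the traversal that tooling runs constantly is a reverse-dependency query over the whole repo. `rdeps` is a real Bazel query function; the target name here is hypothetical:

```
# Which targets are affected by a change to //lib/auth?
# rdeps walks the reverse dependency edges of the entire repo (//...),
# the exact traversal that gets slow as the graph bloats.
bazel query "rdeps(//..., //lib/auth:auth)"
```

On a big monorepo that query has to load and evaluate every BUILD file in scope, which is where the "figuring out what to rebuild" cost comes from.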
Also, the plugin ecosystem is just poor.
Bazel is the perfect storm of computer scientists loving big graphs, Google exporting an open source project and then rebuilding it internally, and inexperienced engineers being sold on a tech as being obviously right because all the big players use it.
bobsomers | 3 years ago
But if you actually have to get something done for your business to exist, it's a lot of unrelated work to keep the beast fed and happy.