top | item 39956327

Cache is King: A guide for Docker layer caching in GitHub Actions

210 points | adityamaru | 2 years ago | blacksmith.sh

109 comments

[+] solatic|2 years ago|reply
As someone who spent way too much time chasing this rabbit, the real answer is Just Don't. GitHub Actions is a CI system that makes it easy to get started with simple CI needs but runs into hard problems as soon as you have more advanced needs. Docker caching is one of those advanced needs. If you have non-trivial Docker builds then you simply need on-disk local caching, period.

Either use Depot or switch to self-hosted runners with large disks.

[+] Arbortheus|2 years ago|reply
GitHub really needs to invest in its CI. It's a second-class feature of the platform, yet it should be the beating heart of every SaaS team.

GitLab CI is leaps and bounds ahead.

[+] matsemann|2 years ago|reply
> as soon as you have more advanced needs

If there's one thing I've learned over the years, it's that we really seldom have advanced needs. Mostly we just want things to work a certain way, and we'll fight systems to make them behave so. It's easier to just leave it be. Like Maven vs Gradle: yes, Gradle can do everything, but if you need that, it's worth taking a step back and assessing why the normal Maven flow won't work. What's so special about our app compared to the millions working just fine out of the box?

[+] kylegalbraith|2 years ago|reply
Thanks for the shout-out regarding Depot, really appreciate it. We came to the same conclusion regarding Docker layer caching, which is why we created Depot in the first place. The limitations and performance of the GitHub Actions cache leave a lot to be desired.
[+] crote|2 years ago|reply
> GitHub Actions is a CI system that makes it easy to get started

It's not even that! Coming from GitLab I was quite surprised at how poor the "getting started" experience was. Rather than a simple "on push, run command X" you first have to do a deep dive into actions/events/workflows/jobs/runs, and then figure out what kind of weird tooling is used for trivial things like checking out your code, or storing artifacts.

And then you try to unify your pipeline across several projects, because that's what GitHub is heavily promoting with the whole "uses: actions/checkout" reuse thing, but it turns out to be a huge hassle to get working because nothing behaves the way you'd expect.

In the end I did get GHA to do what I was already doing in GitLab, but it took me ten times as long as the original setup did. I believe GHA is flexible and powerful enough to be well-suited for medium-sized companies, but it's neither easy enough for small companies nor powerful enough for large ones. It's one of the few GitHub features I genuinely dislike using.

[+] cqqxo4zV46cp|2 years ago|reply
I got it working, with intermediate layers, too. All to find that I didn't see a material performance benefit after taking into account how long it takes to pull from and push to the cache.
[+] mhitza|2 years ago|reply
On one project that was a bit more involved, I pulled the latest image I've built from the registry before starting the build. That worked well enough for caching in my case.
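
For anyone who wants to try the same trick, a minimal sketch as a workflow step (the image name is a placeholder; note that with BuildKit enabled, the pulled image must have been built with inline cache metadata for --cache-from to produce hits):

    - name: Build with pull-through cache
      run: |
        docker pull ghcr.io/myorg/app:latest || true
        docker build \
          --build-arg BUILDKIT_INLINE_CACHE=1 \
          --cache-from ghcr.io/myorg/app:latest \
          -t ghcr.io/myorg/app:latest .

The `|| true` lets the very first build (when no image exists yet) proceed uncached instead of failing.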
[+] adityamaru|2 years ago|reply
Totally agree. GitHub Actions has done an excellent job at this lowest layer of the build pipeline, but it is woefully inadequate the minute your org grows north of 50 engineers.
[+] candiddevmike|2 years ago|reply
Additionally, docker build refuses to cause any side effects to the host system. This makes any kind of caching difficult by design. IMO, if possible, consider doing your build outside of docker and just copying it into a scratch container...
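
A sketch of that pattern for a statically linked binary (names and paths are made up): the compile happens outside Docker, where normal toolchain caching applies, and the Dockerfile just packages the result:

    # built beforehand on the host / CI step with:
    #   CGO_ENABLED=0 go build -o app ./cmd/app
    FROM scratch
    COPY app /app
    ENTRYPOINT ["/app"]
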
[+] bushbaba|2 years ago|reply
Can’t you use s3 + mountpoint for most distributed CI cache needs?
[+] lispisok|1 year ago|reply
I use self-hosted runners. It wasn't even because we could have large disks for caching: GitHub's pricing for their runners is so bad it was a no-brainer to host our own.
[+] ithkuil|2 years ago|reply
Huge shout-out to depot! It works really well!
[+] cpfohl|2 years ago|reply
This is wild. I've spent the last three weeks working on this stuff for two separate clients.

Important note if you're taking advice: cache-from and cache-to both accept multiple values. cache-to simply outputs the cache data to all the destinations specified; cache-from looks for cache hits in the sources in order. You can do some clever stuff to maximize cache hits with the least amount of downloading using the right combination.
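To make that concrete, a hedged sketch with docker/build-push-action (image refs are placeholders): the cache-from entries are consulted in order, so a branch-specific cache can fall back to the main-branch cache, while cache-to writes only the branch cache:

    - uses: docker/build-push-action@v5
      with:
        push: true
        tags: ghcr.io/myorg/app:latest
        cache-from: |
          type=registry,ref=ghcr.io/myorg/app:cache-${{ github.ref_name }}
          type=registry,ref=ghcr.io/myorg/app:cache-main
        cache-to: type=registry,ref=ghcr.io/myorg/app:cache-${{ github.ref_name }},mode=max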

[+] boronine|2 years ago|reply
I've spent days trying all of these solutions at my company. They all suck: they're slow, and only successful builds get their layers cached. This is a dead end. The only workable solution is to have a self-hosted runner with a big disk.
[+] kylegalbraith|2 years ago|reply
This is definitely a direction to try. But if it's faster Docker image builds and a layer caching system that actually works you're after, you should definitely try out Depot. We automatically persist the layer cache to NVMe devices and orchestrate it to be immediately available across builds.
[+] teaearlgraycold|2 years ago|reply
I use namespace’s action runners for this (just a customer, not affiliated in any way). They’re a company with a pretty good product stack. Although the web UI is annoyingly barebones.
[+] mysza|1 year ago|reply
Can you share an example GitHub Actions workflow? When I use docker/setup-buildx-action and a local runner I can't make it use the cache. I think it's the "docker-container" driver's fault.
[+] aayushshah15|2 years ago|reply
How do you ensure isolation between runs on a self-hosted runner that way?
[+] mshekow|2 years ago|reply
I took a detailed look at Docker's caching mechanism (actually: BuildKit) in this article https://www.augmentedmind.de/2023/11/19/advanced-buildkit-ca...

There I also explain that IF you use a registry cache import/export, you should use the same registry to which you are also pushing your actual image, and use the "image-manifest=true" option (especially if you are targeting GHCR; on Docker Hub "image-manifest=true" would not be necessary).
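Concretely, that combination might look like this in a workflow step (registry and image refs are placeholders):

    - uses: docker/build-push-action@v5
      with:
        push: true
        tags: myregistry.example.com/team/app:latest
        cache-from: type=registry,ref=myregistry.example.com/team/app:buildcache
        cache-to: type=registry,ref=myregistry.example.com/team/app:buildcache,image-manifest=true,mode=max
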

[+] daulis|1 year ago|reply
After years of lurking, I made an account to reply to this

"image-manifest=true" was the magic parameter I needed to make this work with a non-DockerHub registry (Artifactory). I spent a lot of time fighting this and its non-obvious error messages. Thank you!!

We use a multi-stage build for a DevContainer environment, and the final image is quite large (for various reasons), so a better caching strategy really helps in our use case (smaller incremental image updates, smaller downloads for developers, less storage in the repository, etc)

[+] remram|2 years ago|reply
Thanks, this is a very thorough explanation.

Is there really no way to cache the 'cachemount' directories?

[+] tkiolp4|2 years ago|reply
Docker has been among us for years. Why isn't efficient caching already implemented out of the box? It's a leaky abstraction that users have to deal with. Annoying at best.
[+] omeid2|2 years ago|reply
Efficient caching already exists where caching makes sense: layers are meaningfully cached.

What most people need but don't use is base layers that are upstream of their code repo and released regularly, not at each commit.

Containerisation has made reproducible environments so easy that people want to reproduce them on every CI run, which is a bit too much.
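
One way to put that advice into practice (image names are illustrative): publish a base image on its own release cadence, and let the per-commit build only add the thin application layers on top:

    # app Dockerfile, rebuilt on every commit; the base (OS packages,
    # language runtime, dependencies) is rebuilt weekly, not per push
    FROM ghcr.io/myorg/app-base:2024.05
    COPY . /srv/app
    CMD ["python", "/srv/app/main.py"]
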

[+] jpgvm|2 years ago|reply
The trick to Docker (well OCI) images is never under any circumstance use `docker build` or anything based on it. Dockerfile is your enemy.

Use tools like Bazel + rules_oci or Gradle + jib and never spend time thinking about image builds taking time at all.

[+] Arbortheus|2 years ago|reply
Please no! Do not use Bazel unless you have a platform team with multiple people who know how to use it - e.g. large Google-like teams.

We had “the Bazel guy” in our mid-sized company that Bazelified so many build processes, then left.

It has been an absolute nightmare to maintain because no normal person has any experience with this tooling. It’s very esoteric. People in our company have reluctantly had to pick up Bazel tech debt tasks, like how the rules_docker package got randomly deprecated and replaced with rules_oci with a different API, which meant we could no longer update our Golang services to new versions of Go.

In the process we’ve broken CI, builds on Mac, had production outages, and all kinds of peculiarities and rollbacks needed that have been introduced because of an over-engineered esoteric build system that no one really cares about or wanted.

[+] user-|2 years ago|reply
I feel like I've seen so many new companies just providing cheaper GitHub Actions.
[+] ValtteriL|2 years ago|reply
Docker layer caching is one of the reasons I moved to Jenkins 2 years ago and have been very happy with it for the most part.

I only need to install utilities once, and all build time goes to building my software. It even integrates nicely with GitHub. Result: 50% faster feedback.

However, it needs a bit of initial housekeeping and discipline to use correctly. For example, using Jenkinsfiles is a must, and using containers as agents is desirable.

[+] notnmeyer|2 years ago|reply
This is pretty neat; it's been a while since I've tried caching layers with GHA. It used to be quite frustrating.

My previous experience was that in nearly all situations the time spent sending and retrieving cache layers over the network wound up making a shorter build step moot. Ultimately we said "fuck it" and focused on making builds faster without (Docker layer) caching.

[+] aayushshah15|2 years ago|reply
(blacksmith co-founder here)

It's unfortunate how much expertise and tinkering is required to get "incrementalism" in Docker builds in GitHub Actions. We're hoping to solve this with some of the stuff we have in the pipeline in the near future.

[+] damianh|2 years ago|reply
The fact that GitHub doesn't provide a better solution here has to be costing them money in network usage and extra agent time consumed. Right?
[+] spurin|1 year ago|reply
It would be possible to offload the caching to Docker Build Cloud transparently. It's part of the Docker subscription service, and every account gets 50 free minutes a month, so depending on usage you may be able to get this at zero cost.

With this approach you'd use buildx, and remotely they would manage and maintain the cache, among other benefits.

It does require a credit card signup (which takes $0 to mitigate fraud). Full transparency, I’m a Docker Captain and helped test this whilst it was called Hydrobuild.

[+] manx|2 years ago|reply
Earthly solves this really well: https://earthly.dev

They rethink Dockerfiles with really good caching support.

[+] oftenwrong|2 years ago|reply
The caching support is mostly the same. Both Earthly and Dockerfile are BuildKit frontends. BuildKit provides the caching mechanisms.

A possible exception is the "auto skip" feature for Earthly Cloud, since I do not know how that is implemented.

[+] glenjamin|2 years ago|reply
Docker layer caching is complicated!

CircleCI has an implementation that used to use a detachable disk, but that had issues with concurrency.

It's since been replaced with an approach that uses a Docker plugin under the hood to store layers in object storage:

https://circleci.com/docs/docker-layer-caching/

[+] remram|2 years ago|reply
Is it any better than the buildx cache, then, which also stores caches in object storage (via an OCI registry)?
[+] SJC_Hacker|1 year ago|reply
Smaller images are also another way to go. At my last company, image sizes were around 2-3 GB. I was able to prune that down to ~1.5 GB; Boost and a custom clang/llvm build were the major offenders.

There's quite a bit of cruft that can be pruned.

[+] tanepiper|2 years ago|reply
I have this set up in our pipeline; we also build the image early and use artifacts to move it between jobs. We've also just switched to self-hosted runners, so we might look into a shared disk.

But in the long run, as annoying as it was, our build pipelines dropped by quite a few minutes per build.

[+] adityajp|2 years ago|reply
(Co-founder of Blacksmith here)

Glad it worked really well for you.

What made you switch to self-hosted runners?

[+] ikekkdcjkfke|2 years ago|reply
Anybody have a build system that builds as fast or faster than locally?
[+] maxmcd|2 years ago|reply
I weep for this period of time where we don't have sticky disks readily available for builds. Uploading the layer cache each time is such a coarse and time-consuming way to cache things.

Maybe building from scratch every time is a good correctness decision? Maybe stale data on disks is a tricky enough issue to want to avoid entirely?

If you keep a stack of disks around and grab a free one when the job starts you'd end up with good speedup a lot of the time. If cost is an issue you can expire them quickly. I regularly see CI jobs spending >50% of their time downloading the same things, or compiling the same things, over and over. How many times have I triggered an action that compiled the exact same sqlite source code? Tens of thousands?

Maybe this is fine, I dunno.

[+] Cloudef|2 years ago|reply

    - uses: DeterminateSystems/nix-installer-action@main
    - uses: DeterminateSystems/magic-nix-cache-action@main
[+] bagels|2 years ago|reply
I think that bitbucket offers this out of the box on their CI product (pipelines)
[+] dboreham|2 years ago|reply
No!

So much time spent debunking such broken "caching" solutions.

Computers are very fast now. Use proper package/versioning systems (part of the problem here is that those are often also broken/badly designed).

[+] aayushshah15|2 years ago|reply
This is simply false. For starters, GitHub Actions runners by default run on Intel Haswell chips from 2014 (in some cases). Secondly, faster hardware doesn't obviate the need for caching, especially for Docker builds, where layer pulls are purely network-bound.
[+] kbolino|2 years ago|reply
"Computers are very fast now" is largely because of caching. The CPU has a cache, the disk drive has a cache, the OS has a cache, the HTTP client has a cache, the CDN serving the content has a cache, etc. There may be better ways to cache than at the level of Docker image layers, but no caching is the same as a cache miss on every request, which can be dozens, hundreds, or even thousands of times slower than a cache hit.