Show HN: Accelerated Docker builds on your local machine with Depot (YC W23)
The launch blog post: https://depot.dev/blog/local-builds
Depot is a hosted container build service - we run fully managed Intel and Arm remote build machines in AWS, with large instance sizes and SSD cache disks. The machines run BuildKit, the build engine that powers Docker, so generally anything you can `docker build`, you can also `depot build`.
Most people use Depot in CI, but you could always run `depot build` from your local machine as well. That performs the build on the remote builder, with its fast hardware and fast datacenter network.
But to get the container back to your local machine, BuildKit would transfer the entire image on every build, including base image layers, because it wasn’t aware of which layers already existed on your device.
The new release fixes this! To make it work, we replaced BuildKit’s `--load`: the Depot CLI itself now serves the Docker registry API on a local port, and we ask Docker to pull the image from that localhost registry. The CLI in turn intercepts the requests for layers and fetches them directly using BuildKit’s content API.
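To make the shape of that concrete, here’s a rough sketch in Go. This is not our actual code: the routes are the standard registry v2 API, but `fetchFromBuilder` is an illustrative stand-in for reading blobs over BuildKit’s content API, and manifest serving is omitted for brevity.

```go
package main

import (
	"fmt"
	"net"
	"net/http"
	"strings"
)

// fetchFromBuilder stands in for reading a blob by digest over
// BuildKit's content API on the remote builder (not implemented here).
func fetchFromBuilder(digest string) ([]byte, error) {
	return nil, fmt.Errorf("sketch only, no builder attached: %s", digest)
}

func main() {
	mux := http.NewServeMux()

	// Docker probes /v2/ to confirm the registry speaks the v2 API.
	// (A real implementation also serves /v2/<name>/manifests/<ref>.)
	mux.HandleFunc("/v2/", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Layer and config blobs: /v2/<name>/blobs/<digest>. Docker only
	// requests the digests it doesn't already have in its local store,
	// which is where the bandwidth savings come from.
	mux.HandleFunc("/v2/img/blobs/", func(w http.ResponseWriter, r *http.Request) {
		digest := strings.TrimPrefix(r.URL.Path, "/v2/img/blobs/")
		data, err := fetchFromBuilder(digest)
		if err != nil {
			http.Error(w, err.Error(), http.StatusNotFound)
			return
		}
		w.Header().Set("Docker-Content-Digest", digest)
		w.Header().Set("Content-Length", fmt.Sprint(len(data)))
		if r.Method == http.MethodHead {
			return // existence check, no body needed
		}
		w.Write(data)
	})

	// Bind an ephemeral loopback port; the registry only lives for one pull.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	fmt.Println("temporary registry:", ln.Addr())
	http.Serve(ln, mux)
}
```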
This means Docker only asks for the layers it needs! That speeds up both local builds, where you only download changed layers, and CI, where we can skip building an expensive tarball of the whole image every time!
We ran into one major obstacle in early testing: the machine running the Docker daemon might not be the same machine running the `depot build` command. Notably, CircleCI uses a remote Docker daemon, so asking it to pull from localhost never reaches the CLI’s temporary registry.
For this, we built a "helper" container that the CLI launches to run the HTTP server portion of the temporary registry. Since it’s launched as a container, it runs on the same machine as the Docker daemon, so localhost is reachable. The Depot CLI then communicates with the helper container over stdio, receiving requests for layers and sending their contents back over a simple custom transport protocol.
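The transport can be very simple. Here’s a sketch of one way to frame it; the length-prefixed format below is invented for illustration and isn’t our actual wire format:

```go
// Each message is a 4-byte big-endian length prefix followed by that
// many payload bytes. Requests carry a layer digest; responses carry
// the blob contents.
package main

import (
	"encoding/binary"
	"fmt"
	"io"
)

func writeFrame(w io.Writer, payload []byte) error {
	var hdr [4]byte
	binary.BigEndian.PutUint32(hdr[:], uint32(len(payload)))
	if _, err := w.Write(hdr[:]); err != nil {
		return err
	}
	_, err := w.Write(payload)
	return err
}

func readFrame(r io.Reader) ([]byte, error) {
	var hdr [4]byte
	if _, err := io.ReadFull(r, hdr[:]); err != nil {
		return nil, err
	}
	payload := make([]byte, binary.BigEndian.Uint32(hdr[:]))
	_, err := io.ReadFull(r, payload)
	return payload, err
}

// cliSide answers one layer request: it reads a digest frame from the
// helper and replies with the blob (fetched, in reality, from the
// remote builder over BuildKit's content API).
func cliSide(fromHelper io.Reader, toHelper io.Writer, fetch func(string) []byte) error {
	req, err := readFrame(fromHelper)
	if err != nil {
		return err
	}
	return writeFrame(toHelper, fetch(string(req)))
}

func main() {
	// In-memory pipes stand in for the helper container's stdin/stdout.
	reqR, reqW := io.Pipe()
	respR, respW := io.Pipe()
	done := make(chan struct{})

	go func() {
		defer close(done)
		// Helper side: ask for a layer by digest, then read the blob.
		writeFrame(reqW, []byte("sha256:deadbeef")) // hypothetical digest
		blob, _ := readFrame(respR)
		fmt.Printf("helper got %d bytes\n", len(blob))
	}()

	fetch := func(digest string) []byte { return []byte("layer-bytes-for-" + digest) }
	if err := cliSide(reqR, respW, fetch); err != nil {
		fmt.Println("transport error:", err)
	}
	<-done
}
```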
This makes everything very efficient! One cool part about the remote build machines: you can share cache with anyone on your team who has access to the same project. If a teammate already built all or part of the container, your build just reuses the result. So in addition to building on fast remote builders instead of your local device, you can get cache hits on code you haven’t personally built yet.
We’d love for you to check it out, and are happy to answer any questions you have about technical details!
https://depot.dev/docs/guides/local-development
[+] [-] paulgb|2 years ago|reply
(btw, I always get suspicious when a Show HN post has a lot of praise in the comments, but I swear the Depot folks did not ask me to post anything and I only saw the post because I was checking HN)
[+] [-] aidos|2 years ago|reply
Totally seamless integration, and it solves a very real issue that I’ve had with Docker caching across our environments. We originally tried the Docker S3 cache, but it didn’t really work in practice. Depot is the answer.
When I ran into an issue last week, the guys responded and scheduled a call within minutes.
Depot are a team I’m happy to back with a product I’m very happy to pay for.
[+] [-] 0xbadcafebee|2 years ago|reply
Making a simple container with a simple app is easy. The devil's in the details. What if you want to pull from a private registry using temporary credentials tied to a specific user, then use different temporary credentials during the build to pull packages from a different private package repository, persist the package cache across different container builds, then push the images to a different remote registry with more temporary credentials, with multi-stage builds, without capturing secrets in the container or persisting environment variables or files specific to the credentials?
Now what if you wanted to do all that in a random K8s pod?
Yes, of course there are ways to do this, I've pretty much done it all. But I've spent a huge amount of time to figure out each and every step. I've seen dozens of people take the same amount of time to figure it all out, often taking years to truly gather up all the knowledge. You know what would be great? If I didn't have to do that. If somebody just said "Here, Product X does everything you will ever want. The docs explain everything. Now you have 600 hours of your life back.", I would say Take. My. Money. I don't even necessarily need a product, if someone could just write down how to do it, so I don't have to bang my shins for days to get one thing to work.
Fast builds are nice because I can run more builds, but easier builds are nicer because more people can work on containers faster.
[+] [-] poulpi|2 years ago|reply
Our Docker builds are getting slow despite using Kaniko. Does Depot have better caching than Kaniko?
How so?
[+] [-] jacobwg|2 years ago|reply
Both Kaniko and BuildKit can be run in rootless mode. We are not doing this; instead we give every builder an isolated VM, so builds are also a bit quicker, since they avoid some of the security tricks that rootless mode needs to work.
[+] [-] pugz|2 years ago|reply
My only complaint is more about GHA - I wish there was an easier way to build multiple unrelated images at the same time in a single GHA job. Running `depot build &` to background things is a bit fiddly when it comes to interleaved console output, exit codes, etc.
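To illustrate the fiddliness, this is roughly the kind of wrapper it takes to prefix each build’s output and collect exit codes (a sketch; the image names and paths are placeholders):

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"sync"
)

// buildOne runs `depot build` for one directory and prefixes every
// output line with the image name so interleaved logs stay readable.
func buildOne(name, dir string) error {
	cmd := exec.Command("depot", "build", "-t", name, dir)
	pipe, err := cmd.StdoutPipe()
	if err != nil {
		return err
	}
	cmd.Stderr = cmd.Stdout // merge stderr into the same pipe
	if err := cmd.Start(); err != nil {
		return err
	}
	sc := bufio.NewScanner(pipe)
	for sc.Scan() {
		fmt.Printf("[%s] %s\n", name, sc.Text())
	}
	return cmd.Wait() // non-nil if the build failed
}

func main() {
	builds := map[string]string{"api": "./api", "worker": "./worker"}
	var wg sync.WaitGroup
	errs := make(chan error, len(builds))
	for name, dir := range builds {
		wg.Add(1)
		go func(name, dir string) {
			defer wg.Done()
			if err := buildOne(name, dir); err != nil {
				errs <- fmt.Errorf("%s: %w", name, err)
			}
		}(name, dir)
	}
	wg.Wait()
	close(errs)
	for err := range errs {
		fmt.Println("build failed:", err)
	}
}
```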