
lewo | 3 years ago

> The key factor behind our decision was the realization that while Docker images are industry standard, moving around 100s of megabytes of images seems unnecessarily heavy-handed when we just need to synchronize a small change.

I think the culprit is more the GitHub Actions cache than Docker, since clean cache management seems hard to achieve there. I'm not sure about caching Docker image layers, but caching the Nix store with GitHub Actions is pretty complicated (I'm not even sure it's possible): this means we have to download all required Nix store paths on each run. But I consider this a limitation of the GitHub Actions cache.

So, did you consider using another CI, which offers better caching mechanisms?

With a CI able to preserve the Nix store (Hydra or Hercules [1], for instance), I think nix2container (author here) could also fit almost all of your requirements ("composability", reproducibility, isolation) and maybe provide better performance, because it is able to split your application into several layers [2][3].

Note I'm pretty sure a lot of Docker-based CIs also allow building Docker images efficiently.

[1] https://hercules-ci.com/

[2] https://grahamc.com/blog/nix-and-layered-docker-images

[3] https://github.com/nlewo/nix2container/blob/85670cab354f7df6...


FBISurveillance | 3 years ago

There's been a recent Launch HN of Depot.dev [1] - I've integrated it quickly into my GitHub Actions workflow and it's blazingly fast (13x speedups for me). It was also a drop-in replacement, since I was using Docker Bake and the Docker build action, and Depot mimics them almost fully (except the SBOM and provenance bits). It also works with Google Cloud Workload Identity Federation, so image pushes to Artifact Registry didn't need any tweaking.

[1] https://news.ycombinator.com/item?id=34898253

Disclaimer: not affiliated, a happy paying customer.
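For anyone curious what the swap looks like: Depot documents a drop-in `depot/build-push-action` mirroring Docker's action, so the change is roughly the sketch below. The action name, version, and `project` input are assumptions worth checking against Depot's docs; the project ID is a placeholder.

```yaml
# Before: the stock Docker action
# - uses: docker/build-push-action@v4
#   with:
#     context: .
#     push: true
#     tags: my-registry/app:latest

# After: Depot's drop-in replacement (project ID is a placeholder)
- uses: depot/build-push-action@v1
  with:
    project: abc123xyz
    context: .
    push: true
    tags: my-registry/app:latest
```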

shalabhc | 3 years ago

Thanks for the interesting links - I'll check them out! We would need not just another CI but also another container platform, because launching a Docker container is also slow.

Irrespective of the CI, I believe all cached Docker layers will need to be downloaded onto the build machine before the image can be rebuilt.

Still, I believe it is possible to build and deploy faster even with a "docker image only" design, and it's something we are still looking at. The question is what the lower bound is here - it would be hard to beat "sync a file to a warm container and run it". Pex gives us a pretty good lower bound that is also container platform agnostic.
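As a rough illustration of that "sync a file and run it" loop (hosts and paths are placeholders; the `pex` flags are from its CLI but worth double-checking):

```shell
# Build a single self-contained PEX file from the project
# (-r requirements, -D source dir, -e entry point, -o output)
pex -r requirements.txt -D src -e app:main -o app.pex

# "Deploy" = sync one file to an already-warm machine and run it
rsync app.pex builder@warm-host:/srv/app.pex
ssh builder@warm-host /srv/app.pex
```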

robertlagrant | 3 years ago

> I believe all cached Docker layers will need to be downloaded onto the build machine before it can be rebuilt

Docker making some sort of layer-sharing mechanism that constantly distributes layers to all build runners would be worth some cash, I reckon.
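BuildKit's registry cache backend gets partway there today: each runner pushes and pulls shared cache layers through an ordinary registry instead of a dedicated distribution service. A sketch, with the registry name as a placeholder:

```shell
# Reuse cache layers published by previous runs, and publish
# this run's layers back (mode=max caches intermediate stages too)
docker buildx build \
  --cache-from type=registry,ref=my-registry/app:buildcache \
  --cache-to type=registry,ref=my-registry/app:buildcache,mode=max \
  -t my-registry/app:latest --push .
```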

lewo | 3 years ago

> would be hard to beat "sync a file to a warm container and run it"

It depends on the size of your Pex file (I don't think you mentioned it, and sorry if I missed the info). With a Docker/OCI image, it would be possible to create a layer with only the Python files of your application (without the dependencies and interpreter). (I say "would be possible" because that's currently not easy to achieve with Nix, for instance.)
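With plain Docker, that layer split is the usual trick of ordering the Dockerfile so dependencies land in their own cached layer and only the final COPY changes per commit. A minimal sketch, with file names and the entry point as placeholders:

```dockerfile
FROM python:3.11-slim
WORKDIR /app

# Dependency layer: rebuilt only when requirements.txt changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application layer: the only layer that changes on a typical code edit
COPY src/ ./src/
CMD ["python", "-m", "src.app"]
```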