Anyone have recommendations for an image cache? Native Kubernetes a plus.
What would be really nice is a system with mutating admission webhooks for pods which kicks off a job to mirror the image to a local registry and then replaces the image reference with the mirrored location.
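A minimal sketch of just the mutation half of that idea (the mirroring Job and the webhook-server plumbing are left out, and `registry.internal` is a hypothetical mirror hostname):

```python
import base64
import json

def mutate_pod(admission_review, mirror="registry.internal"):
    """Build an AdmissionReview response whose JSONPatch rewrites every
    container image reference to point at the local mirror."""
    request = admission_review["request"]
    containers = request["object"]["spec"].get("containers", [])
    # One replace op per container whose image isn't already mirrored.
    patch = [
        {
            "op": "replace",
            "path": f"/spec/containers/{i}/image",
            "value": f"{mirror}/{c['image']}",
        }
        for i, c in enumerate(containers)
        if not c["image"].startswith(mirror + "/")
    ]
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": request["uid"],
            "allowed": True,
            "patchType": "JSONPatch",
            # The API server expects the patch base64-encoded.
            "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
        },
    }
```

A real deployment would also need a MutatingWebhookConfiguration pointing at a TLS-serving endpoint; this only shows the JSONPatch that swaps the image references.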
We run a local (well, internal) mirror for "all" of these things, so we're basically never stuck. It mirrors CPAN, NPM, Composer, Docker, and other web repos. Helps with CI tooling as well.
Not Google Artifact Registry... Our Docker Hub pull-through mirror went down with the Docker Hub outage. Images were still there, but all image tags were gone.
I've been using https://github.com/enix/kube-image-keeper on some of my clusters - it's a local Docker registry running in-cluster, with a proxy and mutating webhooks. I also evaluated Spegel, but currently it isn't possible to set it up on GKE.
CNCF has Harbor [0], which I use at home and have deployed in a few clusters at work, and it works well as a pull-through cache. In /etc/containers/registries.conf it's just another line below any registry you want mirrored.
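The mirror stanza might look like this (`harbor.example.com` is a hypothetical hostname, with a proxy project named `hub`):

```toml
[[registry]]
location = "docker.io"

[[registry.mirror]]
location = "harbor.example.com/hub"
```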
Where `hub` is the name of the proxy you configured for, in this case, docker.io. It's not quite what you're asking for, but it can definitely be transparent to users. I think the bonus is that if you look at a podspec it's obvious where the image originates and you can pull it yourself on your machine; whereas if you've mutated the podspec, you have to rely on convention.
[0] https://goharbor.io/
Depending on what other (additional) features you're willing to accept, the GoHarbor[0] registry supports pull-through caching as well as mirroring. It's a nice registry that also supports other OCI artifacts like Helm charts, and does vulnerability scanning with "Interrogation Services" like Trivy.
I've been using it at home and work for a few years now. It might be a bit overkill if you just want a simple registry, but it's a really nice tool for anyone who can benefit from the other features.
[0] https://goharbor.io/
Basically it's k3s configured to use a local mirror, and that local mirror is running the Zot registry (https://zotregistry.dev/v2.1.8/). It is configured to automatically expire old images so my local hard drive doesn't fill up.
I’ll admit I haven’t checked before posting, perhaps an admin can merge both submissions and change the URL on the one you linked to the one in this submission.
GitHub Actions buildx also going down is a really unfortunate unintended consequence. It would be great if we could mirror away from Docker entirely at this point, but I digress.
I didn't even really realize it was a SPOF in my deploy chain. I figured at least most of it would be cached locally. Nope, can't deploy.
I don't work on mission-critical software (nor do I have anyone to answer to) so it's not the end of the world, but has me wondering what my alternate deployment routes are. Is there a mirror registry with all the same basic images? (node/alpine)
I suppose the fact that I didn't notice before says wonderful things about its reliability.
https://gallery.ecr.aws/
I guess the best way would be to have a self-hosted pull-through registry with a cache. This way you'd have all required images ready even when Docker Hub is offline.
https://hub.docker.com/_/registry
Your git provider probably also has a container registry service built in.
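As a sketch: the open-source `registry` image supports exactly this via a `proxy` section in its config.yml (credentials here are placeholders):

```yaml
# config.yml fragment: act as a pull-through cache for Docker Hub
proxy:
  remoteurl: https://registry-1.docker.io
  # optional; anonymous pulls still count against Hub rate limits
  # username: mirroruser
  # password: mirrorpass
```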
Unfortunately that does not help in an outage because you cannot fill the cache now.
> wondering what my alternate deployment routes are
If the stakes are low and you don't have any specific need for a persistent registry then you could skip it entirely and push images to production from wherever they are built.
This could be as simple as `docker save`/`scp`/`docker load`, or as fancy as running an ephemeral registry to get layer caching like you have with `docker push`/`docker pull`[1].
[1]: https://stackoverflow.com/a/79758446/3625
It's a bit stupid that I can't restart my container (on Coolify) because pulling the image fails, even though I'm already running it. So I do have the image; I just need to restart the Node.js process...
Also, isn't it weird that it takes so long to fix given the magnitude of the issue? Already down for 3 hours.
We chose to move to GitLab's container registry for all the images we use. It's pretty easy to do and I'm glad we did. We used to only use it for our own builds.
The package registry is also nice. I only wish they would get out of the "experimental" status for apt mirror support.
I was hoping Google Cloud Artifact Registry pull-through caching would help. Alas, it does not.
I can see an image tag available in the cache in my project on cloud.google.com, but after attempting to pull from the cache (and failing) the image is deleted from GAR :(
> "When a pull is attempted with a tag, the Registry checks the remote to ensure if it has the latest version of the requested content. Otherwise, it fetches and caches the latest content."
So if the authentication service is down, it might also affect the caching service.
In our CI, setting up the Docker buildx driver to use the Artifact Registry pull-through cache apparently involves an auth transaction with Docker Hub, which fails out.
The images I use the most, we pull and push to our own internal registry, so we have full control.
There are still some we pull from Docker Hub, especially in the build process of our own images.
To work around that, on AWS, you can prefix the image with public.ecr.aws/docker/library/ for example public.ecr.aws/docker/library/python:3.12 and it will pull from AWS's mirror of Docker Hub.
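For example, in a pod spec the change is just the image prefix (the container name is illustrative):

```yaml
containers:
  - name: app
    # was: image: python:3.12  (implicit Docker Hub)
    image: public.ecr.aws/docker/library/python:3.12
```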
Someone mentioned Artifactory, but it's honestly not needed. I would very highly recommend an architecture where you build everything into a Docker image and push it to an internal container registry (like ECR; all public clouds have one) for all production deployments. This way, outages only affect your build/deploy pipeline.
You pull the images you want to use, preferably with some automated process, then push them to your own repo. And always use your own repo when pulling for dev/production. It saves you from images disappearing as well.
Hard to tell if this is /s or not. Nobody is forcing you to run images straight from Docker Hub lol. Every host keeps the images already on it. Running an in-house registry is also a good idea.