Docker limits unauthenticated pulls to 10/HR/IP from Docker Hub, from March 1

424 points | todsacerdoti | 1 year ago | docs.docker.com

448 comments

[+] solatic|1 year ago|reply
Can't believe the sense of entitlement in this thread. I guess people think bandwidth grows on trees.

For residential usage, unless you're in an apartment tower where all your neighbors are software engineers and you're all behind a CGNAT, you can still do a pull here and there for learning and other hobbyist purposes, which for Docker is a marketing expense to encourage uptake in commercial settings.

If you're in an office, you have an employer, and you're using the registry for commercial purposes, you should be paying to help keep your dependencies running. If you don't expect your power plant to give you electricity for free, why would you expect a commercial company to give you containers for free?

[+] vineyardmike|1 year ago|reply
Docker is a company, sure, and they’re entitled to compensation for their services, sure. That said, bandwidth is actually really cheap. Especially at that scale. Docker has publicly been struggling for cash for years. If they’re stuck on expensive clouds from a bygone VC era, that’s on them. Affordable bandwidth is available.

My main complaint is:

They built open source tools used all over the tech world. And within those tools they privileged their own container registry, and provided a decade or more of endless and free pulls. Countless other tools and workflows and experiences have been built on that free assumption of availability. Similarly, Linux distros have had built-in package management with free pulling for longer than I’ve been alive. To get that rug-pull for open-source software is deeply disappointing.

Not only that, but the actual software hosted on the platform is other people’s software. Being distributed for free. And now they’re rent-seeking on top of it and limiting access to it.

I assume most offices and large commercial businesses have caches and mirrors built into their tooling, but for indie developers and small businesses, storing a ton of binary blobs starts to add up. That’s IF they can even get the blobs the first time, since I imagine they could hit contention and queuing when using many packages.

And many people use docker who aren’t even really aware of what they’re doing - plenty of people (myself included) have a NAS or similar system with a docker-wrapping GUI pre-installed. My NAS doesn’t even give me the opportunity to log in to Docker Hub when pulling images. It’s effectively broken now if I’m on a CGNAT.

[+] rjst01|1 year ago|reply
Let me give you an alternative perspective.

My startup pays Docker for their registry hosting services, for our private registry. However, some of our production machines are not set up to authenticate against our account, because they only run public containers.

Because of this change, we now need to either make sure that every machine is authenticated, or take the risk of a production outage in case we do too many pulls at once.

If we had instead simply mirrored everything into a registry at a big cloud provider, we would never have paid docker a cent for the privilege of having unplanned work foisted upon us.
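
For anyone in the same boat, authenticating a fleet non-interactively is at least scriptable. A minimal sketch (the username and token variable are placeholders; Docker Hub access tokens can be scoped read-only):

    # on each machine, log in with a read-only access token instead of a password
    echo "$DOCKERHUB_TOKEN" | docker login -u my-org-user --password-stdin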

[+] wkat4242|1 year ago|reply
Hmm, yes, but if it's limited to 10 in an hour, that could be an issue even for hobbyists who update multiple dockers at the same time. For example, the excellent Matrix Ansible playbook pulls numerous images in a single update run, because every little feature is in a separate container. Same with Home Assistant add-ons. It's pretty easy to reach 10 in an hour, even though you may not pull any for a whole month afterwards. I only do this once a month, because most Matrix bridges only get updates at that rate.

I have to say though, 90% of the dockers I use aren't on Docker Hub anymore. Most of them reside on the GitHub container registry (ghcr.io) now. I don't know where the above playbook pulls from, though, as it's all automated in Ansible.

And really docker is so popular because of its ecosystem. There are many other container management platforms. I think that they are undermining their own value this way. Hobbyists will never pay for docker pulls but they do generate a lot of goodwill as most of us also work in IT. This works the other way around too. If we get frustrated with docker and start finding alternatives it's only a matter of time until we adopt them at work too.

If they have an issue with bandwidth costs they could just use the infrastructure of the many public mirrors available that also host most Linux distros etc. I'm sure they'd be happy to add publicly available dockers.

[+] blitzar|1 year ago|reply
The entitlement of ... the VC-powered power plant that reinvented and reimagined electricity, gave out free electricity and put all the competitors out of business, succeeded in monopolizing electricity, and then came looking for a payday so they can pad the accounts and 'exit', passing it off to the next round of suckers. Truly unbelievable.
[+] InsomniacL|1 year ago|reply
> Can't believe the sense of entitlement in this thread.

I don't use Docker so I genuinely don't know this...

Is the Docker Library built on the backs of volunteers and then used to sell paid subscriptions?

Does this commercial company expect volunteers to give them images for free, the same images that give their paid subscriptions value?

[+] jonhohle|1 year ago|reply
From a security and reproducibility perspective, you shouldn’t want to pull directly. I’ve used Artifactory in the past as a pass-through cache that can “promote” images, making them available to test and production environments as they go through whatever validation process is required. Then you know the images (or packages, or gems, or modules, or whatever you are deploying) have at least been tested, and an unpinned dependency isn’t going to surprise you in production.
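
A minimal sketch of the pattern (hostname and repository keys are hypothetical): with a remote repository in Artifactory acting as a pass-through cache of Docker Hub, clients never talk to docker.io directly:

    # pull through a hypothetical Artifactory remote repo that caches Docker Hub
    docker login artifactory.example.com
    docker pull artifactory.example.com/docker-remote/library/nginx:1.27

    # after validation, images get promoted server-side; production then pulls
    # only from the promoted repository
    docker pull artifactory.example.com/docker-prod/nginx:1.27
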
[+] azalemeth|1 year ago|reply
I'm behind CGNAT with half a city.

Limits per IPv4 address are really, really annoying. All I can do is flick on a VPN... which likely won't work either.

[+] ComputerGuru|1 year ago|reply
No one is mad merely because there is a capped free service and an unlimited paid service offering.

The ire is because of the rug pull. (I presume) you know that. It’s predatory behavior to build an entire ecosystem around your free offering (on the backs of OSS developers) then do the good old switcheroo.

[+] WhyNotHugo|1 year ago|reply
There’s plenty of folks behind a CGNAT, sometimes shared with thousands of others. And this is more common in regions where actually paying for this service is often too expensive.

I’ve also seen plenty of docker-compose files which pull this many images (typically small images).

I’m not saying that Docker Inc should provide free bandwidth, but let’s not also pretend that this won’t be an issue for a lot of users.

[+] thayne|1 year ago|reply
> unless you're in an apartment tower where all your neighbors are software engineers and you're all behind a CGNAT

Replace "apartment tower" with "CS department at a university", and you have a relatively common situation.

[+] qwertox|1 year ago|reply
Customary law exists for a reason.

If Docker explicitly offers a service for free, then users are well within their rights to use it for free. That’s not entitlement, that’s simply accepting an offer as it stands.

Of course, Docker has every right to change their pricing model at any time. But until they do, users are not wrong for expecting to continue using the service as advertised.

I've seen this "sense of entitlement" argument come up before, and to be clear: users expecting a company to honor its own offer isn’t entitlement, it’s just reasonable.

[+] zozbot234|1 year ago|reply
Is there an easy way of changing the default repository that's pulled from when you issue a 'docker pull <whatever>' command, or do you always have to make sure to execute 'docker pull <mycustomrepo.io/whatever>' explicitly?
[+] amonith|1 year ago|reply
You're absolutely right, but explaining the cost to the employer and/or the client and getting approvals even to use Docker will be a PITA. Currently, for the smaller clients of the software house I work for, we (normal employees) have been able to use Docker whenever we felt like it, without a manager's approval, to optimize deployment and maintenance costs on our side.
[+] Mekoloto|1 year ago|reply
The entitlement comes from the status quo.

If the power company had given me free energy for 15 years and then started charging, I would also be pissed. Rightly? No, but hey, that's not the issue.

Also with docker being the status quo for so long, it does hurt the ecosystem / beginners quite a lot.

[+] mikedelfino|1 year ago|reply
> why would you expect a commercial company to give you containers for free

Because they did. But you're right—they have no obligation to continue doing so. Now that you mention it, it also reminds me that GitHub has no such obligation either.

In a way, expecting free container images is similar to how we can download packages from non-profit Linux distributions or how those distributions retrieve the kernel tarball from its official website. So, I’m not sure whether it’s better for everyone to start paying Docker Hub for bandwidth individually or for container images to be hosted by a non-profit, supported by donations from those willing to contribute.

[+] boolemancer|1 year ago|reply
There's already a rate limit on pulls. All this does is make that rate limit more inconvenient by making it hourly instead of allowing you to amortize it over 6 hours.

10 per hour works out to 60 per 6 hours, lower than 100 per 6 hours, but not in any meaningful way from a bandwidth perspective, especially since image size isn't factored into these rate limits in any way.

If bandwidth is the real concern, why change to a more inconvenient time period for the rate limit rather than just lowering the existing rate limit to 60 per 6 hours?

[+] lovasoa|1 year ago|reply
If the electricity were generated by thousands of volunteers pedalling in their basement, then yes, I would expect the utility company not to be too greedy.
[+] hansmayer|1 year ago|reply
Not a huge fan of Docker as a company in general, but this is spot on. The Docker Hub free tier is still quite generous for private/hobby usage. If you are a professional user, you should have your own professional solution: either your own internal registry or a commercial SaaS registry.
[+] kcb|1 year ago|reply
Isn't this a problem even with a cache? Only being able to cache 10 images an hour is still horribly limiting.
[+] johnnyo|1 year ago|reply
My immediate thought was college kids on campus.

It’s basically your apartment building example (esp. something like the STEM dorms).

When this stuff breaks in the hours leading up to a homework assignment being due, it’s going to discourage the next generation of engineers from using it.

[+] mschuster91|1 year ago|reply
> I guess people think bandwidth grows on trees.

Bandwidth is cheap, especially at scale, unless you're in one of the large clouds that make a shitload of money gouging their customers on egress fees.

I'm not saying that Docker Inc should foot the bill for other multibillion-dollar companies, but the fact that, even after 8 years, it is still impossible to use authentication with the registry-mirrors option [1] is mind-boggling.

[1] https://github.com/moby/moby/issues/30880
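
For context, the option in question is the daemon-level registry-mirrors setting, which has no place to put credentials. A minimal sketch (the mirror URL is a placeholder):

    # point the daemon at a Hub mirror; note there is no credentials field
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "registry-mirrors": ["https://mirror.example.com"]
    }
    EOF
    sudo systemctl restart docker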

[+] est|1 year ago|reply
> I guess people think bandwidth grows on trees.

I think Docker started the bloated-image mess. Have you ever seen a project image under 100MB?

Guess packing everything with gzip isn't a good idea when size matters.

Docker Hub has a traffic problem, and so does every intranet image registry. It's slow. The culprit is Docker (and maybe people who won't bother to optimize).

[+] spockz|1 year ago|reply
Until you have this one weird buildpack thing that for some unfathomable reason keeps downloading all the toolchain layers, every time, for each build of the app.

Then again, it's good that this measure forces fixing that bad behaviour, but as a buildpack user you don't always know how to fix it.

[+] axegon_|1 year ago|reply
It kind of depends. To a degree you are right, but not entirely. For the past two months, for instance, I've been making a huge push to de-cloud-ify myself entirely and self-host everything. I have the bandwidth and I have the hardware that's needed. Having said that, I'm not working on it steadily, little by little, but in bursts whenever I have time. There were times when I was pulling 30 images an hour, and it's clearly a one-off thing. While corporations are certainly abusing Docker's generosity, in practice the number of people who pull hundreds of images on an hourly basis is astronomically low; it's most commonly one-off things, much like what I'm doing. I've worked in similar environments, and the abusers are the exception rather than the rule. The bottom line is, this genuinely feels like some half-assed biz-dev decision promising to cut costs by 20%. Been there, done that. In the long run, those quick cuts ended up costing a lot more.
[+] blinded|1 year ago|reply
100%

Adding auth to pulls is easy. Mirroring images internally is easy. Anyone who says otherwise is lazy.

[+] randomNumber7|1 year ago|reply
Idk. I have used stuff like GitHub, and I have pulled open software with apt-get, for years. So I got the impression that free usable services exist.
[+] j4nek|1 year ago|reply
> Can't believe the sense of entitlement in this thread. I guess people think bandwidth grows on trees.

Bandwidth is super cheap if you don't use any fancy public cloud services.

[+] aiiizzz|1 year ago|reply
The amount of work this creates fixing CI is going to be absurd.
[+] gregors|1 year ago|reply
So yeah, you can say it's entitlement, but if you build your business one way and then change the fundamental limits AFTER you've reached market saturation, you really shouldn't be shocked at complaints. It's their fault, because they fostered the previous user behavior.

People understand that bandwidth costs money, but that seems to have been priced into their previous strategy, or they did it knowingly as a loss leader to gain market share. If they knew this was a fundamental limitation, they should have addressed it years ago.

[+] sigy|1 year ago|reply
I host OSS images there, and I see no notice about how they will be affected. If they limit access to my published images, then it will be an issue. In that case the benefit and thus incentive for many of the projects which have made docker and docker hub pervasive goes away. Without that adoption, there would probably be no docker hub today.

This should help people understand a bit better why this feels a bit underhanded. The images are free, and I and many other OSS devs have used docker hub in partnership to provide access to software, often paying for the ability to publish there. In this case, any burden of extra cost was on the producer side.

Turning this into a way to "know" every user and extract some value from them is their prerogative, but it doesn't feel like good faith. It also feels a bit creepy, in the sense of "the user is the product".

[+] hedora|1 year ago|reply
For years, people have been trying to add a “override the default registry because docker hub is a single point of failure” option to the docker client.

Upstream has blocked it. A fork over this one little feature is long overdue.

[+] mattgreenrocks|1 year ago|reply
This gets at something I've never quite understood about Docker. And it might be a dumb question: why do we need a dedicated host for Docker images in the first place?

I can see the use case for base images: they're the canonical, trusted source of the image.

But for apps that are packaged? Not as much. I mean, if I'm using a PaaS, why can't I just upload my Docker image to them, and then they store it off somewhere and deploy it to N nodes? Why do I have to pay (or stay within a free tier) to host the blob? Many PaaS providers I've seen are happy to charge a few more bucks a month just to host Docker images.

I'm not seeing any sort of value added here (and maybe that's the point).

[+] suryao|1 year ago|reply
This sucks for individuals and open source. For folks that have a heavy reliance on dockerhub, here are some things that may help (not all are applicable to all use cases):

1. Set up a pull-through mirror. Google Artifact Registry has decent limits and good coverage for public images. This requires just one config change and can be very useful for mitigating rate limits if you're using popular images cached in GAR (see the sketch after the links below).[1]

2. Set up a private pull-through image registry for private images. This will require renaming all the images in your build and deployment scripts, and can get very cumbersome.

3. Get your IPs allowlisted by Docker, especially if you can't have Docker auth on the servers. The pricing for this can be very high. Rough numbers: $20,000/year for 5 IPs, and it usually goes upwards of $50k/year.

4. Set up a transparent Docker Hub mirror. This is great because no changes need to be made to pipelines except one minor config change (similar to 1). We wrote a blog about how this can be done using the official docker registry image and AWS.[2] It is very important NOT to pull the official docker registry image [3] from Docker Hub itself, as that pull can get throttled and lead to hairy issues. Host your own fork of the registry image and use that instead.

We spent a lot of time researching this for certain use cases while building infrastructure for serving GitHub Actions at WarpBuild.

Hope this helps.

[1] https://cloud.google.com/artifact-registry/docs/pull-cached-...

[2] https://www.warpbuild.com/blog/docker-mirror-setup

[3] https://hub.docker.com/_/registry
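
As promised above, a minimal sketch of option 1, assuming Google's public pull-through cache of Docker Hub at mirror.gcr.io (the daemon tries the mirror first and falls back to docker.io on a miss):

    # one config change: route Hub pulls through Google's cache first
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "registry-mirrors": ["https://mirror.gcr.io"]
    }
    EOF
    sudo systemctl restart docker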

[+] grandempire|1 year ago|reply
GitHub culture has gone a little crazy with things like CI, assuming these cloud providers will always be up and providing their services for free.

If your project can’t afford to pay for servers and some time to maintain them, I think we should stick with local shell scripts and pre-commit hooks.

[+] mrweasel|1 year ago|reply
The 10 pulls per IP per hour isn't my main concern. 40 pulls per hour for an authenticated user may be a little low, if you're trying out something new.

The unauthenticated limit doesn't bother me as much, though I was a little upset when I first saw it. Many businesses don't bother setting up their own registry, even though they should, nor do they care to pay for the service. I suspect that many don't even know that Docker can be used without Docker Hub. These are the freeloaders Docker will be targeting. I've never worked for a company that was serious about Docker/Kubernetes and didn't run its own registry.

One major issue for Docker is that they've always run a publicly available registry, which is the default and just works. So people have just assumed that this is how Docker works, and they've never bothered setting up accounts for developers or production systems.
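
Running your own registry really is a small lift; a minimal sketch with the standard registry image (tags and names are illustrative):

    # start a local registry, then retag and push an image into it
    docker run -d -p 5000:5000 --restart=always --name registry registry:2
    docker tag alpine:3.19 localhost:5000/alpine:3.19
    docker push localhost:5000/alpine:3.19
    docker pull localhost:5000/alpine:3.19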

[+] __MatrixMan__|1 year ago|reply
It seems like a good time to point out that oci images' layer-based caching system is incredibly bandwidth inefficient. A change to a lower layer invalidates all layers above it, regardless of whether there's actually any dependency on the changed data.

With a competent caching strategy (the sort of thing you'd set up with nix or bazel) it's often faster to send the git SHA and build the image on the other end than it is to move built images around. This is because 99% of that image you're downloading or pushing is probably already on the target machine, but the images don't contain enough metadata to tell you where that 1% is. A build tool, by contrast, understands inputs and outputs. If the inputs haven't changed, it can just use the outputs which are still lying around from last time.
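
To make the invalidation problem concrete, a hedged sketch (file names are hypothetical):

    # Dockerfile: any edit to the source tree invalidates the COPY layer and
    # every layer after it, so the install step re-runs and its layers get
    # re-uploaded even though their real inputs never changed
    FROM python:3.12-slim
    COPY . /app
    RUN pip install -r /app/requirements.txt

    # common workaround: copy the dependency manifest first, so the install
    # layer is only invalidated when requirements.txt itself changes
    FROM python:3.12-slim
    COPY requirements.txt /app/requirements.txt
    RUN pip install -r /app/requirements.txt
    COPY . /app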

[+] no_wizard|1 year ago|reply
When Docker went hard on subscriptions, my company pivoted to Rancher Desktop as the replacement.

I can't stress enough how much I dislike Rancher. I know we moved to it as a cost-saving measure, since I assume we would otherwise have had to buy Docker subscriptions.

Yet I have found nothing easier to use than Docker proper. Rancher has a Docker-compatible mode, and it falls down in various ways.

Now that this has happened, I wonder if Rancher pulls from the Docker Hub registry by default, in which case we'll now need to set up our own registry for the images we use, keep them up to date, etc. That feels like it would be more costly than paying Docker to begin with.

All this makes me almost miss Vagrant boxes.

[+] JamesMcMinn|1 year ago|reply
The storage costs coming in from 1st March feel like they're going to catch a lot of organisations out too. Private repos will cost $10/month per 100GB of storage, something that was previously not charged for. We're in the middle of a clear out because we have several TB of images that we'd rather not pay for on top of the existing subscription costs.
[+] pcthrowaway|1 year ago|reply
> 10 per IPv4 address or IPv6 /64 subnet

Finally, a use for IPv6!

I assume so anyway, as I think ISPs that support IPv6 will give you multiple /64 prefixes if requested.

[+] dustrider|1 year ago|reply
It’s their business choice, but they’re no longer the only option, nor in my opinion the best one.

Vote with your feet and your wallets.

[+] PeterZaitsev|1 year ago|reply
If you're getting something for free... you should ask who is actually paying for it, and how. Facebook can give you lots of stuff for free because they can show you ads and use that awesome data for various purposes.

Docker can't really market to the machines doing most of the downloads autonomously, and it probably can't monetize download data well either, so they want you to start paying them... or go use something else.

If I read these limits correctly, it looks like lots of things are going to break on March 1st.

[+] jillesvangurp|1 year ago|reply
Some obvious mitigations: don't depend on docker hub for publishing, use mirrors for stuff that does depend on that, use one of the several docker for desktop alternatives, etc. No need to pay anyone. Chances are that you already use a mirror without realizing it if you are using any of the widely used cloud or CI platforms.

Can one of the big tech companies please use their petty cash account to acquire what remains of docker.com? Maybe OSS any key assets and donate Docker Hub, trademarks, etc. to some responsible place like the Linux Foundation, which would be a good fit. This stuff is too widely used to be left hostage to an otherwise unimportant company like Docker. And the drama around this is getting annoying.

MS, Google, AWS, anyone?

Alternatively, let's just stop treating docker.io as the default place where containers live. That's convenient for Docker Inc. but not really necessary otherwise. Docker Inc is overly dependent on everybody defaulting to fetching things without an explicit registry host from there. And with these changes, you wouldn't want any of your production environments to depend on that anyway, because those 429 errors could really ruin your day. So any implied defaults should be treated as what they are: a user error.

If most OSS projects stop pushing their docker containers to docker hub and instead spin up independent registries, most of the value of docker hub evaporates. Mostly the whole point of putting containers there was hassle free usage for users. It seems that Docker is breaking that intentionally. It's not hassle free anymore. So, why bother with it at all? Plenty of alternative ways to publish docker containers.
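
Concretely, that just means always writing the registry host. A sketch (tags are illustrative; public.ecr.aws/docker/library mirrors the Docker official images, and the ghcr.io path is hypothetical):

    # implied default: silently hits docker.io and its rate limits
    docker pull nginx:1.27

    # explicit hosts: no hidden dependency on Docker Hub
    docker pull public.ecr.aws/docker/library/nginx:1.27
    docker pull ghcr.io/example-org/example-app:1.2.3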

[+] mmbleh|1 year ago|reply
These dates have been delayed. They will not take effect March 1. Pull limit changes are delayed at least a month, storage limit enforcement is delayed until next year.
[+] yencabulator|1 year ago|reply
A big part of the problem is that Docker has insisted on there being a "registry". In a better, more open, world, a container image would just be a thing fetched over HTTPS from anywhere that can serve large-ish files.
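
You can approximate that world today with docker save/load; a minimal sketch (the URL is a placeholder):

    # publish: serialize an image to a tarball any web server can host
    docker save alpine:3.19 | gzip > alpine-3.19.tar.gz

    # consume: fetch over plain HTTPS and load it, no registry involved
    curl -fsSL https://files.example.com/alpine-3.19.tar.gz | gunzip | docker load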