The narrative seems quite clear to me. They released the tooling and the services to become the de facto solution, and then Swarm was supposed to be the cash cow that turned that into cash flow. And then k8s happened.
They've raised a tonne of capital, and it probably looked sane at the time. And now they're grasping at straws trying to figure out how else they can turn this into revenue.
A lot of the recent narrative has been worded like pivoting into a glorified webhost was their evil plan all along. It's not, it's an act of desperation.
I feel like all that VC money was actually their undoing. Instead of working with the community, Docker spent bucketloads of money acquiring a lot of community projects and startups in an obvious attempt to become an end-to-end container solution company. But, they didn't have a plan and ended up killing most of those acquisitions. To me, it felt like Docker was using all the money they got to squash the community instead of working with it.
Every year at DockerCon, there would be flashy announcements that went nowhere. As a developer, those years from 2013 to 2017 were both super exciting and super frustrating. Everything started falling apart when Docker (the project) got split into Moby for open source and the rest went commercial. Docker started to sell Docker Swarm (the original), only to kill it a year later with a new Docker Swarm (what we have today). Then, Kubernetes started gaining traction, leapfrogging both Docker Swarms, Mesos, and others in adoption. They never had a cohesive commercial plan. Just lots of empty promises and burned bridges.
When I think of Docker (the company), I feel bitter about all the projects they killed in their attempt to own the market. I love using Docker (the software), but the company's just one big disappointment.
Speaking personally, the fate of Docker, Inc. was clear to me when they took their $40M Series C round in 2014. I had met with Solomon in April 2014 (after their $15M Series B) and tried to tell him what I had learned at Joyent: that raising a ton of money without having a concrete and repeatable business would almost inevitably lead to poor decision making.
I could see that I was being too abstract, so I distilled it -- and I more or less begged him to not take any more money. (Sadly, Silicon Valley's superlative, must-watch "Sand Hill Shuffle" episode would not air until 2015, or I would have pointed him to it.) When they took the $40M round -- which was an absolutely outrageous amount of capital to take into a company that didn't even have a conception of what they would possibly sell -- the future was clear to me.
I wasn't at all surprised when the SVP washouts from the likes of VMware and IBM landed at Docker -- though still somehow disappointed when they behaved predictably, accelerating the demise of the company. May Docker, Inc. at least become a business school case study to warn future generations of an avoidable fate!
- Bryan Cantrill, https://news.ycombinator.com/item?id=28460504
It always seems to come back to the VCs. Docker did great and started growing a lot. More VCs jumped onto the next possible “unicorn” and Docker had even more money. They had to do stuff with it.
But they weren’t paying back fast enough, and K8s was making waves. Better make money fast while you can. Time for the squeeze play so the VCs can win.
If they had been allowed to grow at a more natural rate maybe it would all be fine. If they were allowed to be happy with a 40% share of a big future market, things could be great.
But that’s not the VC-shoot-for-the-moon way.
> be the cash cow that turned that into cash flow. And then k8s happened.
We are still using Docker Swarm in production. It seems to be working fine, so I always wondered why it never took off. But I am not a DevOps person. Can somebody please give some insight on why Kubernetes took off instead and why Docker Inc. failed with its cloud product?
I'd rather have to deal with WebSphere 6.1 yet again than a poorer replication of its developer experience with lesser tooling.
I know I will get downvoted for this as off topic, but this is just the latest blog we've seen in this top 30 of many that show ZERO regard for legibility. Yes, I can zoom my browser, but c'mon.
A 13px font for paragraph text is nearly hostile. It's not that legible to people with perfect eyesight, but then it's not at all legible to anyone with imperfect eyesight. It's like saying you don't care if anyone who reads your blog would struggle doing that. And given how very simple it is to change, it's kind of insulting, specifically given how many years usability has been a thing.
10 years ago I wouldn't have written this comment. But now this isn't how you behave if you have an audience.
This website doesn't even set a font size. The font size is just your browser's default. You have your font size set too small. This website is one of the few examples of doing it right. How can a web developer know what your requirements are? Only you can know that. There are many bad things about the modern web but fortunately being able to set your own font size is still a thing.
Firefox even lets you set a minimum font size. And there is an option to stop websites overriding your choices which helps with sites like HN (but not the one you are complaining about) which explicitly set a small font size.
Not sure what you are talking about, the font seems much bigger than the one on hacker news and pretty standard sized for a website or a desktop (which usually has default font size at 11 or 12px).
Besides:
- Nearly all desktops, if not all, allow you to scale. If your eyesight is that bad, you should set it at the desktop level anyway.
- All browsers and terminal emulators allow you to use your own fonts and sizes.
- Nearly all browsers and terminal emulators, if not all, now allow you to zoom dynamically for that odd website and keep that preference.
- Firefox has reader mode, and I guess similar extensions exist for most browsers.
> And given how very simple it is to change, it's kind of insulting, specifically given how many years usability has been a thing.
Changing to which size? 16px, 32px, 64px? There is no single universal standard for eyesight. And I would argue that if your eyesight is bad, the solution is prescription glasses, not websites with huge fonts.
And yes, I do believe that Hacker News needs a bigger default font.
(I'm getting old. You will too!)
More annoying to me than the font size is the font selection. Maybe it's because I didn't grow up using monochrome terminals, but for me monospace fonts are generally terrible for legibility (except in places like coding where lining things up vertically is useful). Typographically, the font designers have to make all sorts of readability sacrifices to make all characters the same width.
In its weird death spiral, if Docker Inc. were to be bought out by Microsoft, I shudder to think how much of the dev ecosystem would yet again depend on Microsoft's good graces to shoulder the burden of storage and data transfer costs for building products. They already run npm and GitHub (+ GitHub Container Registry), so they have some standing as stewards in this space.
On the plus side, it would perhaps give enterprises more confidence about their build pipelines remaining dependent on Docker Hub, maybe even being more comfortable paying for it.
On the flip side, far too much of the dev ecosystem would depend on Microsoft, the famed supervillain of open communities. EDIT: With that sense in mind, I am indeed rooting for Docker Inc. to succeed.
You can always use Podman. We already have fully OSS solutions in the container space.
> I shudder to think how much of the dev ecosystem would yet again depend on Microsoft's good graces to shoulder the burden of storage and data transfer costs for building products
Does that hint that the model of sending around megabytes-to-multigigabytes of VMs is inherently too expensive to maintain as a backbone for an awesome tool?
For the same reason, I wonder who provides Maven Central and the NPM repositories, and whether they will keep doing it for free, but at least those are billions of small jars, not hundreds of thousands of gigabyte-sized VMs.
FOSS: "Never thought I'd die fighting side by side with Microsoft."
Microsoft: "What about side by side with a friend?"
FOSS:
> How is it that Docker Inc., creator of one of the most important and ubiquitous tools in the modern software industry, has become such a backwater of rent-seeking and foot-shooting?
My guess: Because not all good ideas are profitable. Especially in software.
I read most but not all of the article, so if I missed this already being stated, that’s egg on my face.
As a newcomer to the devops world I was kind of surprised at the general thesis of this article, that companies use Docker Hub and using something different is awkward. Neither of the two companies I’ve worked for uses it (Artifactory in both cases), and there is a general taboo around having Docker Desktop binaries on any company systems (though Docker Engine seems to be prevalent). I guess I had just assumed that the golden/default path was to use one of the (non-Docker) commercial registries. So from that perspective, the suggestion that there are some patterns that still use Docker Hub by default was actually enlightening.
as a 1 man shop i actually went back to docker hub after my DO based registry started flaking. tbf i was self hosting the registry because I'm cheap but the images were in do spaces. worked fine for a couple years then became unreliable. knowing not much else i decided to give hub the $5/mo or whatever to save me the pain.
not looking forward to figuring out what magic auth keys i need to reconfigure to pull from somewhere else now
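for anyone facing the same move, my understanding is it's mostly one login plus retag-and-push (registry.example.com and myuser/myapp below are placeholders, not anything real):

    docker login registry.example.com          # creds land in ~/.docker/config.json
    docker pull docker.io/myuser/myapp:1.0     # grab the existing image from Hub
    docker tag docker.io/myuser/myapp:1.0 registry.example.com/myuser/myapp:1.0
    docker push registry.example.com/myuser/myapp:1.0

plus grepping CI configs and compose files for the old image name.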
> In particular, the union file system (UFS) image format is a choice that seems more academically aspirational than practical. Sure, it has tidy properties in theory, but my experience has been that developers spend a lot more time working around it than working with it.
What is the alternative that is better? The ability to have layers that build on top of each other and can be cached is a big feature... what alternatives provide that and are better?
IMO image definitions should be a list of mounts that may be overlays on root but may also be more “normal” mounts to directories within root. I should be able to make an image that is ubuntu:bionic plus a conda installation at /opt/conda plus a personal package at /usr/local/mything. Currently you have to decide how to stack those layers, which is unnatural and prevents sharing/deduplication of partial-filesystem images where there’s no reason to prevent it.
Taken to the extreme, look at something like Nix (or conda, come to think of it). Why can’t I just have one copy of a package of a given version shared by all containers, if they all want that package? Unix file systems should be great at that kind of composability; that’s the advantage of a unified tree instead of a tree-per-source. But in the docker model, you’re stuck with a stack.
My ideal image definition is a hybrid between docker’s immutable hash-addressed image layers and an fstab file to describe how and where to mount them all.
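To make that concrete, here is a purely hypothetical sketch of such an image definition (invented syntax, not anything Docker supports; digests shortened):

    # image-as-fstab, hypothetical: <layer digest>  <mount point>  <how>
    sha256:aaaa...    /                    base     # ubuntu:bionic rootfs
    sha256:bbbb...    /opt/conda           mount    # conda tree, built independently
    sha256:cccc...    /usr/local/mything   mount    # my package, shared by any image that wants it

Because each line is content-addressed and position-independent, two images wanting the same conda layer at the same path could genuinely share it, instead of each baking it in at a different depth of the stack.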
The POSIX standard requires certain behaviours from the filesystem that POSIX-compliant software can rely on.
Unfortunately, those behaviours are mutually exclusive with transparent layering.
It's certainly possible to build a file-system whose behaviours are compatible with that kind of transparent layering - Plan9 was built on exactly that model, for example - but then it wouldn't be a POSIX-compliant filesystem anymore.
The promise of Docker was that you'd be able to deploy your existing applications in a more reliable, repeatable way, but that breaks down when you have to tinker with your application's file-handling code, or jump through extra hoops to flatten the layers of your container's filesystem image.
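A concrete example of that mismatch, as a sketch (assuming Linux overlayfs with default mount options; exact behaviour varies by kernel version): hard links are one of the POSIX guarantees that layering quietly breaks.

    mkdir -p lower upper work merged
    echo hello > lower/a
    ln lower/a lower/b                # hard links: a and b are the same inode
    sudo mount -t overlay overlay \
        -o lowerdir="$PWD/lower",upperdir="$PWD/upper",workdir="$PWD/work" "$PWD/merged"
    ls -i merged/a merged/b           # same inode number, as POSIX software expects
    echo world >> merged/a            # the write triggers copy-up of 'a' alone
    ls -i merged/a merged/b           # inode numbers now differ: the link silently broke

The overlayfs kernel docs list this under non-standard behaviour, and it's exactly the kind of thing existing applications trip over.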
* The general idea of mixing together filesystems+folders to achieve re-use/sharing/caching.
* The "Dockerfile" approach to this - with its linear sequence of build-steps that map to a linear set of overlays (where each overlay depends on its predecessor).
The "Dockerfile" approach is pretty brilliant in a few ways. It's very learnable. You don't need to understand much in order to get some value. It's compatible many different distribution systems (apt-get, yum, npm, et al).
But although it's _compatible_ with many, I wouldn't say it's _particularly good_ for any one. Think of each distribution-system -- they all have a native cache mechanism and distribution infrastructure. For all of them, Dockerization makes the cache-efficacy worse. For decent caching, you have to apply some adhoc adaptations/compromises. (Your image-distribution infra also winds up as a duplicate of the underlying pkg-distribution infra.)
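(One concrete example of those ad hoc adaptations, as a sketch assuming a BuildKit-enabled Docker: cache mounts, which bolt apt's native cache back on from outside the layer model.

    # syntax=docker/dockerfile:1
    FROM ubuntu:22.04
    # the stock image ships an apt hook that deletes the cache after every install; drop it
    RUN rm -f /etc/apt/apt.conf.d/docker-clean
    # keep apt's own cache across builds instead of throwing it away with each layer
    RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
        --mount=type=cache,target=/var/lib/apt,sharing=locked \
        apt-get update && apt-get install -y build-essential

It works, but notice that it's apt-specific plumbing re-added by hand; the Dockerfile model gives you none of it for free.)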
Here's an alternative that should do a better job of re-use/sharing/caching. It integrates the image-builder with the package-manager:
https://grahamc.com/blog/nix-and-layered-docker-images/
Of course, it trades away the genericness of a "Dockerfile", and it no doubt required a lot of work to write. But if you compare it to the default behavior or to ad hoc adaptations, this one should provide better cache-efficacy.
(All this is from the POV of someone doing continuous integration. If you're a downstream user who fetches 1-4 published images every year, then you're just downloading a big blob -- and the caching/layering stuff is kind of irrelevant.)
I’m someone who had a front row seat to the emergence of Docker, and some might say competed with them (I’d disagree on that point). I don’t plan on commenting on their company, business model, or recent decisions. The only thing I want to comment on is the claim Docker was evolutionary, not revolutionary.
I disagree, I believe Docker /was/ revolutionary. And I feel like I see heavy technologists make this sort of dismissal based on technical points too soon. From a technical perspective, it was arguably evolutionary — a lot of people were poking at LXC and containerization a long time before Docker came around — but from a product perspective it was surely revolutionary.
I used to joke, in my own experience building a business in the DevOps space, that you’d spend 2 years building a globally distributed highly scalable complex piece of software, and no one would pay for it. Then you slap a GUI on it, and suddenly someone is willing to pay a million dollars for it. Now, that’s mostly tongue in cheek, but there is a kernel of truth to it.
The kernel of truth is that the technology itself isn’t valuable; it’s the /humanization/ of a technology, how it interfaces with the people who use it every day.
So what Docker did that was revolutionary was take a bunch of disparate pieces, glue them together, and put an incredible user experience on top of it so that that technology was now instantly available in minutes to just about anyone who cared.
At some point in the article, the author says it’s maybe something about a “workflow.” I’m… highly biased to say yes, absolutely. One of my core philosophies (that became the 1st point of the Tao of the company I helped start) is “workflows, not technologies.” When I talk about it, I mean it in a slightly different way, but it’s highly related: the workflow is super valuable for adoption, the technology is to a certain extent, but less so.
Technology enthusiasts (hey, I’m one of you!) usually hate to hear this. We all want to think you build the best thing or a revolutionary thing and then it just wins. That’s sometimes, but rarely, the case. You need that aspect, and you ALSO need timing to be right, the interface to be right, the explanation to be right, etc. Docker got this all right.
(Now, turning the above success into a business is a whole different can of worms, and like I said in the first paragraph, I don’t plan on commenting.)
For the author: I don’t mean any offense by this. I mostly agree with the other points of your post. The “FROM” being revolutionary had me nodding quite vigorously. Being able to “docker run ubuntu” was super magical, etc. I mostly wanted to point this out because I see MANY technologists dismiss the excitement around technologies purely on the basis of the technology over, and over, and over again, and the sad thing is it's just one part of a much bigger package.
I'm not sure that we really disagree, but I wrote this sort of late and I also think I wasn't entirely clear. The point I was trying to make is that the "container runtime" part of Docker is a lot less important than the tooling they put around it, and they made Docker Hub a very core part of that broader ecosystem.
> The kernel of truth is that the technology itself isn’t valuable; it’s the /humanization/ of a technology, how it interfaces with the people who use it every day.
Apple in a nutshell.
The Docker saga teaches us the significance of default settings, the relationship between free and paid software services, and the need to consider the economic implications of relying on free services provided by a company, as these can change over time.
How does the Docker story compare to NPM, who are also freely hosting a bunch of stuff, heavily downloaded, relying on some paid users but mostly free? And NPM has “competing” repositories too. Could the same happen with NPM, where they need to charge?
I get that NPM packages are smaller than docker images typically.
This is a great point but do note that it is quite easy to set up [1] a private npm registry as well. Most orgs actually do just that, as you really do not want a production build failing if npm goes down.
Either that or you vendor in your dependencies.
[1] https://smalldata.tech/blog/2023/03/17/setup-a-private-npm-r...
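As a sketch of how low the bar is (using Verdaccio, one popular option; the post in [1] may use something different):

    npx verdaccio &                                   # local registry on http://localhost:4873
    npm config set registry http://localhost:4873/    # point npm at it
    npm adduser --registry http://localhost:4873/     # create a user so you can publish
    npm publish                                       # private packages now stay in-house

By default Verdaccio proxies anything it doesn't have through to npmjs.org, so previously fetched packages also survive an upstream outage.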
You can run an in-house npm repository with the one sold by Npm Inc.
I don’t know how sustainable it is but that’s probably one of their cash cows.
At current average SaaS revenue multiples (6.7), Docker is on the cusp of Unicorn status.
It's weird to read comments about "poor, sad, dying Docker" given how ridiculously successful Docker's Desktop licensing scheme is.
https://devclass.com/2023/03/24/docker-subscription-revenue-...
Yes, Solaris was doing "containers" in the mainstream before Linux, but pointing that out as a response to articles like these misses the point of how and why Docker exploded; it was Docker's UI, the user experience and Docker Hub that really unlocked the full potential of the technology.
Solaris containers didn't have anything like Docker Hub nor was setting them up as easy as "docker run".
(Posting as ex-Solaris guy)
Nomad is awesome and works at scale. The engineers continue to battle harden it and it’s a joy to work with. You do have to manage things like service discovery (usually with consul) and traffic routing separately - but the integration with vault is sublime.
About the only real negative of Nomad is that it doesn’t have the mindshare that k8s does, so you don’t see the amount of developer engagement in extending it the way you do in the k8s SIGs. Also, being an expert in Nomad doesn’t give you the same number of career opportunities, and on the other side - there aren’t umpteen thousand nomad SREs the way there are with k8s - so getting someone up to speed can take a couple months (but this system is very well defined, well documented, and small enough that any half talented engineer can master it very quickly)
Nomad does have the very important advantage that Hashicorp stands behind the product - so if anything goes awry, you’ve got a support team and escalation that will jump on and root cause/resolve any issue, usually within a matter of hours, and even in the really squirrelly cases (that you are only likely to see when you are managing many, many thousands of nodes in a cluster) within days.
They have expertise and they have visibility. If I were them, I would extend docker-compose with a cloud version that runs flawlessly, including stateful workloads with backups and restores, and charge for that. Heroku, but even more simplified.
You change your docker-compose, push, and we detect it via webhook and deploy. Logs, metrics, everything from the command line with Bubble Tea or something.
Most companies have brilliant engineers and shortsighted, incompetent, out-of-touch product teams.
> There's been a lot of discussion lately about Docker, mostly about their boneheaded reversal following their boneheaded apology for their boneheaded decision to eliminate free teams.
So making a bad decision is bad, but admitting it was a bad decision and reversing it is also bad?
You lose a lot of goodwill from the community if you refuse to change the code so that `docker pull image` no longer defaults to hub.docker.com AND then start to monetize teams, ESPECIALLY non-commercial open-source teams.
If they mandated specifying a registry, then many more people would host their own and take the load off of their system.
But no, they want to have it all. And yea, they can. That I don't care about.
But you can't make a change, and walk back from it, and expect people to be happy, given the story that came before all of this.
Currently the situation is that a lot of people will think twice before generating any dependencies with free teams, or perhaps Docker altogether.
Admitting it was bad and reversing it is (usually) better than just letting the original bad decision stand, but you can't erase what you've done. You still publicly decided to do something that caused people to lose trust and faith in you. People will wonder if you're going to make other bad decisions, and then not walk them back when people tell you how bad those decisions are.
There are also good and bad ways to apologize and change your mind. I don't have an opinion as to whether or not Docker's apology and reversal were done well, but I think it's fair that some could believe they weren't.
Announcing it damaged their reputation. Reversing doesn't undo that (because there's always the chance they'll do it again), but now they don't even have the benefit of not having to host so many images.
What went bad is a little thing called trust. From the community.
Probably, especially from those who think in the long term, like those who build things for themselves. They don't like to read news from Docker every day and keep in mind that their project images and even base images can just disappear overnight. It's too expensive for them to track Docker's decisions. It just takes resources.
Making bad decisions is not bad at all. Losing trust is.
> Docker images are relatively large, and Docker Hub became so central to the use of Docker that it became common for DevOps toolchains to pull images to production nodes straight from Docker Hub
Not only that, but it was actively encouraged by all Docker fanbois to pull as soon as you can. When I saw Watchtower the first time I was just speechless.
Though IMO they had a chance at getting money long before that debacle: https://news.ycombinator.com/item?id=34377674
> Docker Inc.'s goal was presumably that users would start using paid Docker plans to raise the quotas but, well, that's only attractive for users that either don't know about caching proxies or judge the overhead of using one to be more costly than Docker Hub... and I have a hard time picturing an organization where that would be true.
But that achieved their goal too?
They wanted to reduce losses from bandwidth costs, and that works by either making users pay or having them use less bandwidth.
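(For reference, the caching proxy in question is nearly a one-liner: the stock open-source registry image can run as a pull-through cache of Docker Hub. A sketch:

    docker run -d -p 5000:5000 \
        -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
        registry:2
    # then point the daemon at it via /etc/docker/daemon.json:
    #   { "registry-mirrors": ["http://localhost:5000"] }

which is why the rate limits mostly pushed organizations toward mirrors rather than paid plans.)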
Question: is it possible for Docker to die as a company, the VCs lose their money, but the technology survives and is still the mainstay? If the answer is no, what's the future, and what do you expect the timeline will be? Do you have a probability of that actually occurring?
I don’t know how Docker Hub falling by the wayside plays out. I suppose most cloud providers really should offer their own container image repo mirrors or something instead. But it’ll be painful
Wild to me that Docker Inc. doesn’t just charge to pull prebuilt images. Just send Dockerfiles!
Yes, this is the default assumption.
They had guaranteed cost (hosting & serving a bunch of heavy data) and no obvious monetization play available.
How can the decision and reversal both be boneheaded?
> Still, the point of this tangent about Docker Desktop is that Docker's decision to monetize via Desktop---and in a pretty irritating way that caused a great deal of heartburn to many software companies---was probably the first tangible sign that Docker Inc. is not the benevolent force that it had long seemed to be. Suddenly Docker, the open-source tool that made our work so much easier, had an ugly clash with capitalism.
> Docker Hub, though, may yet be Docker's undoing. I can only assume that Docker did not realize the situation they were getting into. Docker images are relatively large, and Docker Hub became so central to the use of Docker that it became common for DevOps toolchains to pull images to production nodes straight from Docker Hub. Bandwidth is relatively expensive even before cloud provider margins; the cost of operating Docker Hub must have become huge. Docker Inc.'s scaffolding for the Docker community suddenly became core infrastructure for endless cloud environments, and effectively a subsidy to Docker's many users.
I'm not sure why they couldn't have been a bit more aggressive about monetization from the start?
DockerHub could have been free for an X amount of storage, with image retention of Y days by default, with Z amount of traffic allowed per month. The Internet Archive tells me that they got this half right, with "unlimited public repos" being where things went wrong: http://web.archive.org/web/20200413232159/https:/hub.docker....
> The basics of Docker for every developer, including unlimited public repos and one private repo.
For all I care, Docker Desktop might have just offered a CLI solution with the tech to run it (Hyper-V or WSL2 back ends) for free, but charge extra for the GUI and additional features, like running Kubernetes workloads. BuildKit could have been touted as an enterprise offering with immense power for improving build times, at a monetary cost.
Perhaps it all was in the name of increasing adoption initially? In a sense, I guess they succeeded, due to how common containers are. It is easy to wonder about these things after the fact, but generally people get rather upset when you give them things for free and later try to take them away, or introduce annoyances. Even if you cave to the feedback and roll back any such initiatives, the damage is already done, at least to some degree.
I still remember a piece of software called Lens one day starting to mandate that users sign in with accounts, which wasn't previously necessary. The community reacted predictably: https://github.com/lensapp/lens/issues/5444 (they also introduced a subscription plan later: https://www.reddit.com/r/kubernetes/comments/wakkaj/lens_6_i...)
That said, I self host my own images in a Nexus instance and will probably keep using Docker as the tooling/environment, because for my personal stuff I don't have a reason to actually switch to anything else at the moment and Docker itself is good enough. Podman Desktop and Rancher Desktop both seem viable alternatives for GUI software, whereas for the actual runtimes and cloud image registries, there are other options, though remember that you get what you pay for.
You grow WAY faster with a free product. There is no downside for people trying it out.
If it was paid, even a small amount, that’s a hurdle for people. Plus people avoiding it would have created more/stronger competing products as they had more incentive.
Get a ton of users then try to monetize later is a very common SV play for VC backed companies.
>I'm not sure why they couldn't have been a bit more aggressive about monetization from the start?
I'm not sure there is that much money in running a glorified specialized S3. Charging for disk space is terrible - the people who have money would probably just set up a private repo on S3, where it's cheaper, and the people who don't aren't going to pay you.
For the amount of money they raised I don't think that would have been a convincing story to tell.
I think we're past the point where key players like AWS have _run with_ the technology Docker provided and did not pay their fair share in the process.
Docker as a company may be a joke, but I don't think the software will be nearly as nice to use without them. I think it's ridiculous that so many asshats are jumping on the hate Docker (the company) bandwagon without understanding how much they have been taken advantage of by the big players who can absolutely support them, but choose not to.
Sometimes I am so disappointed at how much ego still exists in tech. We're supposed to be more educated than the folks who came before us, yet we're doing a worse job.
> Docker as a company may be a joke, but I don't think the software will be nearly as nice to use without them. I think it's ridiculous that so many asshats are jumping on the hate Docker (the company) bandwagon without understanding how much they have been taken advantage of by the big players who can absolutely support them, but choose not to.
As much as I do not condone said big players' actions here, the whole system just doesn't reward "doing the right thing" as a general rule. If the licensing allowed it, they were within their rights to do it. Even if they did do the right thing and support them, their competitors may not have. The morality element just doesn't have much weight the way things are.
No one will pay unless you force them. We can wish all we want that the world is different but I've seen this over and over. You need to hold something back from day one or you'll never make money.
The Docker management team needs visionaries. Someone who actually understands what Docker can truly be. Right now they are just trying to milk the cow before it can even produce milk.