> Though it [Docker] does live on strongly within CI/CD ecosystems and, ostensibly, the inner loop of development thanks to the de facto standard Dockerfile.
Docker will still live on for both Windows and Mac developers. As a platform for running production code it might be dead or dying, but as an ecosystem and a development tool it will continue to live and probably thrive.
Docker is still the simplest way to install Elasticsearch, to make sure everyone on your team is using the same Java version, or to spin up a production-equivalent environment on your laptop, regardless of whether you are on Linux, Mac, or Windows.
I still love Docker and hope they end up finding a good business model so they can continue to live on.
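For reference, the Elasticsearch and Java-pinning use cases above are a few lines each. This is a sketch (image tags are illustrative; the `docker` commands need a running daemon, so they are shown commented):

```shell
# Throwaway single-node Elasticsearch for local development:
#   docker run -d --name es-dev -p 9200:9200 \
#     -e "discovery.type=single-node" \
#     docker.elastic.co/elasticsearch/elasticsearch:7.10.1

# Pinning the team's Java version works the same way: bake it into an image
# that everyone builds and runs.
cat > Dockerfile.jdk <<'EOF'
FROM openjdk:11-jdk-slim
WORKDIR /app
CMD ["jshell"]
EOF
#   docker build -f Dockerfile.jdk -t team-jdk .
echo "wrote Dockerfile.jdk"
```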
To me this article conflates the removal of a piece of legacy hard-coding (dockershim) with the overall death of Docker.
That removal doesn't mean that Docker won't be used as part of Kubernetes clusters any more, Docker/Mirantis have committed to creating a CRI plugin for Docker.
But realistically Docker the product is primarily a developer tool and I don't see that going away. Docker for Windows/Mac is the easiest way to use containers, without having (mostly) to worry about low-level implementation details.
Whether that can be translated to a successful business model, is another question :)
> Docker for Windows/Mac is the easiest way to use containers
This is depressing.
Even more depressing is that macOS still doesn't seem to have native containers. (Correct me if I'm wrong - perhaps the sandboxing system can be used as a container system including virtual network interfaces attached to process groups?)
Docker images depend on Linux which means "Docker" on macOS runs in a Linux VM. Presumably docker for Windows could use WSL.
Author here. Thanks for your comment! I struggled with this as well. I agree that I might be conflating things a bit, but I did try to contextualize the slow but gradual move away from all things docker with the fate of the company, starting with the platform wars and resulting in the runtime deprecation. I think a lot of what happened came about because Docker took on Kubernetes with Swarm and lost.
That this was written by a (ex?) Red Hat employee makes a lot of sense. We burned significant time in the last 2 weeks because RHEL 8 intentionally makes it very hard to `dnf install docker-ce` (even though you are literally just installing the centos/rhel 7 package), and instead RH docs + staff tell IT admins who don't know better to `dnf install podman` as the only correct option and lie that it is drop-in compatible. Then we get support tickets because, surprise, podman often doesn't actually replace docker, due to bugs or, in areas like `docker-compose`, because the functionality isn't even implemented.
Docker's second death is very much IBM/RHEL's anti-competitive intent. An infrastructure provider should be neutral on how they present this kind of thing, especially a company selling enterprise reliability for running software on top. The 1984-esque doublespeak has been quite souring with respect to the ethics of doing business with them around cloud/container-era technology and trusting them for enterprise reliability.
I do find the podman ecosystem technically interesting... but due to the unethical corporate stewardship we're seeing, I'm uncomfortable seeing anyone use it.
The article and some of the comments are a strange take. There is a place for Docker. There is a place for Kubernetes. They don't have much overlap in my mind.

If I want to run an elastic production system with many components, and scale each component independently, I'd use Kubernetes.

...But if I want to run a Jupyter+PyTorch stack easily without wasting half a day on CUDA library dependency issues, I would use Docker without Kubernetes -- because I don't want to go down the rabbit hole of
1. Installing kubernetes on my laptop
2. Ingress Controller hell
2b. Ingress Controller route/path/url annotation hell to make something like Jupyter work in Kubernetes

...when I can do that in 5 minutes with docker.
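The "5 minutes with docker" version of a Jupyter+PyTorch stack might look like the sketch below (the `pytorch/pytorch` base image is real, but the JupyterLab layering and tags are illustrative; the commented commands need a Docker daemon, and `--gpus` needs Docker 19.03+ with the NVIDIA container toolkit):

```shell
# Bundle PyTorch and JupyterLab into one image -- no CUDA library wrangling
# on the host, since the base image ships its own CUDA userspace.
cat > Dockerfile.jupyter <<'EOF'
FROM pytorch/pytorch:latest
RUN pip install --no-cache-dir jupyterlab
EXPOSE 8888
CMD ["jupyter", "lab", "--ip=0.0.0.0", "--allow-root", "--no-browser"]
EOF
#   docker build -f Dockerfile.jupyter -t jupyter-torch .
#   docker run --gpus all -p 8888:8888 -v "$PWD":/workspace jupyter-torch
echo "wrote Dockerfile.jupyter"
```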
Of course but you’re forgetting that Docker is a commercial organization as well. If not for large scale container orchestration in the cloud, how can they add value that’s worth paying considerable $ for?
Docker Inc's antagonistic attitude towards Red Hat and personal attacks on their engineers was the beginning of the end for them. The people who really headlined those attacks are now gone, but the damage was done. Docker made it clear from the beginning they had no desire to be part of a community and now the community is leaving them behind. Good riddance.
Docker Inc put themselves in an impossible situation because they wanted Docker to be a standard and also be a massively profitable monopoly but they didn't have enough moat to pull it off. Red Hat and Google saw the monopolization coming and aggressively commoditized Docker for the good of the community but also for their own benefit.
Now the last battle is over Docker Hub.

They threw me off the Docker Captains programme for saying that Rancher was a good product. Hilarious!
I don’t understand all the drama. I remember when everyone complained that docker was too monolithic and controlled by one company. In response they spun out a spec (OCI), an implementation of that spec (runc), then their entire freaking runtime (containerd). They focused on making Docker more of a developer tool with Docker for Mac and Windows.
Kubernetes continues to use OCI, runc and containerd - so basically the parts of Docker the kubernetes community asked for.
Yet here we are commenting on their demise as a business, blaming it on their “not being nice” and “not listening to their community”. It’s bullshit. We should be discussing how the longstanding tension over the role of Docker in the Kubernetes ecosystem was finally resolved, in large part through the successful efforts of the OCI, runc and containerd projects, which Docker started and shepherded to mass adoption.
Let’s take a step back and look at who is pushing this narrative about Docker, and how they benefit. I can’t help but notice whenever this drama pops up, a Red Hat employee is involved. Maybe it’s time for Red Hat to give up on old grudges and give credit where credit is due? Just a thought.
Docker may have failed as a business but it’s ridiculous to blame it on their lack of openness when their number one problem was being too open and trying too hard to get everyone to love them. Basically the opposite of what this blog post claims.
Fact: Docker is insanely popular with devs but Docker Inc. is struggling as a company. Devs have invested a lot on the platform and entire production systems and deployment pipelines use it.
k8s announces "we're no longer supporting docker shim!" and what most devs heard is "k8s is moving off from docker (which we know has been struggling for a while)! Fuuuuuccckkkk what do we need to do??" and panicked. This is the source of the drama, and we're going to see many "takes" on it. It's just the way the blogging ecosystem works.
Hey there. Author of the article here. I'm no longer a Red Hatter and my words are strictly my own.
I don't intend to push a narrative, rather my intent is to communicate what I experienced with Docker in an enterprise setting across time. A lot of this also comes from my time outside of Red Hat as well.
Even though Kubernetes won't use Docker inside anymore, Docker is still a very good piece of software for easily running Linux images on Windows and Mac. Dockerfile is still the easiest way to build images (even though there are now alternatives to docker build such as buildah). docker-compose is still a very simple way of running containers locally if you don't need kubernetes (e.g. on my Raspberry Pi for running homeassistant, transmission and plex)...

but yeah, Docker Inc. overestimated their importance and got what they deserved: open source tools wanting to distance themselves from them.
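The home-server compose setup mentioned above is a good illustration of why docker-compose sticks around: one file, one command. A minimal sketch (image names are the commonly used community ones and should be treated as illustrative; `docker-compose up` needs a running daemon, so it is shown commented):

```shell
# Home-server stack: Home Assistant, Transmission, Plex in one file.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  homeassistant:
    image: homeassistant/home-assistant:stable
    network_mode: host
    volumes:
      - ./ha-config:/config
    restart: unless-stopped
  transmission:
    image: linuxserver/transmission
    ports:
      - "9091:9091"
    restart: unless-stopped
  plex:
    image: linuxserver/plex
    network_mode: host
    restart: unless-stopped
EOF
#   docker-compose up -d
echo "wrote docker-compose.yml"
```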
It's worth noting that dockershim isn't actually getting removed at the moment, it's being deprecated (so a warning will pop up in kubelet logs) and there's no fixed date for actual removal.
At the same time Docker/Mirantis have committed to creating a CRI plugin for Docker, so it seems pretty likely that Docker is and will continue to be an option for Kubernetes clusters.
After they released local kubernetes support for the docker mac client, I've stopped using docker swarm completely. Although to be fair, I didn't use it all that frequently anyways.
If your language runtime supports testcontainers (https://www.testcontainers.org/) I would strongly suggest using that over docker compose. Docker compose is so lacking in features that I would just write bash scripts to setup/tear down container dependencies. But testcontainer removes the need to write bash scripts and provides some basic orchestration features that make it an absolute breeze to use.
Does it mean that developers working locally with containers should ditch Docker and Dockerfiles altogether?

If so, what should they replace them with, especially if all you want to deploy to prod is contained in a single docker-compose? Is switching local development to kubernetes a good idea in terms of performance, fast feedback loop, development experience (smooth flow), etc.?
Author here. In the last part of my post, I do mention that Dockerfiles are probably the one thing that will outlast everything else from Docker. There's nothing really practical to replace that artifact at this time if you ask me.
There are some tools to convert your docker-compose into the k8s resource model. I'd advise against switching local development to Kubernetes. Docker found a sweet spot there, as heavyweight as it is today. I think where we'll land is Dockerfile + one of the other alternatives like kaniko, buildah, etc.
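One such compose-to-k8s converter is kompose. A sketch, assuming the `kompose` binary is installed (the sample service is illustrative; output file names are what kompose typically emits):

```shell
# A trivial compose file to convert into Kubernetes resources.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
EOF
#   kompose convert -f docker-compose.yml
#   (emits Deployment/Service manifests, e.g. web-deployment.yaml)
echo "wrote docker-compose.yml"
```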
Really, there is none. After reading through so many of these types of threads, I'm convinced there is no sustainable business model for open source infrastructure.
The best bet would be to build proprietary paid services on top of the open source infrastructure. See Laravel's projects.
Docker, Inc. has the opportunity to be the NPM, Inc. for container images. How much did NPM sell to GitHub for?
Less concretely, is there a business model for a low-level piece of infrastructure like Docker? Is there a business model for ncurses or readline or df or ls?
"Real world" infrastructure is typically high capex, low margin. How many VC-funded startups build bridges or tunnels? SpaceX is pivoting to Starlink to find good margins and escape the fate of being a trucking company to space.
There is an interesting talk [0] by Philipp Krenn (Elastic) on “Open Source as a Business. Strategy, struggle, success”. This talk takes the perspective of Elastic, the company behind the open source products Elasticsearch, Kibana, Beats, and Logstash, which makes its money with support, the commercial extensions, and cloud offerings. But we are also taking a look at how others are approaching this challenge, what worked, and what failed.

[0] https://media.ccc.de/v/froscon2019-2463-open_source_as_a_bus...
It depends on what that infrastructure is. Docker owes its existence to the OS not providing an adequate interface to create lighter containers than 00's style virtual machines. They chose to be imported and used as part of the stack by kubernetes instead of offering a better solution, which was clearly where kubernetes was headed. This one is less of a model question, than a market position question.
Elastic, on the other hand, is a very specialized application and the main issue they've had is having to compete with cloud providers (MongoDB also shares this problem). It's a very different problem, and both companies (Elastic and Mongo) seem to be finding ways to compete and cooperate. Elastic on Amazon is a great gateway drug to Elasticco's offerings.
Dev-tooling. I would have expected them to come up with something like https://www.testcontainers.org/ and sell enterprise/community licenses, just like they do with their cross platform clients. Enterprises need support, and by creating a rich suite of products that become essential to developers, they could have maybe really stuck (hard to predict what could happen).
Instead they tried really hard to make docker swarm a thing while kubernetes started taking off. To be sure, at the time, it wasn't clear which platform would succeed, and I remember that a certain OpenStack project tried to support both (like all OpenStack projects that try to be everything to everyone). Kubernetes has a lot of concepts that need to be learned, so the barrier to entry was much higher, and docker swarm seemed more straightforward.
All devs loved docker right away but in the early days I remember there being a lot of blogs about NOT running containers in production because of all the security issues. Kubernetes made that problem go away. It got adoption by different cloud providers which made it easy to deploy/use on their platform. That was maybe a tell: if devs loved docker so much and kubernetes was the tool that cloud providers supported, they could have focused on the former.
My own opinion: I think there is one, as a marketplace. Years ago when I started browserless.io, I wanted to find a way to sell access to the core image of ours. Docker kinda has/had a marketplace, where you can buy access to curated and secure images, but it didn’t get any support or news on it. Because of this we went the open-code route and just sold licenses. Too bad, because I’d much rather have had a marketplace to do this instead.
https://en.wikipedia.org/wiki/Docker,_Inc.

Heroku had a lot of revenue and was acquired by Salesforce. I just learned on Twitter that YC was in the red before the Heroku acquisition!

https://twitter.com/paulg/status/1334945195532685317
I think what Heroku did right is really nail the Rails experience, and the Rails customer base. And then they expanded to multiple languages with the "Cedar" stack.
dotCloud tried to do every language from the get-go, and had a subpar experience for all of them, from what I understand.
Private repo host...because whatever they charge, it is cheaper than using an FTE to maintain it internally unless you want a flaky internal SLA.
I use it as a private repo host. The cost is a no-brainer. We'd probably pay 2x or 3x more compared to the value we're getting. The independence from the cloud services (ECR, ACR, GCR) makes it a better option. Also, for now, there isn't any funny math on multi-factor egress costs -- which become tricky to compute in real life.
We use it for containers to be run on k8s. As long as there are not super-low latency requirements for startup, I'd prefer an independent single private repo over multiple in-zone repos.
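The private-repo workflow being described is just the standard tag-then-push flow; the same commands work against Docker Hub, ECR, ACR, or GCR by changing the repository prefix. A sketch with illustrative names (the generated script needs a daemon and a prior `docker login` to actually run):

```shell
# Generate a push script for a private repository (names are illustrative).
cat > push.sh <<'EOF'
#!/bin/sh
# Assumes `docker login` has been run once on this machine.
docker tag  myapp:1.0 myorg/myapp:1.0
docker push myorg/myapp:1.0
# k8s nodes / CI runners then pull the same reference:
#   docker pull myorg/myapp:1.0
EOF
chmod +x push.sh
echo "wrote push.sh"
```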
I am quite saddened by the article's tone: "technical debt of docker" ouch that's hurtful to Solomon and Sam...
I personally always take the view that docker, by all accounts, blew a potential once-in-a-decade chance of becoming the next VMware, because of its own arrogance and incompetence.

But I am very much disgusted by the community's misplaced hate towards docker, and the equally misguided goodwill towards the big corps, particularly Google. After all, the whole container community owes its creation to docker. One can always claim that containers existed long before docker, but that's like claiming there were operating systems before Unix, etc.
I don't think technical debt is an insult. Docker is a fully-fledged platform, 90% of which is not in use in any Kubernetes cluster. It's bloat. That Swarm and everything else was baked into the platform along with the runtime makes it technical debt for those that necessarily have to support it.
I don't think anyone hates docker. But there is an issue of hubris here.
I feel like I'm missing some key insight here. To me, the tl;dr of this post is: Kubernetes removed the need for Docker just to run container images in a k8s cluster, ergo, Docker is dead.
But Kubernetes is super complicated. The author seems to assume that everybody wants to run everything in Kubernetes, but if I want to run some backend on some server (or maybe on a few servers), then it feels like extreme overkill.
Now, I'm no guru in this field at all. In fact, one thing I always liked about Docker is it made you feel able to atomically deploy software without having to become an expert at anything first. But what's the current "don't have to be an expert" way to ship software if Docker is, supposedly, dead? Do we all have to learn Kubernetes?
I've never used Swarm but I've been told that one thing it had going for it is that it allowed you to do smallish setups pretty easily. If that's true, then I'm sad it lost the popularity war.
The wording seems so negative. You could also say Docker is evolving to something bigger. And that's a good thing. The simple approach to containers and the workflows that come along with them are nothing short of revolutionary if you ask me, and it will only get better.
Docker compose is now an open standard, just like Dockerfile, and it's wildly popular still so it's here to stay. Multiple tools besides Docker are starting to implement it (like podman). There is no need to go switching everything to Kubernetes.
From my understanding that was due to a long-running design philosophy clash between the systemd people thinking Docker should be using systemd primitives to manage things like unit startup/shutdown/etc., and the docker people wanting to use their in-house implementations so as not to depend on systemd (and thus rejecting PRs trying to change docker behavior to use systemd). I don't think it's fair to use that as an example of toxic behavior on either side, they each had their motivations, and a consensus needed to be reached for both projects to proceed. From what I can tell that debate seems to be old news these days and I haven't seen as much clashing between those teams. I am not a developer on either side though, this is just from the perspective of a user who follows the Github issues.
I just want to make absolute sure that this is NOT the case. Does this mean that docker images built with `docker build` and pushed up to dockerhub will no longer be useable in kubernetes or not?