1. Mirantis did not acquire Docker Inc., they only bought Docker Enterprise. See https://techcrunch.com/2019/11/13/mirantis-acquires-docker-e... and https://www.docker.com/blog/docker-enterprise-edition/
2. k8s didn't remove dockershim for political reasons but because containerd was refactored out of Docker a long time ago and k8s wanted to get rid of the extra layer. See https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-o...
3. Rate limits have nothing to do with the container runtime. Podman also has to get images from somewhere. And CloudFront bills starting at $0.02/GB (assuming you pump 5PB+) have to be paid somehow. The rate limits were mostly in place to deny corporate CI users free access to the Hub and force them to pay or deploy a mirror.
4. Red Hat offers not only packages in RHEL but also support, and it makes sense that they will offer packaging and support only for Podman (a Red Hat project) going forward. This does not concern those of us who don't pay for RH support.
Having said that, Podman is a nice evolution of Docker. Though I am not sure how much I can trust the rest of the article given how the intro twisted so many facts.
> Instead of free use of Docker Desktop until now, this software suite is now available for rent after the transition phase until the end of January 2022, starting at $5 per user/month, provided it is for professional use.
> Here, Docker Desktop includes the Docker Engine, docker-cli, docker-compose and a credential helper, among others.
At least docker-compose (and probably also docker service + cli, since it is included in Debian) is FOSS. While they might be included in Docker Desktop, they are certainly available separately, so paying for the licence is in no way obligatory when using docker.
I wrote a comment for the general comment section, but wanted to respond to your comment as it contains valid criticism.
1. Mirantis did not acquire Docker Inc., they only bought Docker Enterprise...
>> You are right. That's a mistake.
2. k8s didn't remove dockershim for political reasons..
>> That is a valid point and the official story. Imho the acquisition nevertheless played an accelerating role here, since the announcement came relatively soon after it: Mirantis acquired Docker Enterprise in November 2019, and the end of dockershim support was announced in 2020. I've heard that from a few other people as well. BUT this is just rumor, so you might be right.
3. Rate limits have nothing to do with the container runtime..
>> This is 100% true. Nevertheless, Docker Hub is part of Docker, and the rate limit on the official registry is something that has made our customers switch to other registry providers or run their own container registry. So customers are moving away from a service that is part of the Docker-only ecosystem, and its usage in enterprises is decreasing.
4. RedHat support switched to Podman as it is one of their products..
>> It only makes sense for Red Hat to support Podman since it comes from their own product forge. You are right about that. That said, there are a lot of companies using RHEL and paying for it, which automatically shifts usage from Docker to Podman.
Last but not least, Red Hat would not have invented Podman if there were no need for a tool independent of Docker. Podman helps in some areas where Docker lacks features, such as support for pods, rootless mode, etc.
Thanks for your criticism! It is appreciated and helps us to do better.
> The rate limits were mostly in place to deny corporate CI users access to the Hub free of charge and force them to pay or deploy a mirror.
What I never understood is why they didn’t just properly handle this with mirrors like any package manager does; why is this a problem for docker, but not for yum / apt / etc?
I have to admit that these rate limits have accelerated my migration to alternatives like quay.io
Rootless podman is my first choice for using containers now; it works fantastically well in my experience. It's so much nicer to have all my container-related stuff like volumes, configs, the control socket, etc. in my home directory and standard user paths vs. scattered all over the system. Permission issues with bind mounts just totally disappear when you go rootless. It's so much easier and better than the root-privileged daemon.
I really wish rootless podman/docker was the default install now. It's still kind of annoying to set up, with reading a smattering of old docs and having to think about your distro setup, cgroups settings, etc. It really should just be a "run this install script and you're done".
The problem with rootless is that you don't get a native network stack, since setting up bridges and veth devices still requires some elevated capabilities. But instead of running as full root, this could be outsourced to a helper executable with some capabilities set (a narrower version of setuid).
> Permission issues with bind mounts just totally disappear when you go rootless.
Recent kernel versions have gained uid mapping capabilities on mounts. Hopefully future docker will make use of it. Then we can run entire containers as different users.
Are you saying that all files from your containers are owned by you as a user? If so, I will start investigating right now. It is so super annoying to download something with nzbget, for example, and then have to go through sudo to get to your downloaded files. It is indeed my major gripe with my docker compose setup atm.
Or just messing with an html file in the nginx docker bind mount, ugh! If podman solves that I'm going all in tomorrow.
The arch wiki is my go to every time I install Podman, and it’s a little easier every time. It’s down to like two steps now, with no file editing. We’ll get there.
If you are on Linux, there is the fantastic podman option "--userns keep-id" which will make sure the uid inside the container is the same as your current user uid.
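A tiny sketch of what that means (the podman invocation is left as a comment since it assumes rootless Podman is installed; the image name is arbitrary):

```shell
# With --userns keep-id the uid inside the container equals your host uid,
# so files created through a bind mount stay owned by you:
#   podman run --rm --userns keep-id -v "$PWD:/work" alpine touch /work/hello
#   ls -ln hello   # owned by your uid/gid, not root
id -u   # the host uid that keep-id preserves inside the container
```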
> Permission issues with bind mounts just totally disappear when you go rootless.
I have a problem with mounting a named volume foo in a container (at /foo) and bindfs-ing the underlying directory of that volume onto ${HOME}/foo with create-for parameters, so that when the host user touches files in it they are owned by host 1000:1000 but inside the container they are owned by 33:33.
Volume foo really contains only a unix socket. This unix socket is shared between the host and the container for xdebug communication.
So, this doesn't work, the container process can't write/read the socket even though it can manipulate other files in the mounted volumes /foo and they appear as owned by 1000:1000 on the host and vice versa.
But if I mount the volume directly like that: ${HOME}/foo:/foo then it works and the container can write to the socket and the host and the container can communicate both ways.
Would rootless podman allow me to use a named volume? Why doesn't it work like I think it should - is it because the unix socket lives in the kernel 'or something'? Maybe it's a question for SO.
What about UID issues? I remember using it years ago and sometimes having permission issues in containers when mounting local files. How is that nowadays?
I much prefer running this in a rootless manner also. What about docker compose? Is there an alternative for podman?
Since Podman 4.1 came out with full Compose 2.x compatibility, I'm running Podman on Docker's socket, but using Docker's CLI to talk to it, so that I can use the buildx and compose CLI plugins. It works great, Docker's CLI doesn't seem to have any clue that it's talking to not-Docker. I even have VSCode's Docker extension and Remote Containers working this way.
Is the compose support recent? Tried earlier this year, and it was not nearly there. And I use docker compose stuff as remote interpreter in IntelliJ/Pycharm stuff, that didn't work well with podman when I tested.
I don't really care what I use, I just want to be able to develop locally without spending days setting stuff up. Rootless or whatever means nothing to me. Docker compose have made that easy for lots of otherwise complicated projects. Just compose up, point editor at the image and ready to go.
On my system I have a very small `docker` script that selectively calls either `podman` or `buildah bud` depending on the first arg. The CLIs are completely compatible.
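A minimal sketch of such a shim as a shell function (the real script presumably handles more cases; `buildah bud` is Buildah's build-from-Dockerfile command):

```shell
# Hypothetical `docker` shim: route `docker build` to `buildah bud`,
# everything else straight to podman. Relies on the CLIs being
# flag-compatible, as noted above.
docker() {
  case "$1" in
    build) shift; buildah bud "$@" ;;
    *)     podman "$@" ;;
  esac
}
```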
I think it is rapidly moving towards being more of a data carrier/format rather than being dead per se.
Half the time you're jamming it into some cloud service anyway where you have no idea what GCP/fly/aws is using under the hood to actually run it.
Meaning this discussion is more relevant to the self-hosted context. In which case I'd say containerization isn't really security. So in my mind that residual risk of the daemon being root is inconsequential. (Or if not use a VM).
I'm still in the VM-all-the-things camp. Containers are neat, but VMs have the same cheapness for me - that is, deploy one VM per app, like one Docker container per app. Many times these days I run one VM for just one Docker package. (Can you tell VMs are my favorite isolation method?)
Just to be clear, privileged containers with CAP_SYS_ADMIN do have additional privileges beyond normal 'containerized' workloads; just having it in a container does not mean that the security side effects are inconsequential.
Kind of a weird take, to be honest. If containerization is not security, then not running as root should be an absolutely critical first step for managing risk.
Podman has a full Docker compatible API, so you just have to enable it, and then set the DOCKER_HOST to point to its socket. From there docker compose should work as if you had Docker.
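A sketch of that setup (the socket path is the standard rootless location under $XDG_RUNTIME_DIR; the systemctl line assumes a systemd-based distro):

```shell
# 1) Enable Podman's Docker-compatible API socket for your user (run once):
#      systemctl --user enable --now podman.socket
# 2) Point Docker clients at it:
export DOCKER_HOST="unix://${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/podman/podman.sock"
echo "$DOCKER_HOST"
# 3) Then e.g. `docker compose up -d` talks to Podman instead of Docker.
```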
Podman also is currently working on "podman machine", which can spin up a Linux VM to run Podman on macOS and Windows. I think it's still in beta or something, but it seems to be working already.
There are also things like Podman Desktop[0] and Podman Desktop Companion[1], which attempt to bring an experience similar to Docker Desktop to Podman.
[0] https://podman-desktop.io/
[1] https://iongion.github.io/podman-desktop-companion/
> Podman also runs on Mac and Windows, where it provides a native podman CLI and embeds a guest Linux system to launch your containers. This guest is referred to as a Podman machine and is managed with the `podman machine` command.
> ..On Mac, each Podman machine is backed by a QEMU based virtual machine.
> ..On Windows, each Podman machine is backed by a virtualized Windows Subsystem for Linux (WSLv2) distribution.
https://podman.io/getting-started/installation.html
I only use Podman for my workloads these days. Docker was always a headache for me on Linux. Podman allows me to quickly do whatever I want with containers and I can use systemd or a simple bash script to easily create services on my workstation or in production with Nomad with https://github.com/hashicorp/nomad-driver-podman
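That systemd pattern can be sketched as a user unit; the container name, image, and unit name below are made up, and newer Podman can also generate such units with `podman generate systemd`:

```shell
# Write a systemd user unit that keeps a hypothetical container "myapp"
# running and restarts it on failure:
unit="$HOME/.config/systemd/user/myapp.service"
mkdir -p "$(dirname "$unit")"
cat > "$unit" <<'EOF'
[Unit]
Description=myapp container (rootless Podman)

[Service]
ExecStart=/usr/bin/podman run --rm --name myapp docker.io/library/nginx:alpine
Restart=on-failure

[Install]
WantedBy=default.target
EOF
# Activate with: systemctl --user daemon-reload && systemctl --user enable --now myapp
echo "wrote $unit"
```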
I am super thankful for the team of developers that work on Podman. It has really come a long way since 2.0, and they are very responsive to issues in my experience. If you are using Linux as your daily driver and you use containers, give Podman a try. Here are some examples of the things I have done with Podman:
https://github.com/forem/selfhost
https://github.com/jdoss/ppngx
https://gist.github.com/jdoss/25f9dac0a616e524f8794a89b7989e...
https://gist.github.com/jdoss/ad87375b776178e9031685b71dbe37...
It's okay to stick with Docker if it works for us, right? There's nothing fundamentally wrong with it, right?
At the moment Podman is just more work for us, because I and the other devs don't have years of experience and intuition with Podman like we have with Docker. I'd rather just focus on business problems rather than another migration.
Similar feelings here. We use Docker without any issues beyond usual problems of a caliber that I guess any other tool would have.
I see some new life in Docker Desktop, and I also have a flawless experience with it on an M1 Mac. I even ignored all the recent hype to ditch Docker because it became a more transparent tool in my workflow; I forgot I use it.
That being said, docker blows. Docker desktop blows more. Docker desktop on Windows blows the most. I always get stuck with a bind mount misbehaving, or some other issue that requires me to wipe the docker desktop data to fix it. Just use docker compose for simple stuff, and stay away from docker for Windows if possible.
On the Docker for Windows note, it's not podman but after I had trouble with their "forced upgrade unless enterprise" policy (which made me update to a broken version with a known issue they didn't solve for weeks) I switched to Rancher Desktop and never looked back.
You get all the things (docker CLI, docker-compose, kubernetes via k3s) but it's FOSS and it doesn't feel like they're trying to shove a premium plan down your throat.
Given that docker modelled the OCI standard on their own software and expected everyone else to follow along, I think it would be nigh impossible for them to not be in compliance with the spec.
> blows. Docker desktop blows more. Docker desktop on Windows blows the most.
I see your "Docker blows on Windows" and raise you "Docker blows on M1." At least you have WSL, Apple couldn't give half a fuck (and nor could Docker). Colima has been a lifesaver, but having used WSL Docker in the past and Linux Docker recently (I now favor Podman), M1 Docker is a complete shitshow. I'm slowly infesting our codebase with Podman/Buildah/Skopeo (also because they do things Docker can't), and hopefully we'll get to the point of using podman machine.
On Linux podman is significantly different from docker: it uses user namespaces. So it is much more secure (assuming that security bugs related to Linux user namespaces, which have indeed been discovered, are still much rarer than people running untrusted container images). However, with security comes incompatibility. If the image does tricky system interaction instead of just running user-space code and some standard cases like opening a socket, chances are that things will not work under podman without modification.
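The mapping itself is visible in /proc; outside any user namespace it is the identity map over the whole uid range, while inside a rootless container it shows container root mapped to your host uid:

```shell
# On a plain host shell this prints the identity map, e.g. "0 0 4294967295".
# Inside a rootless Podman container you would instead see something like
#   0  1000  1   (container root is really host uid 1000)
cat /proc/self/uid_map
```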
What exactly does Docker Desktop do? All it seems to be to me is a slightly annoying non-free GUI application I need in order to use the docker CLI from Windows or Mac.
Docker is far, far, far from dead. I frankly do not see it going away for a looong time, because it is generally a great and well-supported product. There's lots of information on issues and fixes, and tons of developers using it across many large corporations.
Hello, it's Maximilian from fme - the author of the linked article. I was on vacation when this article blew up, and now I am pretty overwhelmed by the discussion that started here!
We are very happy that this post sparked such a nice and lively discussion about containers, Docker, Podman and all the other stuff between you and all the other professionals. We are also very grateful for all your criticism and corrections. The article is currently under review, and we want to make sure to correct the previous mistakes in a transparent way. It seems a few valid points were missed and some wrong assumptions were made. Nonetheless, I think the discussion that came out of this helped a lot of people get some useful insights into Docker, Podman, and containerization as a whole.
We are trying to get all the wrong information - based on your discussion - corrected and summarized in the revised article!
What can we take away from this? More research is necessary next time, and we need to challenge our own articles.
Again, thanks a lot for all your feedback! We appreciate it very much! There were a few new things that I was able to learn in this process as well, which is always nice.
> Podman currently only runs stably on Linux-based systems. Under Windows or MacOS it becomes a bit more demanding, although it is possible with detours.
I think this is an area needing improvement before most dev workflows can switch.
NixOS recently switched the default from docker to podman for containers. I don't know if it's Nix's configuration, or a problem with Podman itself, but it's unusable. The systemd service often hangs, creating networks errors out, the pod concept doesn't work great, requiring tearing down the whole group of containers to change port mappings, and it acts like a completely different system depending on the user running the container (though I suppose this last bit is supposedly a feature). Why isn't the list of images global to the machine? Perhaps I just don't get it yet.
> Why isn't the list of images global to the machine?
Why would it be? Unless you jump through hoops, your music and photos aren't global to the machine; rootless containers just made containers fit in the traditional unix security model.
It's also happening on Debian. Podman also quite often seems to lose its network firewall configuration after killing containers that were using cached images, so you are not able to access the internet from containers (the logs say it's a libpodman bug).
Podman always feels like software no one's using in prod. There are a lot of edge cases and bugs - particularly in service control, user groups, and IPv6 support.
People complain about framework or library churn in the JS world, but other ecosystems have the same issue. As someone who isn't following this ecosystem closely, I just want something that works and gets out of the way with a minimal learning curve.
Navigating fragmented ecosystems sure is a pain... I get that different tools serve different needs, but having to learn a bunch of different things just to figure out which one to use gets really exhausting. Especially when you have to do it for tools at every layer of your stack.
JS is a punching bag because it’s a moving target that most everyone has to deal with at some point and pre-ES6 JS had quite a few warts. It’s a fine language now, especially with TypeScript helping to push things in the right direction.
I left .NET land for this very reason. JS pales in comparison to .NET framework fatigue
Personally, I still find Docker to be the easiest way to get containers up and running - everything from Dockerfiles, building images (caching aside), to running them with Docker Compose, Docker Swarm or even Kubernetes with Docker as the runtime.
Why?
Docker - one of the older and most popular runtimes for OCI, with all of the tooling you might possibly want; most of the problems are known and solutions are easy to find, vs venturing "off the happy path" (not everyone has the resources to try and figure out Podman compatibility oddness)
Docker Compose - ubiquitous and perhaps the easiest way to launch a certain amount of containers on a host, be it a local development machine, or a remote server (in a single node deployment), none of the complexity of Kubernetes, no need for multiple docker run commands either
Docker Swarm - most projects out there do not need Kubernetes. Personally, I'm too poor to pay for a managed control plane or to host my own for a cluster. K3s and k0s are promising alternatives, but Docker Swarm uses the Compose specification, which is far easier to work with, and most of the time you can effortlessly take a docker-compose.yml based stack and run it on multiple nodes as needed. In contrast to Nomad, it also comes out of the box if you have Docker installed. And when you don't want to mess around with a Kubernetes ingress and somehow feeding certificates into it, you can instead just run your own Apache/Nginx/Caddy instance and manage it like a regular container on host ports 80/443 (setting that up can be a bit more difficult with Kubernetes, because by default you only get access to ports upwards of 30000)
Kubernetes with Docker as the runtime - maybe with something like K3s, if you need relatively lightweight Kubernetes but also want to figure out what is going on with individual containers through the Docker CLI which is familiar and easy to work with, to dig down vs what something like containerd would let you do
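To make the Compose point concrete, this is the scale of file being talked about - a minimal docker-compose.yml (service name and image are illustrative) that `docker compose up` runs on a single host and `docker stack deploy` can reuse on a Swarm:

```yaml
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
```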
Long story short, choose whatever is the best suited solution for your own needs and projects. Just want things to work and be pretty simple, modern technologies, hyperscalability and ecosystem be damned? Docker/Compose/Swarm. Want something with a bit more security and possibly even for running untrusted containers, with lots of scalability and projects built around the technologies? Podman/containerd/Kubernetes.
I've heard about Docker and Swarm being dead for years, yet they seem to work just fine. They even fixed the DNS weirdness on RPM distros (RHEL/Oracle Linux) in the 20.X releases, I think, though personally I'm more inclined towards using the second-latest Ubuntu LTS because there's far less SELinux or other weirdness to be had (e.g. K3s clusters failing to initialize because of changes to cgroups). When it actually dies for real, I'll just use something like https://kompose.io/ to migrate over from the Compose format to Kubernetes.
Of course, none of that excuses you from having to learn Kubernetes, because that's what the industry has decided on. My approach is more akin to basing a new project on PHP 7 because you know that you don't need anything more.
On a different note, your employers asking you to set up Kubernetes and to launch Nexus, PostgreSQL and whatever else on a single node that has 8 GB of RAM, as well as run a bunch of Java services on it, can be challenging to say the least - especially when the cloud is not in the cards, there are no pre-existing clusters in the org, and there isn't the interest to get more resources; even if there was, there'd also be thoughts along the lines of "why should we give this one project that many resources?". I'm slightly exaggerating, but oftentimes it can be akin to choosing to run Apache Kafka when RabbitMQ would have sufficed - someone else making the choice for you, pushing you into sub-optimal conditions and making you suffer as a result.
I recently went to Europe DevDays 2022 (https://devdays.lt/) and DevOps Pro Europe 2022 (https://devopspro.lt/) and one of the arguments expressed was along the lines of: "You should never host your own clusters, if you can. Just pay one of the big three platforms out there (AWS/GCP/Azure) to do it for you." What a crazy time to be alive, where running the full stack can be problematic and enterprise solutions are getting more and more detached from what smaller deployments and homelabs would actually need.
That said, Podman is getting closer and closer to full feature parity with Docker with every passing year and Kubernetes is also easier to run thanks to clusters like K3s/k0s/RKE and tools like Lens/k9s/Portainer/Rancher.
The article itself and the idea in general seem controversial at best to me.
I could not find an answer as to why Docker is anywhere close to dead, or why Podman is the thing I should use instead of Docker immediately.
Q 1: Docker has a policy change and your company may need to pay for it - if you have > 250 people / $10 million revenue.
A 1: Indie/solo devs are out of scope. Enterprises are probably fine with that anyway.
Q 2: Docker has limits for pulls from Docker Hub! You get 100 (200 with login) pulls per single IP per 6-hour interval.
A 2: As already mentioned, switching to Podman while still using Docker Hub doesn't magically help. Moreover, I find it practically fine for an indie/solo dev. For companies whose pull volume can be higher, you want (and have) a local registry in place anyway to ensure business continuity, so this doesn't bother you much.
Q 3: Running no background processes and running rootless is good because of ...
A 3: On a dev env (your local laptop, for example) you do not care much - your goal is ease of use. In production, running rootless raises questions for me:
* how do you expect the firewall (iptables) to be updated for port forwarding?
* how do you expect networks and bridges to be organized without root?
* how do you expect auto-restart of a container to happen on failure without something supervising it?
* some security advisories and mitigation guides recommend disabling user namespaces, and they were/are disabled by default in some distros https://news.ycombinator.com/item?id=28054823 - your security & system administration team may have such limits in place in production
* those who worry about an intruder getting into a container and hijacking the system further use Firecracker or a similar approach anyway [in production]
So what is left in "pros" for Podman, have I missed anything?
BTW, there's no alternative to docker compose in Podman; I've tried running my docker-compose file with podman-compose and it just failed outright. Currently I'm going to stick with Docker because of this alone.
How does Podman work with giving native access to resources? For example, on a Raspberry Pi with Docker you need to tell Docker it has the ability to access /sys for the GPIO pins, I2C, etc. Does Podman have this too?
The first thing that happens when visiting that site is a modal overlay that forces me to interact with it before I can do anything else, so that website is what's dead, at least to me.
tyingq|3 years ago
"Docker the company is having trouble monetizing their products...so I'm unsure about their future"
And, so the follow on of:
"Can I use compatible tools that don't depend on Docker, the company, as much?"
Makes some sense.
vocram|3 years ago
Another wrong thing: Podman directly controls the runtime (crun or runc); it does not talk to containerd like Docker does.
pjmlp|3 years ago
Personally, after dealing with Kubernetes YAML spaghetti, I'd rather deal with VM images, but unfortunately I don't get to dictate IT fashion.
djbusby|3 years ago
ungamedplayer|3 years ago
krageon|3 years ago
lapser|3 years ago
Podman has a fully Docker-compatible API, so you just have to enable it and then set DOCKER_HOST to point to its socket. From there docker compose should work as if you had Docker.
Podman is also currently working on "podman machine", which can spin up a Linux VM to run Podman on macOS and Windows. I think it's still in beta or something, but it seems to be working already.
There are also things like Podman Desktop[0] and Podman Desktop Companion[1] which attempt to bring an experience similar to Docker Desktop to Podman.
[0] https://podman-desktop.io/
[1] https://iongion.github.io/podman-desktop-companion/
lioeters|3 years ago
> Podman also runs on Mac and Windows, where it provides a native podman CLI and embeds a guest Linux system to launch your containers. This guest is referred to as a Podman machine and is managed with the `podman machine` command.
> ..On Mac, each Podman machine is backed by a QEMU based virtual machine.
> ..On Windows, each Podman machine is backed by a virtualized Windows System for Linux (WSLv2) distribution.
https://podman.io/getting-started/installation.html
jdoss|3 years ago
I am super thankful for the team of developers that work on Podman. It has really come a long way since 2.0 and they are very responsive to issues in my experiences. If you are using Linux as your daily driver and you use Containers give Podman a try. Here are some examples of the things I have done with Podman.
https://github.com/forem/selfhost
https://github.com/jdoss/ppngx
https://gist.github.com/jdoss/25f9dac0a616e524f8794a89b7989e...
https://gist.github.com/jdoss/ad87375b776178e9031685b71dbe37...
mekster|3 years ago
iknownothow|3 years ago
At the moment Podman is just more work for us, because I and the other devs don't have the years of experience and intuition about Podman that we have with Docker. I'd rather just focus on business problems than on another migration.
raesene9|3 years ago
jarek83|3 years ago
I see some new life in Docker Desktop, and I also have a flawless experience with it on my M1 Mac. I even ignored all the recent hype to ditch Docker, because it became such a transparent tool in my workflow that I forgot I was using it.
zamalek|3 years ago
tragictrash|3 years ago
https://opencontainers.org/
That being said, docker blows. Docker desktop blows more. Docker desktop on Windows blows the most. I always get stuck with a bind mount misbehaving, or some other issue that requires me to wipe the docker desktop data to fix it. Just use docker compose for simple stuff, and stay away from docker for Windows if possible.
Hamcha|3 years ago
You get all the things (docker CLI, docker-compose, kubernetes via k3s) but it's FOSS and it doesn't feel like they're trying to shove a premium plan down your throat.
Grimburger|3 years ago
zamalek|3 years ago
I see your "Docker blows on Windows" and raise you "Docker blows on M1." At least you have WSL, Apple couldn't give half a fuck (and nor could Docker). Colima has been a lifesaver, but having used WSL Docker in the past and Linux Docker recently (I now favor Podman), M1 Docker is a complete shitshow. I'm slowly infesting our codebase with Podman/Buildah/Skopeo (also because they do things Docker can't), and hopefully we'll get to the point of using podman machine.
usr1106|3 years ago
On Linux podman is significantly different from docker: it uses user namespaces, so it is much more secure (assuming that security bugs related to Linux user namespaces, which have indeed been discovered, are still much rarer than people running untrusted container images). However, with security comes incompatibility: if the image performs tricky system interaction, rather than just running user-space code and standard cases like opening a socket, chances are that things will not work under podman without modification.
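The remapping behind this is visible in the kernel's uid_map file (standard Linux, nothing Podman-specific). In the initial namespace it's an identity mapping; inside a rootless Podman container, container UID 0 maps back to your unprivileged host UID instead:

```shell
# Each line of uid_map has three fields:
#   <UID inside this namespace> <UID in the parent namespace> <range length>
cat /proc/self/uid_map
```

Running the same command inside `podman unshare` or a rootless container shows the non-identity mapping that makes "root in the container" harmless on the host.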
pid-1|3 years ago
I think Docker folks are heavily focused on providing great local development experiences, which is a niche few other products are covering.
teruakohatu|3 years ago
papito|3 years ago
https://github.com/bcicen/ctop
bobobob420|3 years ago
redisman|3 years ago
hiroshui|3 years ago
We are very happy that this discussion took place and that it sparked a lively exchange about containers, Docker, Podman and all the other stuff among you and the other professionals. We are also very grateful for all your criticism and corrections. The article is currently under review and we want to make sure to correct the previous mistakes in a transparent way. It seems a few valid points were missed and some wrong assumptions were made. Nonetheless, I think the discussion that came out of this helped a lot of people get some useful insights into Docker, Podman, and containerization as a whole.
We are trying to get all the wrong information - based on your discussion - corrected, and to summarize the fixes in the refactored article!
What can we take away from this? More research is necessary next time, and we need to challenge our own articles.
Again, thanks a lot for all your feedback! We appreciate it very much! There were a few new things that also I was able to learn in this process which is always nice.
hiroshui|3 years ago
Have a good one!
bdcravens|3 years ago
I think this is an area that needs improvement before most dev workflows can switch
mekster|3 years ago
pelorat|3 years ago
j4hdufd8|3 years ago
cpach|3 years ago
hamilyon2|3 years ago
mati365|3 years ago
jdoss|3 years ago
colordrops|3 years ago
yjftsjthsd-h|3 years ago
Why would it be? Unless you jump through hoops, your music and photos aren't global to the machine; rootless containers just made containers fit in the traditional unix security model.
mati365|3 years ago
bob778|3 years ago
TheAceOfHearts|3 years ago
Navigating fragmented ecosystems sure is a pain... I get that different tools serve different needs, but having to learn a bunch of different things just to figure out which one to use gets really exhausting. Especially when you have to do it for tools at every layer of your stack.
olingern|3 years ago
I left .NET land for this very reason. JS pales in comparison to .NET framework fatigue
studmuffin650|3 years ago
ravenstine|3 years ago
https://github.com/lima-vm/lima
If memory serves me right, it works with Podman.
KronisLV|3 years ago
Why?
Docker - one of the older and most popular runtimes for OCI, with all of the tooling you might possibly want; most of the problems are known and solutions are easy to find, vs venturing "off the happy path" (not everyone has the resources to try and figure out Podman compatibility oddness)
Docker Compose - ubiquitous and perhaps the easiest way to launch a certain amount of containers on a host, be it a local development machine, or a remote server (in a single node deployment), none of the complexity of Kubernetes, no need for multiple docker run commands either
Docker Swarm - most projects out there do not need Kubernetes. Personally, i'm too poor to pay for a managed control plane or to host my own for a cluster. K3s and k0s are promising alternatives, but Docker Swarm also uses the Compose specification, which is far easier to work with, and most of the time you can effortlessly set up a docker-compose.yml based stack to run on multiple nodes as needed. Also, in contrast to Nomad, it comes out of the box if you have Docker installed. And when you don't want to mess around with a Kubernetes ingress and somehow feeding certificates into it, you can instead just run your own Apache/Nginx/Caddy instance and manage it like a regular container with host ports 80/443 (setting that up might be a bit more difficult with Kubernetes, because by default you only get access to ports upwards of 30000)
Kubernetes with Docker as the runtime - maybe with something like K3s, if you need relatively lightweight Kubernetes but also want to figure out what is going on with individual containers through the Docker CLI which is familiar and easy to work with, to dig down vs what something like containerd would let you do
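To illustrate the point about the Compose specification carrying over to Swarm, here is a sketch of a docker-compose.yml (image, ports and service name are just placeholders) that `docker compose up` runs on a single machine and `docker stack deploy -c docker-compose.yml mystack` runs across Swarm nodes:

```yaml
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:            # honored by Swarm; plain Compose applies it only partially
      replicas: 2
```

The same file serving both the local and the clustered case is a big part of why Swarm stacks feel so much lighter than Kubernetes manifests.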
Long story short, choose whatever is the best suited solution for your own needs and projects. Just want things to work and be pretty simple, modern technologies, hyperscalability and ecosystem be damned? Docker/Compose/Swarm. Want something with a bit more security and possibly even for running untrusted containers, with lots of scalability and projects built around the technologies? Podman/containerd/Kubernetes.
I've heard about Docker and Swarm being dead for years, yet it seems to work just fine. They even fixed the DNS weirdness on RPM distros (RHEL/Oracle Linux) in the 20.X releases i think, though personally i'm more inclined towards using the second-latest Ubuntu LTS because there's far less SELinux or other weirdness to be had (e.g. K3s clusters failing to initialize because of changes to cgroups). When it will actually die for real, i'll just use something like https://kompose.io/ to migrate over from the Compose format to Kubernetes.
Of course, none of that excuses you from having to learn Kubernetes, because that's what the industry has decided on. My approach is more akin to basing a new project on PHP 7 because you know that you don't need anything more.
On a different note, your employers asking you to set up Kubernetes and to launch Nexus, PostgreSQL and whatever else on a single node that has 8 GB of RAM, as well as run a bunch of Java services on it, can be challenging to say the least - especially when the cloud is not in the cards, there are no pre-existing clusters in the org, there isn't the interest to get more resources, and even if there was, then there'd also be thoughts along the lines of "why should we give this one project that many resources?". I'm slightly exaggerating, but oftentimes it can be akin to choosing to run Apache Kafka when RabbitMQ would have sufficed - someone else making the choice for you, pushing you into sub-optimal conditions and making you suffer as a result.
I recently went to Europe DevDays 2022 (https://devdays.lt/) and DevOps Pro Europe 2022 (https://devopspro.lt/) and one of the arguments expressed was along the lines of: "You should never host your own clusters, if you can. Just pay one of the big three platforms out there (AWS/GCP/Azure) to do it for you." What a crazy time to be alive, where running the full stack can be problematic and enterprise solutions are getting more and more detached from what smaller deployments and homelabs would actually need.
That said, Podman is getting closer and closer to full feature parity with Docker with every passing year and Kubernetes is also easier to run thanks to clusters like K3s/k0s/RKE and tools like Lens/k9s/Portainer/Rancher.
lolcat_cowsay|3 years ago
throwawei369|3 years ago
encryptluks2|3 years ago
flerp|3 years ago
nanna|3 years ago
scroot|3 years ago
CoolCold|3 years ago
I could not find answer for myself why Docker is any close to be dead and why Podman is the thing I should use instead of Docker immediately.
Q1: Docker has a policy change and your company may need to pay for it - if you have > 250 people / $10 million revenue.
A1: Indie/solo devs are out of scope, and enterprises are probably fine with that anyway.
Q2: Docker has limits for pulls from Docker Hub! You get 100 (200 with login) downloads per IP per 6-hour interval.
A2: As already mentioned, switching to Podman while still pulling from Docker Hub doesn't magically help. Moreover, in practice I find the limits totally fine for an indie/solo dev. Companies whose pull volume is higher want a local registry in place anyway to ensure business continuity, so this doesn't bother them much.
Q3: Running no background processes and running rootless is good because of...?
A3: On a dev environment (your local laptop, for example) you don't care much - your goal is ease of use. In production, running rootless raises questions for me:
* How do you expect the firewall (iptables) to be updated for port forwarding?
* How do you expect networks and bridges to be organized without root?
* How do you expect auto-restart of a failed container to happen without something supervising it?
* Some security advisories and mitigation guides recommend disabling user namespaces, which was/is disabled by default in some distros (https://news.ycombinator.com/item?id=28054823) - your security & system administration team may have such limits in place in production.
* Those who worry about an intruder getting into a container and hijacking the system further use Firecracker or a similar approach anyway [for production].
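On the port-forwarding point specifically: rootless Podman does its networking in user space (slirp4netns, or pasta in newer versions) rather than by programming host iptables, and like any unprivileged process it cannot bind low ports by default. A quick check of the kernel threshold involved (the path is standard Linux, not Podman-specific):

```shell
# Ports below this value require privilege to bind; 1024 on stock kernels.
cat /proc/sys/net/ipv4/ip_unprivileged_port_start
# Common workarounds: a root-owned sysctl override, e.g.
#   sysctl net.ipv4.ip_unprivileged_port_start=80
# or publishing a high port and fronting it with a reverse proxy.
```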
So what is left in "pros" for Podman, have I missed anything?
lolcat_cowsay|3 years ago
lloydatkinson|3 years ago
CottonMcKnight|3 years ago
akagusu|3 years ago
The market changed and Docker changed to stay relevant. Just that.
lkxijlewlf|3 years ago
user3939382|3 years ago
lapser|3 years ago
js4ever|3 years ago
[deleted]
pjmlp|3 years ago
matyasrichter|3 years ago