For example, your CI/CD agent is running in a container and has to build and test containerized software. You can use something like buildah to build your app containers without a container runtime, but you still need a runtime to run them for the tests.
Ideally, you would want to do this while remaining platform-agnostic, so your agents will be able to start a container no matter where they are themselves: on bare metal, in Podman, in Docker, in k8s.
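A minimal sketch of that split, assuming a Containerfile in the working directory (the image name and test script are placeholders):

```shell
# Build an image without a container daemon -- buildah writes to
# containers/storage directly, so no dockerd/podman service is needed.
buildah bud -t myapp:ci .

# Running the built image for tests is the part that needs a runtime.
# With podman available, this can work even from inside an unprivileged
# container, subject to how user namespaces are set up on the host.
podman run --rm myapp:ci ./run-tests.sh
```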
I was about to say the same thing. We currently use a podman container, deploy it in Kubernetes, build container images with it for CI, push them, and scale the podman container back down.
I've been reading every line of this blog for a week trying to get it to work, and failed.
The goal is container-driven development. But if all my dev tools are inside a container, how do I iterate on building new containers within a container? This is the core feature I wanted.
But unfortunately, this challenge proved way too difficult to pull off and nothing seemed to work for me.
My guess is these setups heavily depend on the host machine and how podman was set up on it, which breaks the entire value of container-driven dev.
I ship containers to customers. Would be nice to run tests against them before releasing a container and it would be nice if those tests themselves could run in a container in order to have a clean test environment.
You could create that test-environment image beforehand, mirror it, and use it for the job that needs it. No need to add another container within a container.
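One way to sketch that, with an illustrative internal registry name and a GitLab-style job shown for concreteness:

```shell
# One-time (or scheduled) job: bake the test environment into its own
# image and push it to an internal registry (names are illustrative).
buildah bud -f Containerfile.testenv -t registry.internal/ci/testenv:latest .
buildah push registry.internal/ci/testenv:latest

# The CI job then runs *as* that image instead of spawning a nested
# container, e.g. in GitLab CI:
#   test-job:
#     image: registry.internal/ci/testenv:latest
#     script: ./run-tests.sh
```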
Ever since we started moving to Docker it's been a massive pain. I'm on a maintenance team doing both legacy (non-Docker) and the occasional Docker-based system that gets handed to us (and then a few months later taken back, it's kind of annoying), and not a single member of my team has gotten the Docker-based systems to work. We all end up with different issues getting Docker to run, or for the different containers to communicate, or being able to browse to the app running in the container. Different systems from different teams also have a tendency to interfere with each other. Two years after this started I finally got an admission from the main development team that every new dev on their team has had similar problems.
On the flipside, LXD containers have "just worked" for everyone who has tried it so far. We've been able to use them on a whim for testing stuff, no problems at all.
So I've been wondering for a while if we could use LXD containers to provide a clean slate to run the Docker containers inside of, maybe then we'd at least all have the same problems, if not be able to solve them entirely.
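A rough sketch of that LXD setup (`security.nesting` is the relevant LXD config key; the image and container names are just examples):

```shell
# Launch an LXD container with nesting enabled, so a container runtime
# can run inside it.
lxc launch ubuntu:22.04 docker-sandbox -c security.nesting=true

# Install and use Docker inside the clean-slate container as usual.
lxc exec docker-sandbox -- sh -c "apt-get update && apt-get install -y docker.io"
lxc exec docker-sandbox -- docker run --rm hello-world
```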
I use MicroOS (https://microos.opensuse.org/). To keep the base operating system clean, you'd install the helper tools for constructing containers in a container... so two levels of containers would be very helpful.
Being able to run rootful stuff in rootless containers is nice for CI, where you need, for example, to install a bunch of stuff or mount things. Some things require being root, and you might not want to give real root access to your CI.
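A quick illustration with rootless podman (assuming podman is installed and user namespaces are configured on the host):

```shell
# As an unprivileged CI user, start a "rootful-looking" container.
podman run --rm alpine id -u   # prints 0: root inside the user namespace

# That uid 0 is mapped back to the calling user on the host, so package
# installs and mounts inside the container never touch real root.
podman unshare cat /proc/self/uid_map
```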
Here's another one. I have an app which spawns subprocesses to do computations. I would love to put the subprocesses in containers to constrain their resource usage (I know that's not the only way to do it, but it's an effective and well-understood way to do it). But I would also like to be able to run my application in a container!
While using containers is common and well-understood, containers within containers are not. It's novel enough to warrant this blog post on how to do it!
If all you want is resource constraints on your spawned processes, it's easier and more common to just use cgroups. It's straightforward and you should have a working understanding of cgroups anyway if you want to be effective at using containers, which are built on top of cgroups.
Cgroups are really easy to use and I feel like people aren't bothering to learn about them. :(
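A sketch of both routes on cgroup v2 (`./worker` is a placeholder; the manual route needs root or a delegated subtree):

```shell
# cgroup v2 by hand: make a group, set limits, move a process into it.
mkdir /sys/fs/cgroup/workers
echo "512M" > /sys/fs/cgroup/workers/memory.max
echo "50000 100000" > /sys/fs/cgroup/workers/cpu.max   # 50% of one CPU
echo $$ > /sys/fs/cgroup/workers/cgroup.procs          # move this shell in

# Or let systemd do the bookkeeping for one spawned process:
systemd-run --scope -p MemoryMax=512M -p CPUQuota=50% ./worker
```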
Building containers within a CI system, for example. CI jobs run as images, with a DinD service attached, which they can use to build images themselves.
But even if I run a GitHub Actions runner in a container, I can use buildah to build my image, right? So no need for another container. Why is another layer needed?
Over-complicating the engineering (Docker) involved in getting some x86 code into RAM and onto a CPU stack, by over-complicating further (DinD). It's essentially another way to abstract an abstraction, and engineers are finding ways to justify it.
It's not really over complicating anything, though? It makes perfect sense, it's not hard to understand. Nested VMs have been a thing for a looong time.
I honestly don't understand where the criticism for Docker and k8s comes from. Some of HN comments sound like old men shouting at the clouds because they don't understand modern technology and its purposes.
If you want, go right ahead with your managed, on-site VMs where you copy a .php file to an Apache server using a thumb drive and restart the service. I don't care. But that's not how modern teams are working.
In e.g. the website example, the parent podman container does everything from running the integration tests to converting a video of the test at the end into a slowed down GIF and combines it with screenshots to put in the docs. It orchestrates the child podman containers - playwright + the webapp.
I'm certain that if all this tooling were run on different host machines it would have a myriad of "works on my machine" problems all over the place - whether due to macOS, WSL, weird Linux distros, GitHub Actions idiosyncrasies or whatever.
I'm equally certain that if the tooling were bundled in the app container it would needlessly fatten it with unnecessary and potentially conflicting dependencies. I don't want video conversion tooling installed in my web app even if I do want it to generate my docs.
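Not the author's actual pipeline, but the video-to-GIF step could look roughly like this with ffmpeg (filenames and parameters are placeholders):

```shell
# Slow the test capture to half speed (setpts=2.0*PTS), drop the frame
# rate, and scale down before emitting a GIF for the docs.
ffmpeg -i test-run.webm \
       -vf "setpts=2.0*PTS,fps=10,scale=640:-1" \
       docs/test-run.gif
```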
actionfromafar|2 years ago
Either way, it will be done. Ask IBM. What's the use case for running OS360 VMs inside OS360 VMs?
We repeat the same patterns IBM discovered in the 1960s, but with more building materials.
qudat|2 years ago
https://bower.sh/opensuse-microos-container-dev
freesocket|2 years ago
https://ubuntu.com/tutorials/how-to-run-docker-inside-lxd-co...
If you use podman you can just use the default storage backend (zfs).
hitchdev|2 years ago
https://github.com/hitchdev/hitchstory/tree/master/examples