top | item 12623508

“I just want to run a container”

117 points | chaghalibaghali | 9 years ago | jvns.ca

27 comments

[+] dkarapetyan|9 years ago|reply
We do "just run containers" for our entire CI pipeline. It's all lxc/lxd and just a bunch of shell scripts to start/stop them. Works surprisingly well. So if you are just using containers as a sandboxed work runner then you don't need anything fancy. The issue is that I think people would like to pretend that containers are just like VMs and this is where things start to break down.

They're not VMs in the sense that none of the tried and true methods for orchestrating VMs is available. You need new solutions for networking, new solutions for storage, new failover patterns, new tools for clustering and organizing them, new application patterns, etc. Basically all the stuff that would have been handled by the hypervisor and the software defined networking layer is now all of a sudden in your face and you need some way to deal with it.

[+] cptskippy|9 years ago|reply
You're right but for the majority of enterprise developers this isn't a concern. Much of the time they do not understand the orchestration at all. If a configuration in an environment gets fudged then they'll either punt to someone else or stick their head in the sand and use it as an excuse at stand ups as to why they're not completing tasks.
[+] jacques_chester|9 years ago|reply
> As far as I can tell running containers without using Docker or Kubernetes or anything is totally possible today

It's been possible since before either of these existed. There are several container and orchestration systems that predate both.

My own pet faves are Garden (née Warden) and Diego, but that's probably because I work at the company (Pivotal) where they were born.

[+] madmax96|9 years ago|reply

> let's say all my 50 containers share a bunch of files (shared libraries like libc, Ruby gems, a base operating system, etc.). It would be nice if I could load all those files into memory just once, instead of 3 times.
Correct me if I'm wrong, but doesn't this seem like a poor use case for containers? It seems to me that one of the main points of containerization is to encapsulate the runtime dependencies of a process. If you undercut that by making two containers depend on the same runtime objects, then the point of containerization has been lost; you might as well go back to a virtual machine. That's not to say overlay networks and filesystems are never useful, just that you shouldn't use them to manage dependencies.

Under this architecture, what happens when I want to update my applications to use a new version of a shared library? I'm either forced to update all of my applications at once, or I have to modify the architecture and remove the shared dependency. That breaks the composability that containerization promises.

I think that this advice should be re-examined. I am by no means an expert, but this doesn't seem smart to me...

[+] dkarapetyan|9 years ago|reply
This is exactly how the overlay filesystem in docker works. You make a base container with common runtime dependencies and then you layer applications that require those same dependencies on top. The applications can be quite dissimilar. I don't see how the point of containerization has been lost. Just because the same kind of thing is hard to do without an overlay filesystem doesn't mean there is anything wrong with the approach.
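Docker's overlay storage drivers sit on the kernel's overlayfs; a minimal sketch of the underlying mechanism (paths and file names are made up for illustration, and the mount requires root) looks like:

```shell
# Assemble a merged view from a read-only base layer plus a
# per-container writable layer.
mkdir -p /tmp/demo/base /tmp/demo/upper /tmp/demo/work /tmp/demo/merged
echo "shared runtime" > /tmp/demo/base/libfoo.so

mount -t overlay overlay \
  -o lowerdir=/tmp/demo/base,upperdir=/tmp/demo/upper,workdir=/tmp/demo/work \
  /tmp/demo/merged

# Writes land in the upper layer; the base stays untouched and can
# back any number of containers.
echo "app-specific" > /tmp/demo/merged/app.conf
```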
[+] duck2|9 years ago|reply
I don't see why systemd is at the core of all those graphs. Why do we need that particular program to run containers? Or does systemd mean, in this context, "any daemon-controlling process"?
[+] darfs|9 years ago|reply
I think it's a process graph, and PID 1 is systemd on her machine. Edit: turns out she explains it: "[...] systemd: rkt will run a program called systemd-nspawn as the init (PID 1) process inside your container. This is because it can be bad to run an arbitrary process as PID 1 [0] -- your process isn't expecting it and might react badly. It also runs some systemd-journal process? I don't know what that's for yet. [...]"

[0] https://engineeringblog.yelp.com/2016/01/dumb-init-an-init-f...

[+] CSDude|9 years ago|reply
Because systemd is like a mafia in modern(!) Linux distros: you have to pay tribute to it by integrating with it, because it is an all-controlling daemon. Not trying to start a flamewar here, but like you, I really don't understand why we need systemd integration just to do anything.
[+] phantom_oracle|9 years ago|reply
Can anyone deeply involved in the hosting/ops field explain to me why LXC/LXD is ignored in favor of the other options?

I see the top comment (dkarapetyan) mentions it, but you never really see blog posts discussing how they scaled their LXC containers, etc.

[+] ams6110|9 years ago|reply
I use lxc in production, though admittedly my needs are small. I like it because it's there with Linux, can be managed with shell scripts or ansible and doesn't feel so much like I'm building on shifting sand like Docker or Kubernetes.
[+] ChoHag|9 years ago|reply
LXC/LXD is not fashionable.
[+] wyldfire|9 years ago|reply
> If I'm running 50 containers I don't want to have 50 copies of all my shared libraries in memory. That's why we invented dynamic linking!

BTW there's a cool feature called Kernel Samepage Merging [1] that was created for the sake of conserving memory consumed in virtualization or container use cases like this.

[1] https://www.kernel.org/doc/Documentation/vm/ksm.txt
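Worth noting that KSM only scans memory regions applications have opted in via madvise(MADV_MERGEABLE); turning the scanner on is just a sysfs write (requires root and a kernel built with CONFIG_KSM):

```shell
# Enable the KSM scanner and check how much it is deduplicating.
echo 1 > /sys/kernel/mm/ksm/run
cat /sys/kernel/mm/ksm/pages_sharing   # count of pages merged across processes
```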

[+] tadfisher|9 years ago|reply
nspawn + btrfs is my preferred solution to the "50 containers" problem. The incantation you want is:

    systemd-nspawn --template="/path/to/subvolume" <other args>
This creates a copy-on-write snapshot of the subvolume you supply, which is instantaneous. The --ephemeral flag can be used instead if you want the guest to run against the base filesystem but don't want its changes to persist across container boots.
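Spelled out end to end, that workflow is roughly the following (paths and the machine name are illustrative; assumes root, a btrfs-backed /var/lib/machines, and debootstrap installed):

```shell
# One-time setup: a base subvolume holding a minimal OS tree.
btrfs subvolume create /var/lib/machines/base
debootstrap stable /var/lib/machines/base

# Per container: snapshot the base (copy-on-write, so effectively instant).
systemd-nspawn -D /var/lib/machines/worker1 --template=/var/lib/machines/base

# Or run against the base directly, discarding all changes on exit.
systemd-nspawn -D /var/lib/machines/base --ephemeral
```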

Can someone describe what advantages rkt gives you over plain nspawn containers?

[+] ComputerGuru|9 years ago|reply
Probably the fact that it doesn't rely on btrfs. (Side note: having been spoiled by zfs on FreeBSD, my btrfs experience can best be summed up as "never again.")
[+] AlexandrP|9 years ago|reply
The picture from the article, especially the right part (docker > 1.11.0) -- is that true? [0]

I'm not a software architect, but when I see this, it seems to me that something is deeply wrong with the implementation or with the technology itself.

[0] http://jvns.ca/images/docker-rkt.png

[+] icebraining|9 years ago|reply
I don't know, it's the natural result of following the Unix philosophy: modularizing the system into multiple processes that do just one thing. I regularly run commands in my shell with more complex architectures (find + xargs + grep + ...).
[+] BrandoElFollito|9 years ago|reply
Since one will be running systemd-nspawn anyway, why not go native? The installation is easy (with debootstrap, for instance).
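A minimal native setup along those lines (the target path and mirror are just examples; requires root and debootstrap):

```shell
# Bootstrap a Debian tree, then enter it as a container -- no rkt or
# docker involved.
debootstrap stable /var/lib/machines/deb1 http://deb.debian.org/debian

systemd-nspawn -D /var/lib/machines/deb1       # shell inside the container
systemd-nspawn -b -D /var/lib/machines/deb1    # or boot it with its own init
```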