top | item 30385580

LXC vs. Docker

217 points | lycopodiopsida | 4 years ago | earthly.dev | reply

143 comments

[+] buybackoff|4 years ago|reply
LXC via Proxmox is great for stateful deployments on baremetal servers. It's very easy to backup entire containers with the state (SQLite, Postgres dir) to e.g. NAS (and with TrueNAS then to S3/B2). Best used with ZFS raid, with quotas and lazy space allocation backups are small or capped.

Nothing stops one from running Docker inside LXC. For development I usually just make a dedicated privileged LXC container with nesting enabled, to avoid some known issues and painful config. LXC containers can sit on a private network, with a reverse proxy on the host mapping only the required ports, without worrying about which ports Docker (or you yourself) might have accidentally made public.
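The setup described above can be sketched with Proxmox's `pct` tool; the VMID, storage name, and template path here are placeholders:

```shell
# Create a privileged container (unprivileged=0) from a Debian template.
pct create 200 local:vztmpl/debian-11-standard_11.0-1_amd64.tar.gz \
  --hostname dev-docker \
  --storage local-zfs \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 0

# Enable nesting (keyctl is also commonly needed by Docker)
# so a Docker daemon can run inside the container.
pct set 200 --features nesting=1,keyctl=1
pct start 200
```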

[+] ansible|4 years ago|reply
We do something similar with btrfs as the filesystem. There have been some issues with btrfs itself, but the LXC side of this has worked pretty well. Any significant storage (such as project directories) is done with a bind mount into the container, so that it is easy to separately snapshot the data or have multiple LXC containers on the same host access the same stuff. That was more important when we were going to run separate LXC containers for NFS and Samba fileservers, but we ended up combining those services into the same container.
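A bind mount like the one described can be sketched like this (VMID and paths invented):

```shell
# Proxmox: expose /srv/projects on the host as /projects inside
# container 101, using the first mount-point slot (mp0).
pct set 101 --mp0 /srv/projects,mp=/projects

# Plain LXC equivalent: a line in the container's config file.
#   lxc.mount.entry = /srv/projects projects none bind 0 0
```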
[+] lostlogin|4 years ago|reply
Good comment. It was a revelation to me when I used Proxmox and played with LXCs. Getting an IP per container is really nice.
[+] petre|4 years ago|reply
It's annoying that you can only make snapshots of a stopped container. With VMs, snapshots work while the VM is running.
[+] ignoramous|4 years ago|reply
> LXC via Proxmox is great for stateful deployments on baremetal

Reminds me of (now defunct?) flockport.com

They had some interesting demos up on YouTube, showcasing what looked like a sandstorm.io esque setup.

[+] LilBytes|4 years ago|reply
You've basically described my homelab set up here.

Proxmox, a few LXC, each with their own containerisation running.

[+] dottedmag|4 years ago|reply
Apples to oranges.

LXC can be directly compared with only a small, and quite insignificant, part of Docker: the container runtime. Docker did not become popular because it can run containers; many tools before Docker could do that (LXC included).

Docker became popular because it allows one to build, publish and then consume containers.

[+] unixhero|4 years ago|reply
LXC has been so stable and great to work with for many years. I have had services in production on LXC containers and it has been a joy. I can not say the same about things I have tried to maintain in production with Docker, in which I had similar experiences to [0], albeit around that time and therefore arguably not recently.

For a fantastic way to work with LXC containers I recommend the free and open Debian based hypervisor distribution Proxmox [1].

[0] https://thehftguy.com/2016/11/01/docker-in-production-an-his...

[1] https://www.proxmox.com/en/proxmox-ve

[+] yokem55|4 years ago|reply
LXD (Canonical's daemon/API front end to lxc containers) is great -- as long as you aren't using the god awful snap package they insist on. The snap is probably fine for single dev machines, but it has zero place in anything production. This is because canonical insists on auto-updating and refreshing the snap at random intervals, even when you pin to a specific version channel. Three times I had to manually recover a cluster of lxd systems that broke during a snap refresh because the cluster couldn't cope with the snaps all refreshing at once.

Going forward we built and installed lxd from source.

[+] alyandon|4 years ago|reply
I got so annoyed with snapd that I finally patched the auto-update functionality to provide control via environment variable. It's ridiculous that this is what I have to personally go through in order to maintain control of when updates are applied on my own systems.

If enough people were to ever decide to get together and properly fork snapd and maintain the patched version I'd totally dedicate time to helping out.

https://gist.github.com/alyandon/97813f577fe906497495439c37d...

[+] stingraycharles|4 years ago|reply
Makes you wonder whether Canonical has any idea about operating servers. Auto-updating packages is the last thing you want. Doing that for a container engine, without building in some jitter to avoid the scenario you described is absolutely insane.

Who even uses snap in production? If I squint my eyes I can see the use for desktops, but why insist on it for server technologies as well?

[+] CSDude|4 years ago|reply
I had a huge argument in 2015 with a guy who wanted to move our every custom .deb package (100+) to Snap, because they had talked with Canonical and it would be the future, Docker would be obsolete. The main argument was to make distribution easier for worker/headless/server machines. Not that Docker is a direct replacement, but Snap is an abomination. The packages are mostly out of date, most of them require system privileges, they're unstable, and the way they mount a compressed rootfs makes startup very slow, even on a good machine.

That all being said, LXD is a great way to run non-ephemeral containers that behave more like a VM. Also check out Multipass, by Canonical, which makes spinning up Ubuntu VMs as easy as Docker.

[+] warent|4 years ago|reply
On my Ubuntu 20 server, I tried setting up microk8s with juju using LXD and my god the experience was horrendous. One bug after another after another after another after another. Then I upgraded my memory and somehow snap/LXD got perma stuck in an invalid state. The only solution was to wipe and purge everything related to snap/LXD.

After that I setup minikube with a Docker backend. It all worked instantly, perfectly aligned with my mental model, zero bugs, zero hassle. Canonical builds a great OS, but their Snap/VM org is... not competitive.

[+] pkulak|4 years ago|reply
Not even kidding, a huge part of what made me move to Arch was that it's one of the few distros that packages LXD. Apparently it's a pain, but I'm forever grateful!
[+] rlpb|4 years ago|reply
> This is because canonical insists on auto-updating and refreshing the snap at random intervals, even when you pin to a specific version channel.

You can control snap updates to match your maintenance windows, or just defer them. Documentation here: https://snapcraft.io/docs/keeping-snaps-up-to-date#heading--...

What you cannot do without patching is defer an update for more than 90 days. [Edit: well, you sort of can, by bypassing the store and "sideloading" instead: https://forum.snapcraft.io/t/disabling-automatic-refresh-for...]
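The controls mentioned above look roughly like this (the dates and window are examples):

```shell
# Confine refreshes to a monthly maintenance window
# (last Sunday of the month, 23:00-01:00).
sudo snap set system refresh.timer=last-sun,23:00-01:00

# Or hold all refreshes until a given date (capped at ~90 days out).
sudo snap set system refresh.hold="2022-06-01T00:00:00Z"

# Check when the next refresh is scheduled.
snap refresh --time
```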

[+] whitepoplar|4 years ago|reply
Just curious--how do you use LXD in production? It always struck me as something very neat/useful for dev machines, but I had trouble imagining how it would improve production workloads.
[+] conradfr|4 years ago|reply
Maybe two years ago I wanted to use LXD on a fresh Ubuntu server (after testing it locally).

First they had just moved it to Snap which was not a great install experience compared to good old apt-get, and then all my containers had no IPv4 because of systemd for a reason I can't remember.

After two or three tries I just gave up, installed CapRover (still in use today) and have not tried again since.

[+] rambojazz|4 years ago|reply
I was bitten by LXD auto-updating as well. Server was down and I couldn't understand why since I hadn't changed anything.
[+] _448|4 years ago|reply
> The snap is probably fine for single dev machines

It is not good even on single dev machines.

[+] rajishx|4 years ago|reply
I am a simple man with simple needs; I am perfectly happy with a distro as long as I have my editor, my terminal and my browser.

I could not bear the snaps on Ubuntu always coming back after every update and being hard to disable, so I gave up, switched to Arch, and am happy to have control over my system again.

I also had a lot of crashes on Ubuntu when running a huge Rust-based test suite doing a lot of IO (on btrfs); never had that issue on Arch. Not sure why, and not sure how I could even debug it (full freeze, nothing in the systemd logs), so I guess I just gave up.

[+] anothernewdude|4 years ago|reply
Canonical always backs the wrong horse. Unity, Snap, Mir, Upstart, etc. etc.
[+] baggy_trough|4 years ago|reply
Yeah, it's truly terrible. I've had downtime from this as well.
[+] lasftew|4 years ago|reply
My home server runs NixOS, which is an amazing server operating system: every service is configured in code and fully versioned. I also use this server for development (via SSH), but while NixOS can be used for development, its relationship with VS Code, its plugins, and many native build tools (Go, Rust) is very complicated, and I prefer not to do everything the Nix way, which is usually convoluted and poorly documented.

LXD is my perfect fit in this scenario: trivial to install on top of Nixos, and once running, allows for launching some minimal development instances of whatever distro flavor of the day in a few seconds. Persistent like a small VM, but booting up within seconds, much more efficient on resources (memory in particular), and - unlike docker - with the full power of systemd and all. Add tailscale and sshd to the mix, for easy, secure and direct remote access to the virtualized system.
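The workflow described above might look roughly like this (the image alias and instance name are examples):

```shell
# Launch a throwaway Debian dev instance; it boots in seconds.
lxc launch images:debian/11 dev-env

# Get a shell inside it.
lxc exec dev-env -- bash

# Persistent like a small VM: state survives stop/start.
lxc stop dev-env && lxc start dev-env
```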

[+] adamgordonbell|4 years ago|reply
I like the docker way of one thing, one process, per container. LXC seems a bit different.

However, an exciting thing to me is the Cambrian explosion of alternatives to docker: podman, nerdctl, even lima for creating a linux vm and using containerd on macos looks interesting.

[+] umvi|4 years ago|reply
Docker can have N processes per container though, just depends how you set up your image
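One common (if crude) pattern for this is a wrapper entrypoint that starts several processes; `worker` and `server` here are hypothetical binaries, and a supervisor such as supervisord is the more robust option:

```shell
#!/bin/sh
# entrypoint.sh: start a background worker, then exec the main
# server in the foreground so the container's lifetime follows it.
worker --config /etc/worker.conf &
exec server --listen 0.0.0.0:8080
```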
[+] merlinscholz|4 years ago|reply
I recently started using containerd inside Nomad, a breath of fresh and simple air after failed k8s setups!
[+] wanderr|4 years ago|reply
That seems weird for some stacks though, like nginx, php-fpm, php. At least I still haven't wrapped my head around what's the right answer for the number of containers involved there.
[+] istoica|4 years ago|reply
The perfect pair

Containerfile vs Dockerfile - Infra as code

podman vs docker - https://podman.io

podman desktop companion (author here) vs docker desktop ui - https://iongion.github.io/podman-desktop-companion

podman-compose vs docker-compose = there should be no vs here, docker-compose itself can use podman socket for connection OOB as APIs are compatible, but an alternative worth exploring nevertheless.

Things are improving at a very fast pace, the aim is to go way beyond parity, give it a chance, you might enjoy it. There is continuous active work that is enabling real choice and choice is always good, pushing everyone up.
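The docker-compose-over-podman-socket combination mentioned above can be sketched like this for a rootless setup:

```shell
# Enable Podman's Docker-compatible API socket for the current user.
systemctl --user enable --now podman.socket

# Point docker-compose (and the docker CLI) at the Podman socket.
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

# docker-compose now talks to Podman instead of a Docker daemon.
docker-compose up -d
```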

[+] melenaboija|4 years ago|reply
I use LXC containers as my development environments.

When I changed my setup from expensive Mac Books to an expensive work station with a cheap laptop as front end to work remotely this was the best configuration I found.

It took me a few hours to get everything running but I love it now. Starting a new project means creating a new container and adding a rule to iptables, and I have it ready in a few seconds.

[+] dijit|4 years ago|reply
FWIW I do the same thing but with docker.

Exposing the docker daemon on the network and setting DOCKER_HOST I’m able to use the remote machine as if it was local.

It’s hugely beneficial, I’ve considered making mini buildfarms that load balance this connection in a deterministic way.
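The remote-daemon trick above is just an environment variable; the hostname here is a placeholder, and `ssh://` (supported since Docker 18.09) avoids exposing the daemon's TCP port directly:

```shell
# Talk to the Docker daemon on a remote machine over SSH.
export DOCKER_HOST=ssh://user@build.example.com

docker ps              # lists containers on the remote machine
docker build -t app .  # the build context is sent over the connection
```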

[+] bamboozled|4 years ago|reply
One major limitation of LXC is that there is no way to easily self-host images. And the official images for many distributions are often buggy; for example, the official Ubuntu images seem to come with a raft of known issues.

Based on my limited interactions with it, I'd recommend staying away from LXC unless absolutely necessary.

[+] fuzzy2|4 years ago|reply
I’ve been using LXC as a lightweight “virtualization” platform for over 5 years now, with great success. It allows me to take existing installations of entire operating systems and put them in containers. Awesome stuff. On my home server, I have a VNC terminal server LXC container that is separate from the host system.

Combined with ipvlan I can flexibly assign my dedicated server’s IP addresses to containers as required (MAC addresses were locked for a long time). Like, the real IP addresses. No 1:1 NAT. Super useful also for deploying Jitsi and the like.

I still use Docker for things that come packaged as Docker images.

[+] heresie-dabord|4 years ago|reply
> It allows me to take existing installations of entire operating systems and put them in containers

Friend, do you have documentation for this process? Please share your knowledge. ^_^
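Not from the parent poster, but the general physical-to-container approach is: copy the root filesystem into an LXC rootfs and write a minimal config. A rough sketch with invented paths and names (real migrations also need fstab, network, and getty adjustments):

```shell
# Copy an existing install into a new container's rootfs,
# excluding pseudo-filesystems and runtime state.
mkdir -p /var/lib/lxc/migrated/rootfs
rsync -aHAX --numeric-ids \
  --exclude={/proc/*,/sys/*,/dev/*,/run/*,/tmp/*} \
  olduser@oldhost:/ /var/lib/lxc/migrated/rootfs/

# Minimal container config (network setup omitted).
cat > /var/lib/lxc/migrated/config <<'EOF'
lxc.uts.name = migrated
lxc.rootfs.path = dir:/var/lib/lxc/migrated/rootfs
EOF

lxc-start -n migrated
```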

[+] sickygnar|4 years ago|reply
I never hear systemd-nspawn mentioned in these discussions. It ships and integrates with systemd and has a decent interface with machinectl. Does anyone use it?
[+] numlock86|4 years ago|reply
> I never hear systemd-nspawn mentioned in these discussions. It ships and integrates with systemd and has a decent interface with machinectl.

I couldn't have said it better. And yes, I use it. Also in production systems.
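For anyone curious, the basic systemd-nspawn workflow looks like this ("demo" is a placeholder name):

```shell
# Bootstrap a minimal Debian tree (debootstrap is one option).
debootstrap stable /var/lib/machines/demo

# Boot it as a container with its own init.
systemd-nspawn -D /var/lib/machines/demo -b

# Or manage it as a registered machine via machinectl.
machinectl start demo
machinectl shell demo
```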

[+] goombacloud|4 years ago|reply
The big missing feature is the ability to pull Docker images and run them without resorting to hacks.
[+] leephillips|4 years ago|reply
That’s what I use whenever I need a container. So simple and flexible.
[+] password4321|4 years ago|reply
Is it accurate to say LXC is to Docker as git is to GitHub, or vim/emacs vs. Visual Studio Code?

I haven't seen many examples demonstrating the tooling used to manage LXC containers, but I haven't looked for it either. Docker is everywhere.

[+] sarusso|4 years ago|reply
I recently wrote something to clarify my mind around all this [1]. If by Docker we mean the Docker engine, then I think you can compare them as you said (though more like vim/emacs vs. Visual Studio Code, since Git is a technology while GitHub is a platform).

But Docker is many things: a company, a command line tool, a container runtime, a container engine, an image format, a registry...

[1] https://sarusso.github.io/blog_container_engines_runtimes_or...

[+] haolez|4 years ago|reply
In the first months of Docker, yes. Nowadays, they are different beasts.
[+] throwawayboise|4 years ago|reply
lxc launch, lxc list, lxc start, lxc stop, etc....

That's all I've ever needed. Docker is overkill if you just need to run a few containers. There is a point where it makes sense but running a few containers for a small/personal project is not it.
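A typical session with those commands (the image alias and the name "web" are examples):

```shell
lxc launch ubuntu:20.04 web          # create and start a container
lxc list                             # show containers and their IPs
lxc exec web -- apt install -y nginx # run a command inside it
lxc stop web                         # shut it down; state is kept
```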

[+] ricmm|4 years ago|reply
LXC and Docker comparisons vastly differ depending on the use case and problem segment. I use LXC as a tiny, C-only library to abstract namespaces and cgroups for embedded usage [1]

LXC is a fantastic userland library to easily consume kernel features for containerization without all the noise around it… but the push for the LXD scaffolding around it missed the mark. It should’ve just been a great library and that’s how we use it when running containers on embedded Linux equipment

[1] https://pantacor.com/blog/lxc-vs-docker-what-do-you-need-for...

[+] micw|4 years ago|reply
A while ago, I spent some time making LXC run in a Docker container. The idea is to have a stateful system managed by LXC run inside a Docker environment, so that K8s management (e.g. Volumes, Ingress and Load Balancers) can be used for the LXC containers. I still run a few desktops, accessible via x2go, with it on my Kubernetes instances.

https://github.com/micw/docker-lxc

[+] malkia|4 years ago|reply
I know very little about both, but I'm at the mercy everyday with lxc on my chromebook when running crostini (it's like a VM in a VM in a VM in a...) :) - works great though, at some perf cost, and less GPU support.

And still having troubles running most of the docker images out there (either this, or that won't be supported). I guess it makes sense, after all there is always the choice of going with full real linux reinstall, or some other hacky ways.

But one thing I was not aware of was this: "Docker containers are made to run a single process per container."

[+] sarusso|4 years ago|reply
Interesting read, not sure why you compared only these two though.

There are plenty of other solutions, and Docker is actually many things. You can use Docker to run containers using Kata, for example, which is a runtime providing full HW virtualisation.

I wrote something similar, yet much less in detail on Docker and LXC and more as a bird-eye overview to clarify terminology, here: https://sarusso.github.io/blog_container_engines_runtimes_or...

[+] kristianpaul|4 years ago|reply
In the end the two are different, so why compare them in the first place?

"LXC is a serious contender to virtual machines. So, if you are developing a Linux application or working with servers, and need a real Linux environment, LXC should be your go-to.

Docker is a complete solution to distribute applications and is particularly loved by developers. Docker solved the local developer configuration tantrum and became a key component in the CI/CD pipeline because it provides isolation between the workload and a reproducible environment."

[+] ruhrharry|4 years ago|reply
LXC is quite different from Docker. Docker is used most of the time as a containerized package format for servers, and as such is comparable to Snap or Flatpak on the desktop. You don't have to know Linux administration to use Docker; that is why it is so successful.

LXC on the other hand is lightweight virtualization and one would have a hard time to use it without basic knowledge of administering Linux.

[+] theteapot|4 years ago|reply
> Saying that LXC shares the kernel of its host does not convey the whole picture. In fact, LXC containers are using Linux kernel features to create isolated processes and file systems.

So what is Docker doing then??

[+] p0d|4 years ago|reply
I've been running my saas on lxc for years. I love that the container is a folder to be copied. Combined with git to push changes to my app all is golden.

I tried docker but stuck with lxc.