top | item 10269200

Don't expose the docker socket, even to a container

133 points | lvh | 10 years ago | lvh.io

101 comments

[+] notjack|10 years ago|reply
This assumes that Docker containers are being used like VMs. They're not designed to allow running isolated arbitrary code in some sort of multi-tenancy setup, they're designed to isolate dependencies and configuration between your own deployed services. This is a "security vulnerability" in the same way that putting up a white picket fence around a jail is - a gross mis-application of a tool built for a completely different purpose.
[+] zurn|10 years ago|reply
Listening to what the Docker people write about security, they sure do sound like they are designed to be secure, even though they admit to a couple of potential shortcomings versus VMs.

https://blog.docker.com/2013/08/containers-docker-how-secure... concludes,

"Docker containers are, by default, quite secure; especially if you take care of running your processes inside the containers as non-privileged users (i.e. non root)."

They also recommend using SELinux with Docker to beef up security (see https://blog.docker.com/2014/07/new-dockercon-video-docker-s...)

As long as people are using containers as a security boundary it makes sense to pay attention to things like this one about the control socket.
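
The non-root recommendation quoted from that blog post can be sketched in a Dockerfile (base image, user name, and service path here are illustrative, not from the post):

```dockerfile
# Hypothetical image: drop root before the service starts, per the
# Docker blog's "run your processes as non-privileged users" advice.
FROM debian:stable
RUN useradd --create-home --shell /usr/sbin/nologin app
COPY run-service /home/app/run-service
USER app
CMD ["/home/app/run-service"]
```

This limits what an in-container compromise gets you, but as the article notes, it does nothing if the container can also reach the docker socket.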

[+] insaneirish|10 years ago|reply
> They're not designed to allow running isolated arbitrary code in some sort of multi-tenancy setup

You mean Linux containers are not designed for this, nor are they designed to be secure. What a sad design failure.

Some [1] container technology was developed with security as a first principle.

[1]: http://us-east.manta.joyent.com/jmc/public/opensolaris/ARChi...

[+] lvh|10 years ago|reply
Absolutely; as I mention in the article. There's nothing new or exciting here; there's simply a big discrepancy between reality and how a lot of users understand it. This is partially true for the my-dev-user-is-part-of-the-docker-group case, but even more so for the container-with-docker-sock-access case.
[+] nogox|10 years ago|reply
Check www.hyper.sh and https://github.com/hyperhq/runv

They can boot a new VM from Docker images in 200ms, which is very close to LXC, and it is fully isolated by the hypervisor.

The problem of "Virtual Machine" is not "Virtual"/Virtualization, the problem is the full blown guest OS, aka "Machine".

[+] nailer|10 years ago|reply
Running Docker inside VMs loses all the I/O benefits. Not that I use Docker at all: I don't, for this very reason. Someday it will be solved and Docker will actually be as production-ready as its proponents think it is.
[+] benwilber0|10 years ago|reply
Docker is not a sandbox and was never intended to be. A comprehensive SELinux profile for untrusted Docker containers could be developed, but I've yet to see one. If you want to run untrusted Docker containers, such a profile is what you'd need.
[+] 0x400614|10 years ago|reply
I don't understand the point of Docker. It seems like a great product, but for any serious production-grade containerization, I'd use a real virtualization solution like KVM or VMware.
[+] geofft|10 years ago|reply
It's containerization between trustworthy apps; it's not security containerization. What it gets you is, if you have one application that's designed to run well on RHEL 5 with /usr/bin/python pointing to Python 2.4, and another one that's designed to run well on Debian testing with a manual /usr/bin/python symlink to Python 3, you can give both of them what they want. This has nothing to do with security.
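
That scenario can be sketched as two independent image definitions (base images and paths are illustrative, with CentOS 5 standing in for RHEL 5):

```dockerfile
# legacy-app/Dockerfile: ships the old userland the app expects
FROM centos:5
COPY app.py /opt/app.py
CMD ["/usr/bin/python", "/opt/app.py"]   # Python 2.4-era interpreter
```

```dockerfile
# new-app/Dockerfile: modern userland with /usr/bin/python pointed at Python 3
FROM debian:testing
RUN ln -sf /usr/bin/python3 /usr/bin/python
COPY app.py /opt/app.py
CMD ["/usr/bin/python", "/opt/app.py"]
```

Both run on the same host kernel, and neither Python install ever sees the other.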

If you want Docker + security isolation, I'm intrigued by Clear Containers, which is a lightweight KVM-based virtualization thing:

https://lists.clearlinux.org/pipermail/dev/2015-September/00...

https://lwn.net/Articles/644675/

[+] Pyxl101|10 years ago|reply
Containerization and virtualization serve different purposes. VMs run actual operating systems within them. A single operating system runs many different containers, each of which acts something like a process running on that same OS, while being heavily sandboxed and segmented from the others.

If your goal is strong isolation, then VMs are definitely better today. The purpose of Docker and similar container technologies is not that kind of isolation. It's to package up and distribute applications in a way that's more decoupled than simply installing them all on the same system.

[+] jkyle|10 years ago|reply
KVM and VMware are not containerization; they're full virtualization.

There are a lot of benefits to containers and they don't have to be insecure. More efficient resource utilization and orders of magnitude faster allocation and launching to name two.

Google runs a significant portion of its internal operations in a container infrastructure and has for quite a while.[1]

They're perfectly capable of deployment into production environments.

I won't comment on docker as I haven't spent the time to fully grok all its warts.

1. http://research.google.com/pubs/pub43438.html

[+] AgentME|10 years ago|reply
Think of it like a tool for packaging and deploying applications with everything they need. Its purpose is closer to package managers than being a secure sandbox for running untrusted users' VMs.
[+] tbronchain|10 years ago|reply
We[1] believe that the point of Docker is to provide "application packages" (called containers), which is a big step ahead to deliver applications (using their words: build, ship, run).

However, we also believe the isolation containers provide isn't sufficient for multi-tenant usage. This is the main motivation behind Hyper, which runs groups of container images (pods) as virtual machines.

[1] https://hyper.sh

[+] stephengillie|10 years ago|reply
Any virtualization solution is going to require you to manage an operating system. One of the goals of containerization is for developers to only work with the application.
[+] vacri|10 years ago|reply
If your VMs are single-purpose, you don't necessarily need VMs. Containers are single-process things - they're not running syslog or cron or any of that overhead, for example. Docker is also big on ensuring you are using the literal same artifact in dev as on prod (assuming you change your team's workflow, of course).

Which is the right thing to use is entirely dependent on your use case.

[+] raspasov|10 years ago|reply
Can someone describe a real-attack scenario, using Docker's default settings as of the latest version? I see a lot of people claiming that it's insecure but no concrete examples of exploits. I am not saying that it's as secure as VMs, or that it's inherently secure - I am genuinely* interested and really do care about this.

*We're in a limited beta of cloudmonkey.io, and we want to run unrelated/untrusted containers side by side securely.

[+] brianshaler|10 years ago|reply
I assumed that's what the video in the article was about [0], but I haven't watched it to be sure. Is there anything in particular that isn't clear from the article? Or are you asking about docker security concerns beyond what this article is about?

It seems the main point is that if there is any way to exploit code running within a container that has unfettered root access to the host system via the docker socket, an attacker would then have complete control over the host system.

Exploitation is often mitigated in layers, where if Service A is exploited, an attacker can only rwx what and where Service A has been granted privileges to rwx. That should be as little as possible: the bare minimum access the service needs to operate. There's no reason your web server or database should be able to install new programs, create users, etc.

If Service B is running in a container and is given access to write to the docker socket, suddenly any exploitation of that service opens a door to immediately have full and unfettered root access to the host system.

> [0] FTA "... ended up making a screencast to unambiguously demonstrate the flaw in their setup..."
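
To make that concrete, here is a rough Python sketch of the request body an attacker with socket write access could POST to the Docker Engine API's container-creation endpoint to bind-mount the host's root filesystem. The field names follow the Engine API, but treat the details as illustrative rather than a tested exploit:

```python
import json

def breakout_container_spec(image="alpine"):
    """Container spec that mounts the host's / at /host inside the
    container and chroots into it -- i.e. full root on the host."""
    return {
        "Image": image,
        "Cmd": ["chroot", "/host", "sh"],
        "HostConfig": {"Binds": ["/:/host"]},  # host rootfs, read-write
    }

# Anything that can write to /var/run/docker.sock can send this as the
# body of a "create container" request and then start the container.
print(json.dumps(breakout_container_spec()))
```

This is why handing the socket to Service B collapses all the other layers: the daemon on the far side of the socket runs as root and will happily do this on request.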

[+] tobbyb|10 years ago|reply
A container is just a process or chroot running in its own namespace. If you run it as-is you have a single-process container like Docker uses; if you run an init in the namespaced process you have LXC or OS containers that can support multiple processes, like a lightweight VM. [1]

With LXC containers you start them as root and there is no lingering background LXC process. Docker also starts containers as root but keeps dockerd hanging around, presumably so non-root users can interface with it. But the container process is still running as root, so dockerd seems a bit redundant and unnecessary.

This is because until recently you couldn't run chroot as a non-root user and needed to run containers as root. But 'user namespaces' (kernel > 3.8) change this and allow users to run processes in namespaces as a non-root user. LXC has supported unprivileged containers for some time now [2], so you can run LXC containers as non-root users, as in the entire container process is unprivileged. Docker and Rkt are working on this, but it's not simple for container managers to implement, as non-privileged users cannot access networking and mounts. But when they do, presumably dockerd can run as an unprivileged process.

But Linux kernel namespaces have not been designed for multi-tenancy; for instance, cgroups are not namespace-aware, and until this changes in the kernel, containers will not provide the level of isolation or security required for multi-tenant workloads.

And container managers like LXC or Docker that take these capabilities and merge them with networking and layered filesystems like aufs or overlayfs cannot work around this. Parallels' OpenVZ is designed for multi-tenancy, but its kernel patch appears to be too large and invasive and doesn't look like it will be merged.

So user namespaces are one level of security and isolation; you can also use seccomp, AppArmor, SELinux or even grsec. But you have to find the middle ground between security and usability, and given the relative confusion about containers, namespaces, and container managers it will take time to mature.

[1] https://www.flockport.com/how-linux-containers-work/

[2] https://www.flockport.com/lxc-using-unprivileged-containers/
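
As a quick check of the namespace support discussed above: on Linux, /proc exposes a per-process handle for each namespace type the kernel supports (the 'user' entry appearing with kernel 3.8+ user namespaces). A minimal sketch, assuming a Linux /proc is mounted:

```python
import os

def supported_namespaces():
    """List the namespace types the kernel exposes for the current
    process (e.g. 'user', 'pid', 'net') by reading /proc/self/ns."""
    try:
        return sorted(os.listdir("/proc/self/ns"))
    except FileNotFoundError:  # not Linux, or /proc not mounted
        return []

print(supported_namespaces())
```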

[+] skarap|10 years ago|reply
To be honest, exposing the host's /var/run/libvirtd.sock to a guest VM would have exactly the same consequences.
[+] KirinDave|10 years ago|reply
Do... people really offer docker sockets to running containers without thoroughly vetting them first? Are people really that good at ignoring the warnings?

I mean, I know it's popular to pass the socket in for automatically re-configuring proxies... but I haven't seen any serious use outside of that.
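
That proxy pattern usually looks something like this Compose fragment (service and image names are hypothetical):

```yaml
services:
  proxy:
    image: example/auto-proxy   # hypothetical auto-configuring reverse proxy
    volumes:
      # the :ro flag only restricts filesystem operations; the proxy can
      # still connect and issue any API call, so this remains effectively
      # root on the host if the proxy is compromised
      - /var/run/docker.sock:/tmp/docker.sock:ro
```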

[+] jayfk|10 years ago|reply
In other news, mounting / gives you access to the root filesystem.
[+] alpb|10 years ago|reply
dockerd -- nope. It's `docker -d`
[+] nailer|10 years ago|reply
'daemonisedthingd' is standard UNIX terminology.
[+] codemac|10 years ago|reply
Docker is a rootkit over HTTP.
[+] voltagex_|10 years ago|reply
>A rootkit is a collection of computer software, typically malicious, designed to enable access to a computer or areas of its software that would not otherwise be allowed (for example, to an unauthorized user) while at the same time masking its existence or the existence of other software.

Doesn't really describe Docker, does it?

[+] hosay123|10 years ago|reply
I don't really get this. The implication is that the container becomes more secure without access to the socket, yet it has access to hundreds of local kernel APIs, with which in an average month it can easily gain higher privileges than root, especially on contemporary machines where half the admins these days don't even know what a security update looks like.
[+] KirinDave|10 years ago|reply
Why knowingly give a trivial breakout vector to code you don't trust?