Just to clarify on the OS X support: obviously we did not magically get Darwin to support Linux containers. But we put together the easiest possible way to run Linux containers on a Mac without depending on another machine.
We do this by combining (1) docker in "client mode", which connects to (2) a super-lightweight Linux VM managed by boot2docker.
It is especially important to set DOCKER_HOST=tcp:/// before you run "boot2docker init" -- I forgot to do this initially and things failed mysteriously. I had to "boot2docker delete" and re-init to get things running.
Once I got that ironed out, everything is running very smoothly, and I don't have to ssh into the VM to do things. Nicely done.
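For anyone else who hits the same mysterious failure, the recovery sequence looks roughly like this. This is a sketch, not official instructions: the exact DOCKER_HOST value comes from the boot2docker README, and the port shown here is an assumption.

```shell
# Recovery sketch; flags and the DOCKER_HOST value may differ by boot2docker version.
export DOCKER_HOST=tcp://localhost:4243   # assumed port; check the boot2docker README

boot2docker delete    # throw away the half-initialized VM
boot2docker init      # re-create it, this time with DOCKER_HOST already set
boot2docker up        # boot the VM

docker version        # should now print both client and server versions
```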
My wish for 0.9 is a more streamlined installation process, possibly by simply incorporating these steps into a Homebrew formula.
I've followed and refollowed those steps on OS X 10.9.1, but this is what happens:
» docker version
Client version: 0.8.0
Go version (client): go1.2
Git commit (client): cc3a8c8
2014/02/05 23:10:55 unexpected EOF
Yet the docker server is definitely up:
docker@boot2docker:~$ docker version
Client version: 0.8.0
Go version (client): go1.2
Git commit (client): cc3a8c8
Server version: 0.8.0
Git commit (server): cc3a8c8
Go version (server): go1.2
Last stable version: 0.8.0
Tried both `export DOCKER_HOST=tcp://` and `export DOCKER_HOST=localhost` (as per boot2docker README), before re-init.
Was the OS X binary built without cgo? I can't seem to access containers in private https registries:
$ docker login https://registry.example.com
2014/02/05 14:36:20 Invalid Registry endpoint: Get https://registry.example.com/v1/_ping: x509: failed to load system roots and no roots provided
The hostname in question has a valid SSL certificate. I encountered a similar problem in the past with Go built from homebrew[1][2]. Has anyone else seen this?
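One way to confirm it's the client's missing root store rather than the server's certificate is to verify the chain outside of docker entirely (`registry.example.com` stands in for the real host here):

```shell
# If this verifies cleanly but `docker login` still fails with "failed to load
# system roots", the docker binary (not the registry's certificate) is at fault:
echo | openssl s_client -connect registry.example.com:443 -showcerts 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```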
Glad to hear OS X has official support. I jumped into Docker for the first time last week and have a burning unresolved question for those using boot2docker.
What is your development workflow? I am working on a Rails app, so my instinct is to have a shared folder between OS X and boot2docker, but afaik this is not supported as boot2docker doesn't support VirtualBox guest extensions.
It turns out that shared folders are not a sustainable solution (independently of whether boot2docker supports them), so the best practices are converging towards this:
1) While developing, your dev environment (including the source code and method for fetching it) should live in a container. This container could be as simple as a shell box with git and ssh installed, where you keep a terminal open and run your unit tests etc.
2) To access your source code on your host machine (eg. for editing on your mac), export it from your container over a network filesystem: samba, nfs or 9p are popular examples. Then mount that from your mac. Samba can be natively mounted with "command-K". NFS and 9p require macfuse.
3) When building the final container for integration tests, staging and production, go through the full Dockerfile + 'docker build' process. 'docker build' on your mac will transparently upload the source over the docker remote API as needed.
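The three steps above can be sketched in docker commands like this. Everything here is illustrative: the image names, ports, and samba setup are invented, and flag syntax varies across docker versions.

```shell
# 1) Dev environment lives in a container: a shell box with git, sshd and samba
#    that exports the source tree over the network.
docker run -d -p 445:445 -p 22:22 devbox-image

# 2) On the Mac, mount the exported source over the network filesystem:
#    Finder > Go > Connect to Server (Cmd-K) > smb://<vm-ip>/src

# 3) The final image goes through the normal build flow; the client uploads
#    the build context over the remote API automatically.
docker build -t myapp .
docker run -d myapp
```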
There are several advantages to exporting the source from the container to the host, instead of the other way around:
- It's less infrastructure-specific. If you move from virtualbox to vmware, or get a Linux laptop and run docker straight on the metal, your storage/shared folders configuration doesn't change: all you need is a network connection to the container.
- Network filesystems are more reliable than shared folders + bind-mount. For example they can handle different permissions and ownership on both ends - a very common problem with shared folders is "oops the container creates files as root but I don't have root on my mac", or "apache complains that the permissions are all wrong because virtualbox shared folders threw up on me".
That said, we need to take that design insight and turn it into a polished user experience - hopefully in Docker 0.9 this will all be much more seamless!
Is Docker a good way to bring more security to a server with a few different websites, separating the sites from each other and running nginx as a proxy in front of them?
Unfortunately Docker prevents hosting environments from employing some of the most potent security mitigations added to Linux recently.
You cannot treat a docker container like a virtual machine – code running in the container has almost unfettered access to the parent kernel, and the millions of lines of often-buggy C that involves. For example with the right kernel configuration, this approach leaves the parent machine vulnerable to the recent x86_32 vulnerability (http://seclists.org/oss-sec/2014/q1/187) and many similar bugs in its class.
The algorithms in the running kernel are far more exposed too - instead of managing a single process+virtual network+memory area, all the child's resources are represented concretely in the host kernel, including its filesystem. For example, this vastly increases the likelihood that a child could trigger an unpatched DoS in the host, e.g. the directory hashing attacks that have affected nearly every filesystem implementation at some point (including btrfs as recently as 2012).
The containers code in Linux is also so new that trivial security bugs are being found in it all the time – particularly in sysfs and procfs. I don't have a link right now, though LWN wrote about one a few weeks back.
While virtual machines are no security panacea, they diverge in what classes of bugs they can be affected by. Recent Qemu/libvirt supports running under seccomp, ensuring that even if the VM emulator is compromised, the host kernel's exposure remains drastically limited. Unlike qemu, you simply can't apply seccomp to a container without massively reducing its usefulness, or using a seccomp policy so liberal that it becomes impotent.
You could use seccomp with Docker by nesting it within a VM, but at that point Docker loses most of its value (and could be trivially replaced by a shell script with a cute UI).
Finally when a bug is discovered and needs to be patched, or a machine needs to be taken out of service, there is presently no easy way to live-migrate a container to another machine. The most recent attempt (out of I think 3 or 4 now) to add this ability to Linux appears to have stalled completely.
As a neat system for managing dev environments locally, it sounds great. As a boundary between mutually untrusted pieces of code, there are far better solutions, especially when the material difference in approaches amounts to a few seconds of your life at best, and somewhere south of 100mb in RAM.
It depends. If the goal is to consolidate several boxes to a single VM, it does this. Be sure host-based (on the linux box) firewall rules are set and documented. If possible, set network-based firewall rules also (AWS security groups).
So what's the solution for 'root inside a docker container is root on the host'?
We'd like to ship a set of utilities as a docker container, but unless the sysadmin gives everyone 'sudo' privileges on the server (unlikely and insecure), they can't run the container and its utilities.
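For context, the reason "just put everyone in a docker group" isn't a real fix: any access to the daemon is root-equivalent. A minimal demonstration, assuming the conventional `docker` group owning the daemon's unix socket (group name and image are illustrative):

```shell
# Let 'alice' talk to the daemon without sudo
# (works when the docker unix socket is group-owned by 'docker'):
sudo groupadd docker
sudo usermod -aG docker alice

# ...but that grant is root-equivalent, which is exactly the problem:
# bind-mount the host's / into a container and chroot into it.
docker run -v /:/host -t -i busybox chroot /host /bin/sh
```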
Future versions of the Docker API will natively support scoping. This means that each API client will see a different subset of the Docker engine depending on the credentials and origin of the connection. This will be implemented in combination with introspection, which allows any container to open a connection to the Docker engine which started it.
When you combine scoping and introspection, you get really cool scenarios. For example, let's say your utility is called "dockermeister". Each individual user could deploy his own copy of dockermeister, in a separate container. Each dockermeister container would in turn connect to Docker (via introspection), destroy all existing containers, and create 10 fresh redis containers (for reasons unknown). Because each dockermeister container is scoped, it can only remove containers that are its children (ie that were created from the same container at an earlier time). So they cannot affect each other. Likewise, the 10 new redis containers will only be visible to that particular user, and not pollute the namespace of the other users.
Of course scoping works at arbitrary depth levels... so you could have containers starting containers starting containers. Containers all the way down :)
Awesome that OSX support is now official, but is there any benefit to using this process as opposed to using docker-osx? https://github.com/noplay/docker-osx
The official installation process seems more complicated, and I don't really see an advantage.
I'm curious about how the focus on multiple, ABI-incompatible platforms will affect the pace and momentum of Docker development. So far, Docker has benefitted a lot from the focus on amd64 userland on Linux.
Personally, when I read "OSX support", I thought that meant that there would now be containers with Darwin-ABI binaries inside them. So on Linux, you'd use cgroups for Linux-ABI binaries and a VM for Darwin-ABI, just as on OSX you use a VM for Linux-ABI (and presumably would use the OSX sandbox API for Darwin-ABI containers.)
This "native sandboxing for own-ABI if available, VM if not, and VM for everything else" approach would extend to any other platform as well, I'd think (Windows, for example.) I'm surprised that this isn't where Docker is going, at least for development and testing of containers.
(Though another alternative, probably more performant for production, would be something like having versions of CoreOS for each platform--CoreOS/Linux, CoreOS/Darwin, CoreOS/NT, and so on--so you'd have a cluster of machines with various ABIs, where any container you want to run gets shipped off to a machine in the cluster with the right ABI for it.)
IMO, it would be fantastic if there was something like Docker for Windows. Imagine being able to bundle up games in individual containers and easily being able to move them from machine to machine as you upgrade. Same thing applies for other Windows apps.
I just tried the Docker interactive tutorial. It was fun, but I still don't get the point of using Docker. I've just been hearing a lot about it, and it's getting too much buzz.
I am interested in the BTRFS support in particular; it is clear that filesystem performance is key. However, what I like the most about Docker is the ability to use layers and diff them. In effect, I want version control for images, because it allows me to skip running extra provisioning tools for the images (and just rely on simple Zookeeper stuff for app config). Whatever gives me 'vcs' for images in the most performant way wins in my book.
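The "version control for images" idea is already visible in the CLI. These are real docker subcommands, with placeholder container and image names:

```shell
docker history myimage                 # the layer stack: each line is one committed change
docker diff <container-id>             # files added/changed/deleted vs. the image it ran from
docker commit <container-id> myimage:snapshot   # capture that diff as a new layer
```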
Not exactly. The article says the first number is for major lifecycle events, ie 1.0 means "production ready". They'll be releasing monthly and the second number will be the release increment. The third number will be for patches and fixes.
So to me that doesn't fit in with the Semantic Versioning contract. I think the product is too young yet to use a version scheme that assumes relative API stability.
It's confusing why btrfs support was prioritized ahead of ZFS, considering ZFS's superior architecture and ops capabilities. Is Docker (formerly dotCloud) going to start withholding capabilities as licensed features?
We've tried to be plainly open that going the 'open core' route is in no one's best interest.
Swappable storage engines will become easier to create over time, not harder. There's also a ZFS branch, but the reality is that people spent time and resources on getting BTRFS in (which has been experimental for >6 months) instead of ZFS.
Docker development works a lot like Linux development (just on a much, much smaller scale.) If there's an area where you're comfortable committing, the barrier to entry is minimal. All you need is 2 maintainers to agree to your addition and it's merged in. So get on it!
It's common and easy to mount your host FS into the container, putting mutating data where you can take full advantage of the superior architecture and ops capabilities of whichever FS you prefer.
The images' internal AUFS/BTRFS layers are then only for keeping your binaries-at-rest and static configuration straight. They may as well be in highly indexed ZIP files, for all you care.
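Concretely, the mutable data stays on the host filesystem of your choice via a bind mount; the paths and image name here are examples:

```shell
# /data lives on whatever filesystem you prefer (ZFS, XFS, ...);
# the image's AUFS/BTRFS layers only hold binaries and static config.
docker run -v /data/pgdata:/var/lib/postgresql/data -d postgres-image
```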
It's not necessary, and we didn't go out of our way to get it. It just happened "for free" as a result of writing portable code, a clean client-server architecture, and the appearance of the boot2docker project in the community.
The details are on http://docs.docker.io/en/latest/installation/mac/
[1] https://github.com/Homebrew/homebrew/pull/17758 [2] https://code.google.com/p/go/issues/detail?id=4791
Update: Filed a bug against docker, others are having the same issue. https://github.com/dotcloud/docker/issues/3946
What's the overhead?
There’s near zero overhead, because there’s no virtualization.
Also, it's not really Docker doing that; it's LXC. Docker is an API around it.
A disclaimer: yes, Docker has some notion of portable container plugins, but it uses LXC at the moment, and that feature is in LXC upstream.
Edit: preliminary ZFS driver work is here: https://github.com/gurjeet/docker/tree/zfs_driver
If anyone else is using Boxen, I packaged up a quick Puppet module to get up and running with Docker on OS X: https://github.com/morgante/puppet-docker