The last time I used xhyve, it kernel-panicked my Mac. Researching this in the xhyve GitHub issues [1] showed that it was traced to a conflict with VirtualBox: if you've started a VirtualBox virtual machine since your last reboot, subsequent starts of xhyve panic.
So, buyer beware, especially if said buyer also uses tools like Vagrant.
I've said before that I think the Docker devs have been iterating too fast, favoring features over stability. This development doesn't ease my mind on that point.
EDIT: I'd appreciate feedback on downvotes. Has the issue been addressed, but not reflected in the tickets? Has Docker made changes to xhyve to address the kernel panics?
Thanks, this is useful feedback. There are various workarounds in the app to prevent such things, but the purpose of the beta program is to ensure that we catch all the weird permutations that happen when using hardware virt (e.g. the Android emulator).
If anyone sees any host panics ever, we'd like to know about it ([email protected]) and fix it in Docker for Mac and Windows. Fixes range from hypervisor patches to simply doing launch-time detection of CPU state and refusing to run if a dangerous system condition exists.
Yes, I think this issue has been addressed for a while; it was solved in a VirtualBox release. I'm sure 5.0+ doesn't have the conflict with xhyve.
If I had a yearly quota on HN for upvotes, I'd use all of them on this.
> Volume mounting for your code and data: volume data access works correctly, including file change notifications (on Mac inotify now works seamlessly inside containers for volume mounted directories). This enables edit/test cycles for “in container” development.
This (filesystem notifications) was one of the major drawbacks of using Docker on Mac for development, and a long-time prayer to the development gods before sleep. I managed to get it working with Dinghy (https://github.com/codekitchen/dinghy), but it still felt like a hack.
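For anyone who wants to check notification behavior on their own setup, a minimal smoke test might look like the following; the image choice, container name, and paths are illustrative, and it assumes inotify-tools can be installed inside the container:

```shell
# Watch a volume-mounted directory from inside a container, then touch a
# file on the host. With working notifications, an event shows up in the logs.
docker run -d --name watch-test -v "$PWD":/src ubuntu:14.04 bash -c \
    'apt-get update -qq && apt-get install -qq -y inotify-tools &&
     inotifywait -m -e modify,create /src'

echo change >> testfile        # edit a file on the host side of the mount
docker logs watch-test         # look for a CREATE/MODIFY event on /src/testfile
docker rm -f watch-test
```

With the old docker-machine/VirtualBox setup, the second step produced no event, which is exactly the gap tools like Dinghy tried to paper over.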
Can someone explain in simple terms how Docker for Windows is different from Application Virtualization products like VMware ThinApp, Microsoft App-V, Spoon, Cameyo, etc.? Also, why does it require Hyper-V to be activated in Windows 10? I found this: https://docs.docker.com/machine/overview/ but I don't understand whether you need separate VMs for separate configurations, or whether there's a containerization technology that lets you run isolated applications on the same computer.
Thanks for exposing me to ThinApp and the rest. I took a quick look; these are Windows-based technologies designed to run Windows apps, but conceptually I don't see much difference.
Docker is a containerization standard that relies on various Linux kernel capabilities to isolate application runtimes (or containers, if you will). On Mac and Windows this used to be achieved by running a small Linux VM in VirtualBox, but it looks like this release has brought in xhyve, which is supposed to have an even smaller footprint.
Docker uses Linux containers (originally built on LXC). In Linux, these aren't VMs; they're lightweight user-land separations that use kernel features like cgroups and namespaces for isolation and security.
Unfortunately, this means Docker only runs on Linux, and not just any Linux: it needs a kernel with a particular set of features enabled (everything it needs is in the stock kernel tree, but it's still a long list). On Windows/Mac, you still need to run it in a virtual machine.
Even with this update, you still need to run in a virtual machine. It's not actually running Docker natively; it can't, even on Mac, which has a (not really) *NIX-ish base. You then have to use the docker0 network interface to connect to all your Docker containers.
On Linux, you can just go to localhost. I _think_ FreeBSD has native Docker support with some custom kernel modules, but I'm not sure; I've only looked at the README and haven't tried it.
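To make the localhost difference concrete, here is a hedged sketch of reaching the same published port both ways (nginx is just a stand-in workload; "default" is the conventional docker-machine VM name):

```shell
# Publish container port 80 on host port 8080.
docker run -d --name web -p 8080:80 nginx

# On Linux, the published port is on the host itself:
curl http://localhost:8080/

# With the older docker-machine/VirtualBox setup, you had to hit the VM's IP instead:
curl "http://$(docker-machine ip default):8080/"

docker rm -f web
```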
So even on Windows/Mac, all your containers do run in one VM (whereas with the traditional products you mentioned, you'd need a VM for each thing). Docker containers are meant to handle one application (which runs as root within its container as the init process ... cause wtf?). With VMs, you'd typically want some type of configuration management (Puppet, Ansible, Chef, etc.) that sets up apps on each VM/server. With Docker, each app should be its own container, and you link the containers together using things like Docker Compose, or by running them on CoreOS or Mesos.
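As a sketch of that one-app-per-container linking, using a user-defined network (available since Docker 1.9) rather than the legacy --link flag; the myorg/myapp image and the environment variable are made up for illustration:

```shell
# Two single-purpose containers joined by a named bridge network, so the
# app can reach the database by container name instead of a hard-coded IP.
docker network create myapp

docker run -d --name db  --net myapp postgres:9.4
docker run -d --name app --net myapp -p 8080:8080 \
    -e DATABASE_URL=postgres://postgres@db:5432/postgres \
    myorg/myapp   # placeholder image for your application
```

Docker Compose expresses the same topology declaratively in a single file instead of a series of run commands.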
In my work with Docker, I'm not sure how I feel. LXC containers have had a lot of security issues. Right now, Docker doesn't have any glaring security holes, and LXC has tightened security quite a bit. CoreOS is pretty neat, and I wouldn't use Docker in production without it or another container manager (the docker command by itself still cannot prune unused images, so after a while you accumulate a pile of images that just waste space; CoreOS prunes these at regular intervals. A docker command to do this is still a GitHub issue, and writing one yourself with docker-py is horribly difficult because of image dependencies).
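For what it's worth, the usual stopgap until a built-in prune command lands is a pair of one-liners; they assume you're fine deleting every exited container and every dangling (untagged) image:

```shell
# Remove stopped containers first, so their image layers become unreferenced:
docker rm $(docker ps -aq -f status=exited)

# Then remove dangling image layers no longer tagged by anything:
docker rmi $(docker images -q -f dangling=true)
```

This doesn't touch tagged-but-unused images, which is where the image-dependency difficulty mentioned above comes in.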
Oh, and images. Docker uses images to build things up like building blocks. That's a whole thing I don't want to go into, but look it up; it's actually kind of interesting, and it allows base image updates to fix security issues (although you still need to rebuild your images on top of the new base and restart your containers to pick up the fix).
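A tiny sketch of that rebuild cycle, with made-up image and file names:

```shell
# A child image bakes in its base at build time, so a patched base only
# helps after you pull it and rebuild on top of it.
cat > Dockerfile <<'EOF'
FROM debian:jessie
RUN apt-get update && apt-get install -y openssl
COPY app /usr/local/bin/app
CMD ["/usr/local/bin/app"]
EOF

docker pull debian:jessie        # fetch the patched base image
docker build -t myorg/app .      # rebuild the child against it
```

Running containers started from the old image keep the old layers until they are recreated.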
Docker is ... interesting. I find it lazy in some ways. I think it's better to build packages (rpms, debs); FPM makes this really easy now. Combine packages with a configuration management solution (haha, yeah, they all suck. Puppet, Ansible, CFEngine ... they're different levels of horrible; Ansible has pissed me off the least so far) and you can have a pretty solid deployment system. Then again, in this sense Docker does kinda make more sense than handling packages: you throw your containers on CoreOS/Mesos, use Consul for environment variables, and you can have a pretty smooth system.
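For reference, an FPM invocation really is about one line; the package name, version, dependency, and paths here are invented for illustration:

```shell
# Turn a build directory into a .deb in one command:
#   -s dir      take a directory as the source
#   -t deb      emit a Debian package
#   -C          chdir into ./build before packaging
#   --prefix    install the contents under /opt/myapp
fpm -s dir -t deb -n myapp -v 1.0.0 \
    --depends libssl1.0.0 \
    -C ./build --prefix /opt/myapp .
```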
I dunno. I'm trying to actually like Docker. I've only made fun of it in the past, but now I work for a shop that uses it in production. O_o
This is an amazing announcement, but... the beta requires an NDA, and the source code is not available. This gives the impression that this will be a closed commercial product, and that really takes the wind out of my sails.
From the blog post: "Many of the OS-level integration innovations will be open sourced to the Docker community when these products are made generally available later this year."
We have been working with Hypervisor.framework for more than six months now, since it came out, to develop our native virtualization for OS X (http://www.veertu.com). As a result, we are able to distribute Veertu through the App Store. It's the engine for "fast" virtualization on OS X, and we see now that Docker is using it for containers. We wish Apple would speed up the process of adding new APIs to Hypervisor.framework to support things like bridged networking and USB, so everything can be done in a sandboxed fashion without having to develop kernel drivers. I am sure the Docker folks have built their kernel drivers on top of the xhyve framework.
If you're using docker on mac, you're probably not using it there for easy scaling (which was the reason docker was created back then), but for the "it just works" feeling when using your development environment. But docker introduces far too much incidental complexity compared to simply using a good package manager. A good package manager can deliver the same "it just works" feeling of docker while being far more lightweight.
I'm a Docker n00b, still don't know what it can do exactly. Can Docker replace Virtualbox? I guess only for Linux apps, and suppose it won't provide a GUI, won't run Windows to use Photoshop?!
Very excited about this. Docker Machine and VirtualBox can be a rough experience.
> Many of the OS-level integration innovations will be open sourced to the Docker community when these products are made generally available later this year.
I found docker-machine and VirtualBox quite stable (running multiple Flask, Python, and PostgreSQL containers). The only major issue I had came from a five-year-old VirtualBox bug with sendfile. That said, I won't miss the extra steps of running eval $(docker-machine env ...) and the like.
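For readers who haven't used docker-machine, these are the extra steps being referred to ("default" is the conventional machine name):

```shell
# Every new shell had to be pointed at the VM before the docker client worked.
docker-machine start default
eval "$(docker-machine env default)"   # exports DOCKER_HOST, DOCKER_CERT_PATH, etc.
docker ps                              # now talks to the daemon inside the VM
```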
I'm delighted to read that inotify will work with this. How's fs performance? Running elasticsearch or just about any compile process in a docker-machine-based container is fairly painful.
This is very cool, although for the Windows version it'd be great if it became possible to swap out the virtualization back-end so it's not tied to Hyper-V.
At the moment VMware Workstation users will be a bit left out, as Windows doesn't like having two hypervisors installed on the same system...
Does anybody have any guides on setting up dev environments for code within Docker? I recall a DockerCon talk last year from Lyft about spinning up microservices locally using Docker.
We're using Vagrant for development environments, and as the number of microservices grows, the feasibility of running the production stack locally decreases. I'd be interested in learning how to spin up five to ten Docker services locally on OS X for a service-oriented architecture.
Tried to sign up, but the enroll form at https://beta.docker.com/form is blank for me - it just says "Great! We just need a little more info:" but has no forms.
Hi folks, we had an unexpected issue while we were pushing an update to the site (removing the NDA requirement). It should be fixed now and you can sign up as usual. If you're using something like Ghostery, you may need to pause it for this site, as we're using Marketo to handle sign-ups.
#2 on Hacker News and no one can sign up. Bummer. I've tried three different browsers: Chrome, Firefox, and Safari. Same problem. Both Firefox and Safari are completely uncustomized.
I'm really excited to see this because I've spent the last few months experimenting with Docker to see if it's a viable alternative to Vagrant.
I work for a web agency and currently, our engineers use customized Vagrant boxes for each of the projects that they work on. But that workflow doesn't scale and it's difficult to maintain a base box and all of the per project derivatives. This is why Docker seems like a no-brainer for us.
However, it became very clear that we would have to implement our own tooling to make a similar environment. Things like resolving friendly domain names (project-foo.local or project-bar.local) and adding in a reverse proxy to have multiple projects use port 80.
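One common way to assemble exactly that tooling was the nginx-proxy image, which watches the Docker socket and routes requests by a VIRTUAL_HOST environment variable; the project names and hosts-file entry below are illustrative:

```shell
# Reverse proxy on port 80, auto-configured from running containers:
docker run -d -p 80:80 \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy

# Each project announces its friendly name via VIRTUAL_HOST:
docker run -d -e VIRTUAL_HOST=project-foo.local myorg/project-foo
docker run -d -e VIRTUAL_HOST=project-bar.local myorg/project-bar

# Point the friendly names at the Docker host (127.0.0.1 with Docker for Mac):
echo '127.0.0.1 project-foo.local project-bar.local' | sudo tee -a /etc/hosts
```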
Docker for Mac looks like it will solve at least the DNS issue.
Nathan LaFreniere (the author of dlite) is awesome, and we've been exchanging tips and tricks and areas where we can collaborate. He knew exactly where to press to find bugs in our earlier betas...
Yes, I've been using this for some time too. It's pretty great, totally recommended to anyone who is fed up with docker-machine or docker-compose or whatever random tool is currently required.
My goodness. This is some of the best news from Docker this year, and we're still just getting started. Packaging various hot-reloading JavaScript apps will finally be possible. Gosh, I can't begin to say how excited I am for this.
falcolas | 10 years ago
[1] https://github.com/mist64/xhyve/issues/5
avsm | 10 years ago
vog | 10 years ago
Is this type of comment discouraged on HN? If so, why?
matthewmacleod | 10 years ago
tzaman | 10 years ago
wslh | 10 years ago
tra3 | 10 years ago
HTH.
djsumdog | 10 years ago
:-P
hathym | 10 years ago
darren0 | 10 years ago
shykes | 10 years ago
We will open-source all the components individually, to make them easier to reuse elsewhere. That requires work to do properly.
Lastly, Docker for Mac and Docker for Windows will be free.
jdub | 10 years ago
otterley | 10 years ago
izik_e | 10 years ago
_query | 10 years ago
I wrote a blog post about this topic a few months ago; check it out if you're interested in a simpler way of building development environments: https://www.mpscholten.de/docker/2016/01/27/you-are-most-lik...
rogeryu | 10 years ago
rocky1138 | 10 years ago
I think they forgot about Linux :)
nzoschke | 10 years ago
Does this mean it is closed right now?
knz | 10 years ago
mwcampbell | 10 years ago
https://news.ycombinator.com/item?id=11352594
I imagine a lot of this work will also be useful for developers wanting to test all sorts of unikernels on their Mac and Windows machines.
amirmc | 10 years ago
totallymike | 10 years ago
f4stjack | 10 years ago
raesene4 | 10 years ago
philip1209 | 10 years ago
This product from Docker has strong potential.
Lambent_Cactus | 10 years ago
amirmc | 10 years ago
friism | 10 years ago
mchiang | 10 years ago
johnnylambada | 10 years ago
amirmc | 10 years ago
Would you mind trying again but allowing Marketo?
lewisl9029 | 10 years ago
aryehof | 10 years ago
unknown | 10 years ago
[deleted]
jaequery | 10 years ago
bgruber | 10 years ago
evacchi | 10 years ago
[1] https://github.com/mist64/xhyve/issues/84
justincormack | 10 years ago
mathewpeterson | 10 years ago
Can't wait to try it out.
edit: words
alexc05 | 10 years ago
If I were a 12-year-old girl I would be "squee-ing" right now. OK, I'm lying: I'm a 40-year-old man actively squee-ing over this.
:)
It really plays nicely into my "weekend-project" plans to write a fully containerized architecture based on dotnet-core.
_mikz | 10 years ago
avsm | 10 years ago
matthewmacleod | 10 years ago
nstart | 10 years ago