I'm also in the beta, and I like it a lot. It's functionally equivalent to `dlite`, which Nathan LaFreniere has done an extremely good job on. He deserves massive credit for making OSX Docker dev bearable and for providing the inspiration for "Docker for Mac".
A few issues I've seen:
1. I cannot believe they are using `docker.local`. This hostname will cause nothing but trouble for years to come. DON'T USE `.local`! Apple has decided that `.local` belongs to Bonjour, and due to a longstanding bug with their IPv6 integration, you can expect to see a 5-10s random delay in your applications as Bonjour searches your local network to try to resolve `docker.local`. Yeah, you put it in your `/etc/hosts`? Doesn't matter. Still screws up. Use `docker.dev` or `local.docker`. [http://superuser.com/questions/370559/10-second-delay-for-lo...]
2. -beta8 is screwed up. It won't bind to its local IP anymore. The only option is to port forward from localhost. Unfortunately, Docker isn't offering a download of beta7. Thankfully, I still had the DMG around.
3. The polish is still lacking. Most menu bar items ask you to open up something else.
4. Why "Docker for Mac"? Couldn't the team think of a less confusing name? Now I have "Docker" running "docker".
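On point 1, the workaround still needs a non-reserved name to work at all; a hypothetical `/etc/hosts` entry (the VM address is illustrative) looks like:

```
# /etc/hosts on the Mac -- avoid .local so Bonjour/mDNS never gets involved
192.168.64.2    docker.dev local.docker
```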
Otherwise - great projects, and again, much credit to @nlf for `dlite`. If you're not part of the beta, check out dlite (https://github.com/nlf/dlite). It's at least as good as Docker for Mac.
> I cannot believe they are using `docker.local`. This hostname will cause nothing but trouble for years to come.
We are indeed moving away from `docker.local` in Docker for Mac. There have actually been two networking modes in there since the early betas: the first one uses the OSX vmnet framework to give your container a bridged DHCP lease ('nat' mode), and the second one dynamically translates Linux container traffic into OSX socket calls ('hostnet' or VPN compatibility mode).
Do give hostnet mode a try by selecting "VPN compatibility" from the UI. This will bind containers to `localhost` on your Mac instead of `docker.local` and also let you publish your ports to the external network. One of our design goals has been to run Docker for Mac as sandboxed as possible, and so we cannot just modify /etc/resolv.conf to introduce new system domains such as ".dev".
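From the host's point of view, a port published in hostnet mode is just an ordinary TCP listener on loopback; a minimal Python sketch of that behaviour (nothing here is Docker-specific, the port is chosen by the OS):

```python
import socket

# A port published in "VPN compatibility" (hostnet) mode behaves like a
# plain TCP listener on the Mac's loopback; simulate one and talk to it.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))        # 0 = let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

cli = socket.create_connection(("127.0.0.1", port))
conn, _ = srv.accept()
cli.sendall(b"ping")

data = b""
while len(data) < 4:              # read exactly the 4 bytes we sent
    data += conn.recv(4 - len(data))
print(data.decode())              # -> ping

for s in (conn, cli, srv):
    s.close()
```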
We've been iterating on the networking modes in the early betas to get this right, so beta9 should hopefully strike a good balance with its defaults. It's also why we've been holding a private beta, so that we can make these kinds of changes without disrupting huge numbers of users' workflows. Your feedback as we figure it out is very much appreciated!
I do not work on this project, but perhaps this change was made so that "VPN compatibility mode" could be enabled by default.
Have you tried going to the settings menu and disabling it?
I just installed docker for mac after getting an invite in this thread, and this is the _only_ issue I'm facing. If I access the IP, it works extremely well. But if I use `docker.local`, then it takes about 5-6 seconds to connect, most of that in name resolution.
Same experience here, both on Mac and Windows. They've done a great job making it "just work". The user interface pieces are a bit raw -- perhaps "minimalist" or "unobtrusive" would put that in a better light! -- but clearly most of the work has gone into the lower level integration, where it shines.
Docker for Mac/Windows, once released, will nuke the ick factor on those platforms from orbit, which can only lead to even more adoption.
I hope there's going to be an easy way to package this with your own docker image in order to have a new way to distribute applications. My usecase is running a server locally so you can use a webapp with local network speed and offline access and lots of local storage.
I've noticed some pretty extreme performance penalties with Docker for Mac. Whereas VirtualBox would spin <60% CPU idling a bunch of services (MySQL, RabbitMQ, Redis, Elasticsearch, Memcached, several Python daemons), Docker for Mac's driver hovers around 100% (spiking often to 200/300%) with another 20-30% (spiking to 50-80%) on the osxfs.
I'm going to guess it'll get better in time. It would be nice to get some insight into just what is burning CPU cycles. The experience besides that was really top notch IMO.
The early betas focussed on feature completeness rather than performance for filesystem sharing. In particular, we have built a new "osxfs" that implements bidirectional translation between Linux and OSX filesystems, including inotify/FSEvents and uid/gid mapping between the host and the container. Getting the semantics right took a while, and all the recent betas have been steadily gaining in performance as we implement more optimisations in the data paths.
If you do spot any pathological "spinning" cases where a particular container operation appears to spike the CPU more than it should, we'd like to know about it so we can fix it. Reproducible Dockerfiles on the Hub are particularly appreciated so that we can add them to the regression tests.
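For anyone filing such a report, a minimal reproduction is just a Dockerfile plus the one command that spins; a hypothetical skeleton (image tag and package are illustrative):

```
# Dockerfile -- hypothetical minimal repro for a CPU-spin report
FROM alpine:3.3
RUN apk add --no-cache inotify-tools
# Watching a bind-mounted volume exercises the osxfs event path:
CMD ["inotifywait", "-m", "-r", "/data"]
```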
Our approach is to focus on functionality and correctness first, and then improve performance over time.
We're building up a suite of performance benchmarks to help us track progress -- are there particular benchmarks that you would recommend we add? I'll certainly add "CPU load while idling" to the list.
Oh, really? I've been running boot2docker-xhyve for months now, and performance-wise it's been far better than VirtualBox. I wonder if it's a quirk of how Docker for Mac is set up?
Sounds promising. But I'd like to see Docker work with Microsoft to produce something even better for Windows, using the new Windows Subsystem for Linux (WSL). With WSL, Docker and Microsoft should be able to bring Linux-based Docker containers to Windows, without the performance hit and resource fragmentation that inevitably come with virtualization. True, WSL doesn't support namespaces and cgroups, but IIUC, Windows itself has equivalent features. So the Docker daemon would run under Windows, and would use a native Windows API to create containers, each of which would use a separate WSL environment to run Linux binaries. I don't know how layered images would be supported; Microsoft might have to implement a union filesystem.
Native Docker support in Windows will be available in the next Windows server release, and is already available in the Windows technical preview bits (since TP4).
What is the use case though? What would be even better is if MS created a "windows container" that could run under Linux, then you could just ditch windows all together.
I don't see big companies using something this hackish for containers that are running on servers anyway. For working on the desktop this might come in handy for devs, but honestly I think MS should focus their energy on something else.
Why do people value that so much? I really don't care if a tiny VM is running in the background.
Also, running that VM gives me more confidence that it will also run on the production machine (since they use the same kernel and the same docker version).
The only problem I had with docker was that it did not use to support shared volumes outside the home folder on Mac (I think they changed that now, but I'm not sure).
There is still a tiny VM running. This one happens to be the Native OS X Hypervisor Framework. From the docs:
> Hypervisor (Hypervisor.framework). The Hypervisor framework allows virtualization vendors to build virtualization solutions on top of OS X without needing to deploy third-party kernel extensions (KEXTs). Included is a lightweight hypervisor that enables virtualization of the host CPUs.
I've had a great run with VirtualBox, between Vagrant and Docker Machine. But I can't lie, I won't miss its installer, uninstaller, OS X kernel extensions, questionable network file sharing, and more. Removing a big blob of software between me and my virtualization-ready CPU is progress.
Then Docker for Mac is the one-two punch. Simpler virtualization, extremely rich containerization.
If you want to run services in docker containers with Docker Toolbox (e.g. a mysql db), and you want the db stored on the Mac host, then you have to worry about 2 layers of folder mounts (one from host -> vm, one from vm -> container) and another 2 layers of port forwarding (same as above) to make it 'feel' like you're running mysql locally.
With the beta, all of that is taken care of for you with a couple of settings, and it's just much simpler to get up and running.
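As a sketch, the whole MySQL example above reduces to one ordinary compose file (image tag, password, and paths are placeholders):

```yaml
# docker-compose.yml -- data lives in a folder on the Mac host,
# and the port is published straight to localhost: no VM hops.
version: '2'
services:
  db:
    image: mysql:5.7
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - ./mysql-data:/var/lib/mysql
```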
VirtualBox and VMWare are hardly "tiny." I've never had a good experience with VirtualBox on any platform. Thing is constantly broken and endlessly updated to break in newer, less Google-able ways and just causes never-ending grief in unexpected places for me.
I've been using the Mac Beta for a few weeks and I can also say it's great. Install is easy and it just works. It's such a relief being able to do dev work directly on my machine without docker-machine/VirtualBox. I've been hitting it with a variety of Ubuntu-based containers without any issues.
Author here. I was using that prior to getting in the beta. Tremendous work went into that driver, so I'm happy to see the techniques get picked up elsewhere.
The touted "native" is not all it's cracked up to be. Maybe Windows is a plus that brings a few souls into the fold, but I've been looking for OSX performance ratings and have only found some comments here and there that match my experience.
On my El Capitan machine, the exact same setup takes roughly ten times as long to do its thing in Docker Beta as it did in my more flexible vbox setup. A Java stack (Jenkins) starts in about 1.5 minutes, but with Docker Beta it takes about 15 minutes!
So, my docker-machine setup lets me see my hosts with vbox, manage them with docker-machine, and get the NFS tweaked with docker-machine-nfs. boot2docker OS is nice and small and works.
So for me this is quite a contrast with the 'native' Alpine-image-based Beta, which in my 5-hour stint did not offer much of a way to overview or inspect things without getting new/more gear.
I have the Docker for Windows Beta, but when I installed it on my Surface Pro 3, it immediately caused the device to get stuck in a BSOD loop. I think it has something to do with Hyper-V and connected standby, but I'm not 100% sure. Wasn't able to find an answer because it's so early on. I really want to get into Docker, but that bug has killed any possibility of me adopting it as of right now. I did install it on a desktop (which I lightly use) and it worked fine. With the new Windows 10 Insider build on that desktop, though, Docker constantly asks permission to run.
Anyhow, I really hope someone does a good overview of Docker for Windows beta, as well as the Ubuntu environment within Windows 10 now... Seems like OSX gets all of the dev love, so I'm wishing and hoping for a really nice Windows overview, as I am currently having a hard time with both. Neither, as of right now, works well.
if you want a more technical review of the Windows beta, read this: http://docker-saigon.github.io/post/Docker-Beta/
But note that beta 8 was released a few days after that review and already introduced some changes.
Also, for the Windows beta, it very much is still a beta.
I'm sorry for your laptop experience and glad you got it working on your desktop. There are good chances it is related to Hyper-V but would need more info to debug. Could you send us your logs to [email protected]?
Docker for Windows still requires elevated privileges to start. This will be addressed in a couple of releases.
I started playing around with Docker for Mac in an attempt to get my whole dev environment set up in Docker. It was really slick, especially being (re-)introduced to docker-compose which makes connecting containers very easy.
There is a ton of potential there. My biggest challenge is that the documentation hasn't quite caught up to all of the interesting stuff that is going on. I'd certainly welcome some more opinionated answers for how to develop on Docker. Specifically: how to not run apps as root, as almost all examples use root and permissions are annoying if you don't do so; how to use docker containers for both dev and prod; best practices for getting ssh key access into a container during the build phase.
But much of it Just Works at this point, I'm pretty confident that the best practices will catch up in time.
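On the root question specifically, one common pattern (the user and paths are illustrative, not an official recommendation) is to create an unprivileged user in the Dockerfile and switch to it last:

```
# Dockerfile fragment -- drop root before the app starts
RUN useradd --system --create-home app
COPY . /srv/app
RUN chown -R app:app /srv/app
USER app
CMD ["node", "/srv/app/server.js"]
```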
I'm a node.js developer. I understand the benefit of using docker for deployments or CI testing, but I have yet to be convinced of the benefits of using it for development on my local machine.
I install node, postgres, and redis natively and it all works fine. What benefits does docker provide to my workflow?
>I install node, postgres, and redis natively and it all works fine. What benefits does docker provide to my workflow?
Isn't it obvious?
With docker (or vagrant, or at least a VM, etc.) you can have the SAME environment as the deployment one. If you run OS X or Windows, your direct local installs will differ in numerous ways from your deployment. The same goes if you run Linux but not the same distro or the same release.
And that's just the start.
Who said you'd be working on only one deployment/app at a time? If you need two different environments -- it could even be while working on version 2.0 of the same web app with new technologies -- e.g. one with Node 4 and one with Node 5, or a different postgres version, you suddenly have to juggle all of these in your desktop OS.
Now you need custom ways to switch between them (e.g. you can't have 2 postgres instances running on the same port at the same time), some will be incompatible to install together, etc.
Without a vm/docker you also don't have snapshots (stored versions of the whole system installed, configured, and "frozen")...
Having dev servers set up the same as production makes sure that none of the little gotchas pop up that can cause problems. You can more readily guarantee that the version of every part of the stack is the same, and that the configurations are the same. One of the things this lets you do is work deeper in the stack without nearly as many concerns. You can test config tweaks, hand-rolled builds, etc., knowing that a rollback is just an rm -rf and untar away, or a finalized config change is expressed as a single diff.
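The rollback really is that cheap; a tiny self-contained sketch (paths and file contents made up):

```shell
# Snapshot a config dir, make a risky tweak, then roll back from the tarball.
mkdir -p app/conf && echo "workers=4" > app/conf/app.ini
tar czf app-conf.tgz app/conf            # snapshot before experimenting
echo "workers=999" > app/conf/app.ini    # the experiment goes wrong
rm -rf app/conf && tar xzf app-conf.tgz  # rollback: rm -rf and untar
cat app/conf/app.ini                     # -> workers=4
```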
I run a docker-compose file and never, ever have to install node/postgres/redis myself, nor make sure I'm using the right version or have the right configuration files.
I pass the repo, with the Dockerfile and docker-compose.yml, over to another developer and they do the same thing. They don't spend hours getting node/postgres/redis/whatever set up and then fight environment issues to match my environment, or staging, or production.
Installing them natively is clunky; they are managed in all different ways depending on how you install them. They need to be started and stopped manually. You are also turning your machine into a "snowflake" with unique combinations of OS and service versions.
With docker, you can make a Dockerfile for your project and make it painless and consistent to run anywhere. You can also create a docker-compose file if you need other services like redis. It really is the holy grail once it clicks.
Do you put your applications in production yourself, or do you have an operations team which takes care of it for you?
If you develop everything using Docker, running the containers on another (Linux) computer is much easier as everything is already prepared and ready to bundle up and deploy.
If you develop on Linux and deploy to Linux, don't you feel you will catch kernel and other OS-specific issues much faster? i.e. Before they become a problem in production?
Wait, not even another kind of VM? Have you ever had to work with a team that has an unreliable environment? And had to walk them through debugging an error message for installing one of them, or wanted to add something with further dependencies?
(Not to say Docker's immune from that; the sudden deprecation of docker-compose for docker-machine was a nasty surprise.)
When you have 15 of those, things start to make sense. I used vagrant in school just so that I wouldn't have any lasting tweaks of DBs and the weird things you end up doing. Also, with a provisioning script, I can get my projects running to this day. My snobol, smalltalk and scheme projects can all be run by just running vagrant up. I don't have to make sure that my current machine has all of the dependencies.
When we developed an angular and java site, I set up vagrant to configure tomcat, node, java, and all of the plugins required to get tomcat and maven to be nice together. Did it once, and then everyone else with a unixy platform was able to avoid spending time dealing with that. Now that the class is over, all of that is removed from my machine, but I can always crank it back up in the time it takes to install those dependencies.
For anyone wanting to use all this cool stuff without waiting for the release, check out nlf/dlite https://github.com/nlf/dlite which has the xhyve implementation already.
VMWare Fusion does a few extremely useful things that Docker doesn't - for instance it can hook into an existing Boot Camp install and load it as a VM. It may be a bit heavy for the (Linux) applications Docker excels at, but for me it's worth the money just for the Windows VM support.
Anybody know if there is a full guide for the migration from toolbox to Mac beta? I've installed the beta, but I'm wondering if there's old cruft that I'll need to uninstall to be completely on the new.
I signed up for the Beta, but I have not gotten access yet. I was hoping to see a walk through of an example on your review so I could gauge how easy it is compared to the old docker setup on osx.
So if I'm using dlite now, and I want to transition to Docker for Mac once I get into the Beta...what do I need to do? Fully uninstall dlite? Can they be run side by side? (assuming no)
I am not sure if they conflict; there may be an issue with both trying to use the same docker socket, but you can probably just start one after you stop the other.
Good review. One thing mentioned is that the author was able to remove Kitematic amongst other things. Kitematic is a GUI for Docker. There is currently no replacement for it.
I got the impression that this is not that useful for development due to very weak networking support.
For example, I use a single docker installation in a VM to test several unrelated projects, all of them providing a web server on port 80/443. I do not want to remap ports, so as not to deviate from the production config. Instead I added several IPs to the VM and exposed the relevant containers on their own IP addresses. Then for testing I use a custom /etc/hosts that overrides the production names with the VM's IP addresses. This works very nicely.
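Concretely, the override file on the Mac might look like this (addresses and hostnames are illustrative):

```
# Custom /etc/hosts for testing -- production names point at the VM's
# extra addresses, one per exposed container
192.168.99.101  www.site-a.example
192.168.99.102  www.site-b.example
```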
But I do not see that something like this is possible with "Docker for Mac".
My only issues so far have been: 1) the docker.local issue on mac (as a few others have mentioned) and 2) I still get some VPN issues with Cisco AnyConnect.
There are some reserved suffixes, including .localhost and .test: https://iyware.com/dont-use-dev-for-development/.