I have been using Docker for Windows on a Windows 10 64-bit machine and it has worked really well for me.
However, in my case, I have a pretty beefy Windows 10 64-bit development server which I use to run several development VMs. Development takes place on a weaker laptop which interacts with the VMs over the network.
Previously, I had an Ubuntu VM running the latest version of Docker with bridged networking enabled. Effectively, that makes the VM look like a separate machine on my network with its own IP address, avoiding port collisions with the other VMs.
With Docker for Windows, this does not seem to be possible (yet). I added an “external” Hyper-V network, added an extra network adapter to the MobyLinuxVM machine, and connected it to the external network. However, when I attempt to start Docker for Windows, it neither starts nor shuts down properly and I have to force the VM to shut down.
It would be nice if the Docker for Windows networking settings allowed us to run the MobyLinuxVM in “bridged” (VirtualBox term) / “external” (Hyper-V term) mode.
If I understand the question, I think I have a similar ask. Specifically, I would like to have multiple nodes on my laptop for development, to do swarm-related work locally without pushing to Azure or another cloud for that matter. I’m trying to execute the demo that was presented at DockerCon 2016 using multiple nodes (node1, node2, node3) to create a cluster. I tried leveraging docker-machine to create new machines with the hyperv driver, but to no avail.
Thanks for your comments and suggestions. We used to configure the MobyLinux VM with an internal switch and NAT, but that caused a lot of problems on many users’ systems, so we replaced it with the current mode (called hostnet or VPN mode) where connections from the VM are re-originated from the host.
We had also considered configuring an external switch as you suggest, but that has two issues:
Switching between wired and wireless network interfaces will not work easily. AFAIK, an external switch can only have one external interface attached/associated with it, so we would have to reconfigure the networking setup every time the external connectivity of the machine/laptop changes.
We’d have to carefully consider existing setups a user may have. They may have already configured an external switch for other VMs, and Docker for Windows would have to play nicely with that. This quickly becomes pretty cumbersome as well.
Could you describe your use case in a bit more detail? Would you like your development machine’s VMs to connect to Docker for Windows, or Docker for Windows to connect to the VMs on your development machine?
My use case is actually super simple. I have a development server with a lot of RAM and a pretty beefy CPU. I want to install Docker for Windows and attach it to a Hyper-V external switch so that the VM gets an IP address on my network.
Currently, as a workaround, I created an Ubuntu VM connected to a Hyper-V external switch and installed Docker in it. This makes the machine look like a real physical machine on my network.
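For reference, a setup like this requires the Docker daemon inside the Ubuntu VM to listen on TCP in addition to the local socket. The original post doesn’t say how that was configured; one common way is an `/etc/docker/daemon.json` along these lines (note that 2375 is plain, unauthenticated TCP and should only be exposed on a trusted network; TLS on 2376 is the safer option):

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
```

On systemd-based distributions the default unit file already passes `-H` to dockerd, which conflicts with a `hosts` entry in daemon.json, so a drop-in override for the service may be needed as well.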
In terms of the development side, I use the docker client like so: docker -H tcp://192.168.1.106:2375 ps
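As an aside, the daemon address can also be set once via the DOCKER_HOST environment variable instead of being passed with -H on every invocation (the IP and port here are just the ones from the example above):

```shell
# Tell the docker CLI which daemon to talk to; every later `docker` command
# (ps, build, cp, ...) run in this shell will use the remote daemon.
export DOCKER_HOST=tcp://192.168.1.106:2375
echo "$DOCKER_HOST"
```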
However, I’ve built a small tool that watches the project I am working on, sends the changes to the Docker server using the file-copy API, and runs tests and other tasks.
This development method has worked really well for me and I can do my work on a light laptop.
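A minimal sketch of that kind of watch-and-sync step, assuming the remote daemon from the example above and a container named myapp (the container name and the test command are hypothetical, not from the original post), using only a marker file and find:

```shell
# Set up a demo tree plus a sync marker that is older than the source file,
# so the change detection below has something to find.
mkdir -p demo/src && echo 'change me' > demo/src/a.txt
touch -t 200001010000 demo/.last_sync

# List the files modified since the last sync.
changed=$(find demo/src -type f -newer demo/.last_sync)
echo "changed: $changed"

# In the real tool this check would run in a loop, and a non-empty $changed
# would trigger something like (hypothetical container and command names):
#   docker -H tcp://192.168.1.106:2375 cp demo/src myapp:/app/src
#   docker -H tcp://192.168.1.106:2375 exec myapp make test
#   touch demo/.last_sync
```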
The current goal of Docker for Windows is to provide a really great out-of-the-box experience for most users. For more complex setups like yours, I think the recommendation would be to use Docker Machine.
However, would it be possible to add an optional second interface, with a Settings panel option to select which switch that interface connects to? That would pretty much solve my problem and perhaps enable a lot of other use cases.
In my case, on my development machine I have Hyper-V running with one external switch, so anybody in the office can access the demo VMs.
After setting up Docker my networking is a mess: there is a bridge, there is a Docker NAT, and, even worse, I have lost the ability to use Vagrant non-interactively because it now asks me to choose which switch to use.
I do understand that this is not your problem, but it would be very nice if you gave us a choice of what should be done with the virtual switches.
Indeed, I have the same need. I would like to expose the VM on the external network.
It would be enough to be able to create a transparent network, or to choose a custom Hyper-V network adapter.
I’m looking for the same capability here. If I’m deploying a containerized service, it’s pretty important that I’m able to access that over the network, and the inability to do so prevents the use of Docker for all but the most trivial of use cases. Is this even on the roadmap? External network connectivity seems like a basic feature and I’m surprised to find it missing.
Is there any update please?
What I need to do is multicast from a Docker container to the host network, but unless I set the virtual switch to an external network, it does not work.
Problems I am dealing with:
When I manually switch to an external network, drive sharing no longer works because of the firewall, even though the firewall is turned off
When I restart the machine, the default settings are restored