I’m trying to create a docker windows network driver (nat) so that the windows docker containers attached to this subnet are available for a hyper-v vm using the same vswitch and subnet.
The problem: although I can create such a Docker network, it is available in the Hyper-V switch manager, and I can allocate IP addresses to both the Hyper-V VM and the Windows container, communication between them isn’t working.
What works: the Hyper-V “Default Switch”, which is an internal network switch, works properly, but I don’t know why. In the output of docker network ls it seems to be defined with an “ics” network driver, but Microsoft doesn’t say anything about this kind of driver in its docs.
I would like to create a custom internal switch like the Default Switch.
What I’ve tried:
setting Docker’s “bridge”: “none” daemon option so that the default nat network is not created
creating a network using Microsoft’s interface option so it is accessible in Hyper-V:
docker network create -d nat -o com.docker.network.windowsshim.interface='vEthernet (Docker)' -o com.docker.network.windowsshim.networkname="Docker" docker-NAT
Here I tried more options: setting the driver to “ics” (whatever it is; I’m not a mentalist and it’s not documented, but it’s what the Default Switch uses), and setting com.docker.network.windowsshim.interface to the name of an internal adapter defined in Hyper-V.
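For anyone who wants to reproduce the first step: disabling the default nat network is done in the Docker daemon configuration (on Windows the default location is C:\ProgramData\docker\config\daemon.json). This is just a sketch of the relevant fragment:

```json
{
  "bridge": "none"
}
```

After restarting the Docker service, no default nat network is created, and you can create your own networks with docker network create.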
But pinging isn’t working so I’m confused and don’t have a clue.
Since I usually work on Linux, I suggested asking here, where more people can see your question. Meanwhile I tried to learn more about Windows container networks, but after days I wasn’t able to do what you want, so I am going to share what I have learned. I know you already know most or all of this, but hopefully it will help someone give you a better answer.
But first of all, a question:
Do you really want to run containers in the same subnet as Hyper-V virtual machines, or would it be enough if the containers could communicate with virtual machines in a different subnet?
What I have learned so far:
Here is a good general description of Windows container networks in Microsoft’s documentation.
There is a list of supported Docker network drivers on Windows in the following documentation:
Although the driver of the “Default Switch” is not documented here, as you already pointed out.
Do you know @vrapolinario why that could be?
So I opened the source code of Moby and found references to ICS, for example:
This is at least one clue to get closer to the reason why “ics” is different from other drivers.
In Docker you can use container names as hostnames instead of IP addresses. It looks like in the case of an “ics” network Docker would not launch a DNS server. I didn’t learn much more about this, so I could be completely misinterpreting this small part of the source code.
Then I found a reference to ICS in one of the libraries of Microsoft:
Then I stopped reading the source code.
I also found the “Advanced” part of the documentation useful
Some of the concepts are new to me, so I just tried to run the commands and see what happens. Trying your command and some of the commands from the documentation, I started to feel that running Docker commands and using the Hyper-V GUI might not be enough, so I tried some PowerShell commands too.
I have noticed that the “Default Switch” doesn’t have an adapter (or I couldn’t find it), and I couldn’t learn more about it by running the following commands in PowerShell (v7):
Get-NetAdapter
Get-VMSwitch
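In case anyone wants to reproduce the inspection, this is roughly what I ran (a sketch; note that Get-NetAdapter has an -IncludeHidden switch that also lists adapters hidden by default, although the “Default Switch” still didn’t show up as a regular adapter for me):

```powershell
# List all Hyper-V virtual switches and their types (External / Internal / Private)
Get-VMSwitch | Select-Object Name, SwitchType

# List network adapters, including hidden ones
Get-NetAdapter -IncludeHidden
```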
Eventually I gave up. It is possible that the above-mentioned posts and documentation hold the key to the solution and I just didn’t read them carefully enough, or didn’t understand them because my brain still works better with Linux and I couldn’t spend enough time investigating the issue.
I think this is all I can share for now. Since I am interested in this issue and would be glad to learn more about Windows containers, if I find time to play with the network again, I will come back with more ideas.
Hey folks, thanks for adding me. This is an interesting scenario. @n1gthlybu1ld, why is it that you want the VMs and containers on the same virtual network on the same host? I have to test this, but the option you should use is Transparent. NAT means you use WinNAT, which translates addresses from the NAT network, binding each container port to a specific host port. For example: an external client can access port 80 of your container via port 8080 of the container host IP (http://:8080 → http://:80). Since I assume you want regular network access, you don’t want the NAT option. If you have a simple environment, the Transparent driver will do the job. If you need additional options, such as network isolation for the containers between multiple hosts, you should use the Overlay option, which relies on VXLAN. I hope this helps.
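To illustrate the two options (sketches only; the image name and the interface name “Ethernet” are placeholders you would adapt to your host):

```
# NAT (WinNAT): container port 80 is reachable via host port 8080
docker run -d -p 8080:80 <your-image>

# Transparent: containers attach directly to the external network
docker network create -d transparent -o com.docker.network.windowsshim.interface="Ethernet" transparent-net
```

With the transparent driver the containers get addresses on the same L2 segment as the host’s adapter, which is why no port mapping is involved.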
Doesn’t the transparent network work with physical interfaces only? If I understood correctly, @n1gthlybu1ld would like to have an internal vswitch, but an external vswitch works with physical networks; he didn’t mention that here, and I forgot to mention it in my post too.
Yes, I need this only for a test environment, and I didn’t want to bind the network to a physical interface. @vrapolinario, are you sure it doesn’t work with NAT networks? I don’t actually need a DHCP server, just internal communication. If it only works with transparent or overlay, then I accept that, no problem.
But here comes the other interesting thing: how this ics network driver works. In Hyper-V the “Default Switch” is an internal vswitch, and it’s also available in Docker as an ics network, which messed me up a little. What is this ics network driver exactly? As @rimelek mentioned before, he found it in the source code, but there is nothing about it in the official docs.