Networking issues with VMware in Docker-in-Docker

I’m trying to set up my TeamCity server to run VMware in a container, so that I can run Packer builds in my continuous integration environment.

If I run headless VMware Player inside a container, it works well. The host needs the VMware kernel modules installed and the vmnet interfaces up, along with the VMware DHCP server serving leases on those interfaces.
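For reference, this is roughly how I check the host-side prerequisites before starting the container (assuming the default NAT interface, vmnet8; the vmware-networks helper may differ between VMware versions):

```
lsmod | grep -E 'vmmon|vmnet'     # VMware kernel modules loaded
ip addr show vmnet8               # NAT interface up, with its host-side address
sudo vmware-networks --status     # vmnet DHCP/NAT services running
```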

To run the container I have to use --privileged and --net=host so that VMware can see everything it needs. (Security is not an issue in this environment.) This works fine: the guest VM receives an IP on the NAT network (vmnet8) from the DHCP server running on the container host. I can run my Packer builds and everything is peachy.
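For concreteness, the invocation looks roughly like this (the image and template names are placeholders, not the real ones I use):

```
docker run --rm --privileged --net=host \
  my-packer-vmware-image \
  packer build template.json
```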

The problem turns up one step further down the line:

My TeamCity agents run inside containers. The containers are started with --privileged and --net=host as above, and can see the kernel modules, devices, and network interfaces as expected. The Packer containers then run inside the build agent container, as above, so I end up with a privileged container with host networking running inside another privileged container with host networking.
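Schematically, the nesting looks like this (image names are placeholders, and I’ve left out the details of how Docker itself is made available inside the agent container):

```
# Outer container: TeamCity build agent
docker run -d --privileged --net=host my-teamcity-agent-image

# Inner container, started from inside the agent container by the build step,
# with exactly the same flags as when it runs directly on the host
docker run --rm --privileged --net=host \
  my-packer-vmware-image \
  packer build template.json
```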

It’s the same Packer + VMware container image in both cases.

In the latter case VMware runs, but the guest is unable to receive an IP address from the DHCP server. In all other respects the behaviour is identical.

Does anyone here have an inkling as to what might be different about the second case compared to the first, such that the guest is unable to get an IP?