I have a hunch as to what is going on here:
When Docker creates the bridge for a user-defined network, it uses the “gateway” address from the network config as the IP address of a HOST-side virtual interface that is attached to the new network by default.
Thus, by the time the network driver tries to set up networking for your VPN container, the address you wanted has already been taken.
To see this for yourself, try creating the network by hand:
$ docker network create \
    --driver=bridge \
    --subnet=172.20.0.0/16 \
    --gateway=172.20.0.1 \
    testnet
...
$ docker network inspect testnet
[
    {
        "Name": "testnet",
        "Id": "a742e6afc6c8d3ac1919d0ba4820686b06355af2ce1aa621b54144a5d6320fc9",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.20.0.0/16",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Containers": {},
        "Options": {}
    }
]
Then immediately call ifconfig on the host:
$ ifconfig
br-a742e6afc6c8 Link encap:Ethernet  HWaddr 02:42:8e:0c:ad:30
          inet addr:172.20.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Unfortunately, I have yet to figure out how to work around this behavior in any sane way…
There are a couple of hacks you can use, such as manually creating the network bridge and then deleting the gateway address from the host interface, but they are all pretty kludgey.
There is also pipework, which is a script that streamlines the kludgey part, but I wouldn’t call it a good solution.
If anyone knows of a better way to solve this, I would LOVE to hear it!