Docker Community Forums


Containers accessible on LAN via ipv4 and ipv6


(Vermilionwizard) #1

I have a pretty vanilla install of Ubuntu Server 16.04 with Docker CE on a machine. I am trying to get containers that connect to my local network and are accessible by other computers (with a “bridged” network configuration, to borrow terminology from VirtualBox). I need both ipv4 and ipv6 to work on the containers: they need to be able to communicate with other containers and other machines on the network, and the containers should also be reachable via either ipv4 or ipv6 from the other machines on the network. (The Docker host itself is the exception; I don’t care whether the containers can communicate with their own host.)

I’ve tried a few different things to get this working; so far the closest I’ve come is following this blog post on the macvlan driver: https://hicu.be/docker-networking-macvlan-bridge-mode-configuration

I created a network with the following command:

docker network create -d macvlan \
    --subnet=10.50.0.0/24 --gateway=10.50.0.1 \
    --subnet=2***:****:****:****:babe::/80 --gateway=2***:****:****:****:babe::1 \
    -o parent=ens2 \
    --ipv6 \
    lan
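For reference, the test container seen in the inspect output below was started on that network with something roughly like this (alpine is just an example image here; I let IPAM assign the addresses rather than passing --ip/--ip6):

    # Start a long-running test container attached to the macvlan network.
    # IPAM hands out the first free addresses from each configured subnet.
    docker run -d --name test-container01 --network lan alpine sleep infinity
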

Here is the result of docker network inspect lan

[
    {
        "Name": "lan",
        "Id": "blah blah blah",
        "Created": "2018-02-26T17:37:26.716126104-08:00",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.50.0.0/24",
                    "Gateway": "10.50.0.1"
                },
                {
                    "Subnet": "2***:****:****:****:babe::/80",
                    "Gateway": "2***:****:****:****:babe::1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "b83fe8f36f770468c495b446961f4f987bbfcc6d50654705d6a0f79425efd390": {
                "Name": "test-container01",
                "EndpointID": "4d4310a4fd237a3022f5754667a313a609ec93772276a24f9b0cb9f3781af34e",
                "MacAddress": "02:42:0a:32:00:02",
                "IPv4Address": "10.50.0.2/24",
                "IPv6Address": "2***:****:****:****:babe::2/80"
            }
        },
        "Options": {
            "parent": "ens2"
        },
        "Labels": {}
    }
]

Here’s the weird part: ipv6 routing seems to work fine in both directions. The ipv4 routing does not work at all — the container cannot connect to any ipv4 address, inside or outside my LAN, and no machine on my LAN can connect to the container: pings time out, and so on. Over ipv6, on the other hand, I can connect to the container from another machine just fine. The lack of ipv4 connectivity is a significant limitation.

I tried setting net.ipv4.ip_forward=1 in /etc/sysctl.conf, but that doesn’t seem to have any effect. Setting net.ipv6.conf.all.forwarding=1 seems only to break ipv6 routing on the host, which is undesirable but not really an issue here.
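In case it matters, I applied and verified the sysctl changes without rebooting, along these lines:

    # Apply the forwarding setting immediately, then read it back.
    sudo sysctl -w net.ipv4.ip_forward=1
    sysctl net.ipv4.ip_forward    # expect: net.ipv4.ip_forward = 1

    # Reload everything from /etc/sysctl.conf to confirm it persists.
    sudo sysctl -p
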

Running ip route inside the container gives the following:

default via 10.50.0.1 dev eth0
10.50.0.0/24 dev eth0  proto kernel  scope link  src 10.50.0.2

which seems correct?

If I spin up a second container in the same network, they are able to ping each other with both ipv4 and ipv6.
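The container-to-container test looked roughly like this (the second container’s addresses were assigned by IPAM, so the exact values below are from my run and may differ):

    # Second container on the same macvlan network.
    docker run -d --name test-container02 --network lan alpine sleep infinity

    # Both of these succeed, container to container.
    docker exec test-container01 ping -c 3 10.50.0.3
    docker exec test-container01 ping6 -c 3 2***:****:****:****:babe::3
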

So what am I missing in my ipv4 configuration?