How to prevent Docker from creating a virtual interface on the wrong private network

Before starting any containers, this is my routing table:

default via 10.0.2.1 dev ens3 proto dhcp src 10.0.2.218 metric 202 mtu 9000 
10.0.2.0/24 dev ens3 proto dhcp scope link src 10.0.2.218 metric 202 mtu 9000 
169.254.0.0 dev ens3 proto dhcp scope link src 10.0.2.218 metric 202 mtu 9000

And my interfaces:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 02:00:17:0a:bf:21 brd ff:ff:ff:ff:ff:ff
    altname enp0s3

After starting a docker container, I get the following (note the new routes on veth* and docker0):

default via 10.0.2.1 dev ens3 proto dhcp src 10.0.2.218 metric 202 mtu 9000 
10.0.2.0/24 dev ens3 proto dhcp scope link src 10.0.2.218 metric 202 mtu 9000 
169.254.0.0 dev ens3 proto dhcp scope link src 10.0.2.218 metric 202 mtu 9000
169.254.0.0/16 dev veth68f8dff scope link src 169.254.198.196 metric 227 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1

And the following new interfaces:

3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:bf:6d:13:61 brd ff:ff:ff:ff:ff:ff
27: veth68f8dff@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 32:13:97:6d:89:3b brd ff:ff:ff:ff:ff:ff link-netnsid 0

So a virtual interface (veth68f8dff) was created and bridged to docker0, which I expected, but the IP of that interface is on 169.254.0.0/16 instead of the Docker network (172.17.0.0/16). Additionally, I now have an extra route to 169.254.0.0/16, which prevents me from reaching any address in 169.254.0.0/16, including my nameserver at 169.254.169.254.
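
One way to see the conflict directly is ip route get, which asks the kernel which route it would actually pick for the nameserver address, for example:

# Ask the kernel which route would carry traffic to the nameserver.
# With the 169.254.0.0/16 route on the veth in place, this picks
# veth68f8dff instead of ens3, so 169.254.169.254 becomes unreachable.
ip route get 169.254.169.254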

Is Docker creating the virtual interface with the wrong IP address? If so, why, and how can I fix it?

What is your host operating system and how did you install Docker?
I can’t see any “veth*” routes at all on my machine (Ubuntu 20.04) after running a container. Without containers the interface should not even exist, so I guess you have a container with a restart policy, but that doesn’t explain the IP address. Does your container have multiple networks?
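
For example, something like this would show which networks the container is attached to (the container name is a placeholder):

# Print the networks a container is connected to.
docker inspect -f '{{json .NetworkSettings.Networks}}' <container-name>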

Sorry, I meant to write “after starting a docker container” instead of “after starting a docker daemon”. Fixed it now.

I am using NixOS as the host operating system.

Thanks!

I am not familiar with the Docker installation on NixOS. What instructions did you follow to install Docker?

This could be specific to NixOS. If it is, you may want to ask for help on the NixOS forum.

What docker networks do you have on NixOS?

docker network ls

One of my previous questions is still relevant here: what instructions did you follow to install Docker?

I don’t think NixOS does anything exotic to install Docker. I just enabled Docker as described here: Docker - NixOS Wiki. That doesn’t create a daemon.json or anything. Though I am running on arm/aarch64.
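
To double-check that the daemon really is running on default settings, something like this should do on the host:

# There should be no custom daemon configuration with the defaults...
cat /etc/docker/daemon.json                # expected: no such file
# ...and this shows which flags the NixOS module actually passes to dockerd.
systemctl cat docker.service | grep -i execstart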

I am just running a container like this:

docker run \
  --rm \
  --name='osmand-tracker' \
  --log-driver=journald \
  -e 'DATABASE_URL'='/opt/osmand-tracker/data/osmand-tracker.db' \
  -e 'DEV_LOG'='0' \
  -p '127.0.0.1:8002:9000' \
  -v '/var/lib/osmand-tracker/data:/opt/osmand-tracker/data' \
  '--init' \
  simaom/osmand-tracker:arm-latest \
  'osmand-tracker'

So nothing strange. docker network ls shows:

$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
ad939b32fd6d   bridge    bridge    local
a78096e41525   host      host      local
fe958b8ef851   none      null      local

And the strangest thing for me is the output of docker network inspect bridge:

[
    {
        "Name": "bridge",
        "Id": "ad939b32fd6dfaf81a2fac78ef9548fe52890781d13d0b1edfab188b77a63139",
        "Created": "2022-05-12T09:54:13.63854824Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "2b9bf8c0e6df727c8fe1624e71fccc45c053b44427c8e614f84c0db0df0130a4": {
                "Name": "osmand-tracker",
                "EndpointID": "cec9dae963574b2744b936b743dc5d849537fe62389a352d939a44cc268953df",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "86d421b1abe43db279aec9240392ed4f77f2645f34e1af9c62e1f1761cd25a7c": {
                "Name": "panas",
                "EndpointID": "b6844aa420e8c85bd2a14ba2eb84527e379c2e0fb73183f2d72329bef1808fb3",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Which shows:

            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]

So I don’t know why that route and those IPs are being configured like that.
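
One thing I notice is that the container side does get an address on 172.17.0.0/16 (172.17.0.3 above); it is only the host side of the veth pair that ends up with a 169.254.x.x address and route. For example:

# The container's address on the default bridge, as assigned by the daemon:
docker inspect -f '{{.NetworkSettings.IPAddress}}' osmand-tracker
# The host-side veth and the bridge itself:
ip -4 addr show dev veth68f8dff    # unexpectedly carries a 169.254.x.x address
ip -4 addr show dev docker0        # 172.17.0.1/16, matching the IPAM config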

Yeah, this could be some NixOS weirdness, but I don’t see any Docker config, so I assume the daemon is running with default settings. I will check that, though.

Thanks

The way it can be installed seems “exotic” to me 🙂 But the difference is not necessarily in Docker itself; it can just be related to it, for example some kind of security feature or firewall.

I checked some of my Docker installations and I cannot see routes like yours. I hope someone corrects me if I am wrong, but if you have veth* in the list of routes, that means your container has a dedicated route even though the other route should be enough.

It is possible that there is a Docker configuration parameter to fix this, but I can’t think of any. I can’t reproduce it, so I can’t try to fix it, but I am very curious now.

What happens when you run an official container like bash without port forwarding or any other parameters?

docker run --rm -it --name test1 bash

I checked some of my Docker installations and I cannot see routes like yours. I hope someone corrects me if I am wrong, but if you have veth* in the list of routes, that means your container has a dedicated route even though the other route should be enough.

Yeah, I also think the other routes should be enough, but it also seems that somehow the routes are created with the wrong subnet? They are not even on the 172.17.0.0/16 network.

Starting that bash container added another route:

169.254.0.0/16 dev vetheb2c9e6 scope link src 169.254.157.227 metric 237

Also not on the 172.17.0.0/16 network. 😑

And the virtual interface created:

40: vethf53f2a2@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 02:82:20:7e:25:74 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet 169.254.155.247/16 brd 169.254.255.255 scope global noprefixroute vethf53f2a2
       valid_lft forever preferred_lft forever
    inet6 fe80::82:20ff:fe7e:2574/64 scope link 
       valid_lft forever preferred_lft forever

Also the wrong (?) IP address?
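
For what it’s worth, 169.254.0.0/16 is the IPv4 link-local range (RFC 3927), which Docker’s IPAM does not use by default; addresses like these are usually assigned by a DHCP client falling back to zeroconf. So it looks like something on the host is running a DHCP client on the veth interfaces. A quick check, assuming dhcpcd is the client in use:

# See whether a DHCP client is managing the veth devices on the host.
ps aux | grep -i dhcp
journalctl -u dhcpcd | grep -i veth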

OK this was caused by this: Using Docker on AWS EC2 breaks EC2 metadata route because of DHCP · Issue #109389 · NixOS/nixpkgs · GitHub
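
In short, as described in that issue: the NixOS default of running DHCP on every interface also covers Docker’s veth devices, which then pick up IPv4 link-local addresses and the 169.254.0.0/16 route. Restricting DHCP to the physical interface (or denying veth* in dhcpcd, as discussed there) removes the stray routes; afterwards something like this can confirm it:

# After limiting DHCP to ens3 and restarting, the stray link-local routes
# should be gone and the nameserver address should be routed via ens3 again.
ip route show | grep 169.254
ip route get 169.254.169.254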

Thank you for the help, it was NixOS indeed.