No connection inside container when using a user defined bridge

Hey folks,

I’m new to Docker but have been reading and trying a lot over the last few days. First, a few words about the host: it’s a Terramaster F2-221 NAS running its own “TOS”. I can connect via SSH and use docker + docker-compose. TOS ships a Docker web GUI, which is pretty much limited to downloading and running images from Docker Hub.

Containers on the default bridge (“bridge”) run fine, but now I want to use other networks for isolation. In the long run I would like to run a reverse proxy (probably Traefik) and some services like a Nextcloud stack. I’m not sure where to start debugging, so here is some system info first:

$ docker version
Client:
 Version:      17.05.0-ce
 API version:  1.29
 Go version:   go1.9
 Git commit:   89658be
 Built:        Tue Apr  3 17:09:54 CST 2018
 OS/Arch:      linux/amd64

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.9
 Git commit:   89658be
 Built:        Tue Apr  3 17:09:54 CST 2018
 OS/Arch:      linux/amd64
 Experimental: false


$ docker-compose version
docker-compose version 1.27.4, build 40524192
docker-py version: 4.3.1
CPython version: 3.7.7
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019


$ docker info
Containers: 4
 Running: 0
 Paused: 0
 Stopped: 4
Images: 6
Server Version: 17.05.0-ce
Storage Driver: btrfs
 Build Version: Btrfs v4.13.3
 Library Version: 102
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: N/A (expected: 9c2d8d184e5da67c95d601382adf14862e4f2228)
init version: N/A (expected: )
Kernel Version: 4.13.16
Operating System: Tnas 2018.04.9
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.795GiB
Name: nas
ID: BJKM:4KAZ:YVN5:CGJ3:3QKI:ECOJ:5DYV:HHDR:C35N:MUZ6:QUSF:AKAE
Docker Root Dir: /mnt/md0/appdata/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Registry Mirrors:
 https://registry.hub.docker.com/
Live Restore Enabled: false

Now I’m creating a new container on the default bridge:

$ docker run --name debtest_defaultbridge -it debian:buster
root@1e0bafecff04:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1332 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
6: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
7: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1000
    link/tunnel6 :: brd ::
8: ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default qlen 1000
    link/gre6 :: brd ::
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
root@1e0bafecff04:/# ping google.com
PING google.com (172.217.22.110) 56(84) bytes of data.
64 bytes from fra15s18-in-f14.1e100.net (172.217.22.110): icmp_seq=1 ttl=118 time=23.3 ms
64 bytes from fra15s18-in-f14.1e100.net (172.217.22.110): icmp_seq=2 ttl=118 time=21.6 ms
64 bytes from fra15s18-in-f14.1e100.net (172.217.22.110): icmp_seq=3 ttl=118 time=22.9 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 5ms
rtt min/avg/max/mdev = 21.624/22.586/23.262/0.698 ms

This works fine. But creating a new bridge and spinning up a container on it fails:

$ docker network create mybridge
c354ca89e55b65beb30f5aff2c93d3e3a7f5aa772369ca255a1089b81f2a91de
$ docker run --name debtest_mybridge --network mybridge -it debian:buster
root@f585e9e9d2c7:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1332 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
6: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
7: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1000
    link/tunnel6 :: brd ::
8: ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default qlen 1000
    link/gre6 :: brd ::
17: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.19.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
root@f585e9e9d2c7:/# ping google.com
ping: google.com: Temporary failure in name resolution

I’m not sure why traffic from the newly created network isn’t getting outside. From what I’ve read on Google, I’d guess this isn’t normal and the container should connect just fine. Where should I start looking?
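
In the meantime, here is roughly how I’d narrow it down from inside the container (a sketch; 172.19.0.1 is the gateway docker network inspect reports for mybridge, and 8.8.8.8 is just an arbitrary public IP):

root@f585e9e9d2c7:/# ip route                   # expect: default via 172.19.0.1 dev eth0
root@f585e9e9d2c7:/# ping -c 1 172.19.0.1       # the gateway: tests the bridge itself
root@f585e9e9d2c7:/# ping -c 1 8.8.8.8          # an external IP: tests routing/NAT without DNS
root@f585e9e9d2c7:/# getent hosts google.com    # tests name resolution only

If the gateway doesn’t answer, the problem is already on the bridge; if the gateway answers but 8.8.8.8 doesn’t, it points at NAT/forwarding; if only the lookup fails, it’s DNS.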

Some more info about the network:

nas$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 6c:bf:b5:01:5f:56 brd ff:ff:ff:ff:ff:ff
    inet 192.168.178.36/24 brd 192.168.178.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:16b8:626c:2300:6ebf:b5ff:fe01:5f56/64 scope global dynamic mngtmpaddr 
       valid_lft 7126sec preferred_lft 3526sec
    inet6 fe80::6ebf:b5ff:fe01:5f56/64 scope link 
       valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
5: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
6: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
7: ip_vti0@NONE: <NOARP> mtu 1332 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
8: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
9: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1000
    link/tunnel6 :: brd ::
10: ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default qlen 1000
    link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
11: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether 6c:bf:b5:01:5f:57 brd ff:ff:ff:ff:ff:ff
    inet 169.254.253.97/16 brd 169.254.255.255 scope global eth1
       valid_lft forever preferred_lft forever
12: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:a5:9e:60:e4 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:a5ff:fe9e:60e4/64 scope link 
       valid_lft forever preferred_lft forever
13: br-3a3df5631098: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:8f:99:f5:7f brd ff:ff:ff:ff:ff:ff
    inet 169.254.251.146/16 brd 169.254.255.255 scope global br-3a3df5631098
       valid_lft forever preferred_lft forever
    inet6 fe80::42:8fff:fe99:f57f/64 scope link 
       valid_lft forever preferred_lft forever
16: br-c354ca89e55b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:2f:d3:0d:17 brd ff:ff:ff:ff:ff:ff
    inet 169.254.251.206/16 brd 169.254.255.255 scope global br-c354ca89e55b
       valid_lft forever preferred_lft forever
    inet6 fe80::42:2fff:fed3:d17/64 scope link 
       valid_lft forever preferred_lft forever
26: vethe4b1122@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether ca:a0:28:21:c0:30 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::c8a0:28ff:fe21:c030/64 scope link 
       valid_lft forever preferred_lft forever
28: veth4b52ed7@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-c354ca89e55b state UP group default 
    link/ether f6:d1:a2:d8:9b:83 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::f4d1:a2ff:fed8:9b83/64 scope link 
       valid_lft forever preferred_lft forever


$ bridge link
26: vethe4b1122 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2 
28: veth4b52ed7 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br-c354ca89e55b state forwarding priority 32 cost 2


$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
32575a4f6271        bridge              bridge              local
72a26dcb7181        host                host                local
c354ca89e55b        mybridge            bridge              local
0347731fdfb6        none                null                local
3a3df5631098        web                 bridge              local

$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "32575a4f627165d849f8b6b6c3e9d8617996d0dbeebffd4cdf55e699e62de7db",
        "Created": "2021-01-06T17:43:54.608794333+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "Containers": {
            "1e0bafecff047c04efd428e0d11c8379db13cb8813413b0f5ff8b3fbec4cb577": {
                "Name": "debtest_defaultbridge",
                "EndpointID": "55682bea0b92474122c5bd4c290d8d3e0b423167774dfb9c92d3608b5b908a83",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]


$ docker network inspect mybridge
[
    {
        "Name": "mybridge",
        "Id": "c354ca89e55b65beb30f5aff2c93d3e3a7f5aa772369ca255a1089b81f2a91de",
        "Created": "2021-01-06T19:19:27.952113831+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "Containers": {
            "f585e9e9d2c788707c508b0e20ba6a197aea3eda401eeb2356a677eb71e09741": {
                "Name": "debtest_mybridge",
                "EndpointID": "447a002ad82a66d290975677dbd56f0265eb45157fafeba1cb27bb969d4903ca",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

How can I proceed from here? Is the NAS OS playing tricks on me? As far as I can see, iptables is running, but nothing else like ufw that could block outgoing traffic.
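
For reference, these are the host-side checks that should matter here (a sketch, assuming a standard Docker iptables setup and that iptables/sysctl are usable on TOS’s shell; on a healthy host I’d expect a MASQUERADE rule for 172.19.0.0/16 and ACCEPT rules for br-c354ca89e55b in the FORWARD chain):

nas$ sysctl net.ipv4.ip_forward                    # must be 1
nas$ iptables -t nat -S POSTROUTING | grep 172.19  # expect: -s 172.19.0.0/16 ! -o br-c354ca89e55b -j MASQUERADE
nas$ iptables -S FORWARD | grep br-c354ca89e55b    # expect ACCEPT rules for the bridge
nas$ iptables -S DOCKER-ISOLATION                  # the isolation chain on this engine version (17.05)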

Let me restructure my question: when I create a bridge, any container attached to it has no connectivity at all.

WAN/LAN <–> HOST <–default bridge–> debtest_defaultbridge = network (ping + DNS) works fine

WAN/LAN <–> HOST <–mybridge–> debtest_mybridge = no connection, no DNS lookup, etc.

This isn’t intended, is it?

Maybe I misunderstood something, so my precise question is: are containers capable of reaching the outside network at all when running on a non-default bridge? Think of a scenario with the following setup:

WAN/LAN → Host ←default bridge→ C1 (reverse proxy on :80/:443) ←custom bridge→ C2 (Nextcloud). The connection TO Nextcloud works, but Nextcloud itself has absolutely no connection to anything: nslookup fails (against nameserver 127.0.0.11, which, as far as I understand, is Docker’s embedded DNS resolver), and neither sites like apps.nextcloud.com nor the update checker are reachable. This is the actual case I’m trying to solve. :confused:
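
To separate DNS trouble from general egress trouble, something like this inside C2 should tell them apart (a sketch; c2$ stands for a shell inside the Nextcloud container, nslookup is assumed to be available in that image, and 1.1.1.1/9.9.9.9 are just arbitrary public servers; queries to 127.0.0.11 are answered by dockerd itself):

c2$ ping -c 1 1.1.1.1                       # plain egress, no DNS involved
c2$ nslookup apps.nextcloud.com 127.0.0.11  # the embedded resolver
c2$ nslookup apps.nextcloud.com 9.9.9.9     # an external resolver, goes out over the bridge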

If Docker weren’t in between (say, with plain Linux network tools + LXC), I would suspect something like a missing route or a misplaced iptables rule, but I have no clue whether I’m on the wrong track here.

I’ve found a solution to this. I’m not sure whether I’m breaking any of Docker’s internals, but it works the way I expect. Here’s a post describing what I needed to do: https://blog.mukmuk.eu/2021/docker-0-me-1/
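
The short version, for anyone who doesn’t want to click through: the detail that stands out in the ip a output above is that br-c354ca89e55b carries a link-local address (169.254.251.206/16) instead of the 172.19.0.1 gateway that docker network inspect reports, so the containers’ default route points at an address nothing owns. Presumably something in TOS hands out those 169.254.x.x addresses (eth1 and the other bridge got them too). Assuming that mismatch is the culprit, a minimal, non-persistent test is to put the expected gateway address back on the bridge by hand:

nas$ ip addr del 169.254.251.206/16 dev br-c354ca89e55b   # drop the link-local address TOS assigned
nas$ ip addr add 172.19.0.1/16 dev br-c354ca89e55b        # restore the gateway Docker expects

This won’t survive TOS reassigning the address; see the linked post for what I actually did to make it stick.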