Accessing an external SMTP server from a Docker container

Hi!

I’m building a web app using Docker Compose with several containers, all on the same custom network (a bridge, I guess). Everything works fine, except that I cannot access the SMTP server provided by my company from my backend container (which runs Django).

From my backend container I can ping the outside world (e.g. google.com or apple.com), but I cannot ping the SMTP server. I can ping it from the host, and if I use the host network in my backend container, it works too.

Any idea what could be wrong in my setup? As the SMTP server does not require any credentials (login/password), I’m wondering if there is IP filtering that only allows access from the VM they provided to me (the host is a VM running Ubuntu Server).

Or could it be something related to iptables?

If you have any ideas of things to check, I would really appreciate them!

Have a nice day,
Remy

Can you share the output of docker inspect for the network to make sure that it is a bridge network?

Bridge networks are private subnets, known only inside the Docker host. Outgoing traffic is NATed, so it will be seen as if it originates from one of the NICs of the VM. Docker does not filter any outgoing traffic from a container to a remote destination.
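
If you want to verify that on the Docker host, a rough check (the bridge name and subnet will differ on your system) is to look at the NAT rules Docker creates:

sudo iptables -t nat -S POSTROUTING | grep MASQUERADE

You would typically see a rule along the lines of -A POSTROUTING -s 172.18.0.0/16 ! -o br-xxxxxxxxxxxx -j MASQUERADE for each bridge network subnet.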

When you say you can ping from the host, you mean from the VM used as the Docker host, right?

Hi meyay!

So the output of docker inspect <MY_BACKEND_CONTAINER> (networking part) is:

"NetworkSettings": {
            "Bridge": "",
            "SandboxID": "d4cfa57c0312b2de6838c11f1accba0298a45158481b4afcd3e3cb4cf643adc9",
            "SandboxKey": "/var/run/docker/netns/d4cfa57c0312",
            "Ports": {
                "8000/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "8000"
                    },
                    {
                        "HostIp": "::",
                        "HostPort": "8000"
                    }
                ]
            },
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "mosaic-network-staging": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": [
                        "mosaic-backend-staging",
                        "backend"
                    ],
                    "MacAddress": "76:f5:e6:f9:4b:c4",
                    "DriverOpts": null,
                    "GwPriority": 0,
                    "NetworkID": "ad8773224696d13baa1c5c3f8b24e35b680edbdc4081b073e2710a0aede392bf",
                    "EndpointID": "8a372305f2547b0f7d73d9c7df54b98138ed6d85ad35931d3531c34b5e83988b",
                    "Gateway": "172.18.0.1",
                    "IPAddress": "172.18.0.14",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "DNSNames": [
                        "mosaic-backend-staging",
                        "backend",
                        "5569d8d4a825"
                    ]
                }
            }
        }

Yes, I can ping from the VM where I execute docker compose up -d (so it is the Docker host).

In fact, if I ping from the VM provided by my company it works, but if I ping from my laptop (even when connected to my company’s VPN) it fails.

That’s why I thought about some filtering on the SMTP server side, but if my backend Docker container is seen as the VM, it should work, I guess.

Actually, docker inspect <network name> is what I was looking for. The container configuration does not tell us the network details.

Like I said, if it’s a bridge network, the traffic is NATed. More precisely, it is source NATed (SNAT): the source IP of the container is replaced with one of the host interface IPs when the traffic leaves the host. This is true for IPv4 traffic.

Did you try to exec a ping -6 <smtp server name or ip> and a ping -4 <smtp server name or ip>?
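
For example, something along these lines from the Docker host (the container name and SMTP host are placeholders, and the ping binary must exist in the image):

docker exec <MY_BACKEND_CONTAINER> ping -4 -c 3 <smtp server name or ip>
docker exec <MY_BACKEND_CONTAINER> ping -6 -c 3 <smtp server name or ip>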

Ah ok, here is the docker inspect mosaic-network-staging output:

[
    {
        "Name": "mosaic-network-staging",
        "Id": "ad8773224696d13baa1c5c3f8b24e35b680edbdc4081b073e2710a0aede392bf",
        "Created": "2025-10-22T16:13:36.454096202+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv4": true,
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {...}
"Options": {},
        "Labels": {
            "com.docker.compose.config-hash": "98ce5913b74a5e98294da844f9271fa24bb4ffebf5e40b9c480267f5d97142f8",
            "com.docker.compose.network": "mosaic-network-staging",
            "com.docker.compose.project": "mosaic-staging",
            "com.docker.compose.version": "2.40.1"
        }
    }
]

I actually just used the plain ping <SMTP_SERVER> command. It might be an IPv6 issue, as the Docker network has EnableIPv6 set to false.

I tried ping -4 <SMTP_SERVER> from the host and it fails, while ping -6 <SMTP_SERVER> works fine.

That’s surprising. Honestly, I would have expected IPv4 traffic to work and IPv6 to fail.
Can you share the output of ip address show from inside the container?

I’ve enabled IPv6 on my custom network, but here is the output of ip address show:

❯ ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 02:00:82:68:xxxx:xxxx brd ff:ff:ff:ff:ff:ff
    altname enp1s0
    inet 130.xxx.xxx.xxx/24 brd 130.xxx.xxx.xxx scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:6a8:xxxx:xxxx:0:82ff:xxxx:xxxx/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::82ff:xxxx:xxxx/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 6e:09:0e:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::6c09:eff:xxxx:xxxx/64 scope link
       valid_lft forever preferred_lft forever
1050: vethece4991@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ca58d1a2bc51 state UP group default
    link/ether 3e:66:44:2f:18:cb brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::3c66:44ff:fe2f:18cb/64 scope link
       valid_lft forever preferred_lft forever
1051: vethe89b6f7@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ca58d1a2bc51 state UP group default
    link/ether c2:6c:68:4d:a3:62 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::c06c:68ff:fe4d:a362/64 scope link
       valid_lft forever preferred_lft forever
1052: vethb7bbc79@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ca58d1a2bc51 state UP group default
    link/ether 0a:50:18:3b:19:1b brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::850:18ff:fe3b:191b/64 scope link
       valid_lft forever preferred_lft forever
1053: veth40a351f@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ca58d1a2bc51 state UP group default
    link/ether 26:5a:d3:1e:16:ec brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::245a:d3ff:fe1e:16ec/64 scope link
       valid_lft forever preferred_lft forever
1054: veth65a6c71@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ca58d1a2bc51 state UP group default
    link/ether 56:99:d7:46:13:21 brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::5499:d7ff:fe46:1321/64 scope link
       valid_lft forever preferred_lft forever
1055: veth2be7278@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ca58d1a2bc51 state UP group default
    link/ether 82:a7:a7:e6:57:f4 brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::80a7:a7ff:fee6:57f4/64 scope link
       valid_lft forever preferred_lft forever
1056: veth0182a7c@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ca58d1a2bc51 state UP group default
    link/ether da:26:2f:da:26:29 brd ff:ff:ff:ff:ff:ff link-netnsid 6
    inet6 fe80::d826:2fff:feda:2629/64 scope link
       valid_lft forever preferred_lft forever
1057: veth3941d30@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ca58d1a2bc51 state UP group default
    link/ether d6:e7:ec:d9:f8:4d brd ff:ff:ff:ff:ff:ff link-netnsid 7
    inet6 fe80::d4e7:ecff:fed9:f84d/64 scope link
       valid_lft forever preferred_lft forever
1058: veth5d17b31@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ca58d1a2bc51 state UP group default
    link/ether 8a:a4:aa:7a:94:15 brd ff:ff:ff:ff:ff:ff link-netnsid 8
    inet6 fe80::88a4:aaff:fe7a:9415/64 scope link
       valid_lft forever preferred_lft forever
1059: vetha54f583@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ca58d1a2bc51 state UP group default
    link/ether 4a:be:31:a5:62:78 brd ff:ff:ff:ff:ff:ff link-netnsid 9
    inet6 fe80::48be:31ff:fea5:6278/64 scope link
       valid_lft forever preferred_lft forever
1060: vethfc67d1d@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ca58d1a2bc51 state UP group default
    link/ether 36:65:71:aa:4b:5e brd ff:ff:ff:ff:ff:ff link-netnsid 10
    inet6 fe80::3465:71ff:feaa:4b5e/64 scope link
       valid_lft forever preferred_lft forever
1062: vethe6cad6d@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ca58d1a2bc51 state UP group default
    link/ether 82:84:42:3a:74:06 brd ff:ff:ff:ff:ff:ff link-netnsid 12
    inet6 fe80::8084:42ff:fe3a:7406/64 scope link
       valid_lft forever preferred_lft forever
1064: vethc2ee956@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ca58d1a2bc51 state UP group default
    link/ether 52:de:b9:88:23:b1 brd ff:ff:ff:ff:ff:ff link-netnsid 14
    inet6 fe80::50de:b9ff:fe88:23b1/64 scope link
       valid_lft forever preferred_lft forever
963: br-ca58d1a2bc51: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 66:cf:6f:7c:6c:12 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-ca58d1a2bc51
       valid_lft forever preferred_lft forever
    inet6 fd9d:7df4:cbf6::1/64 scope global nodad
       valid_lft forever preferred_lft forever
    inet6 fe80::64cf:6fff:fe7c:6c12/64 scope link
       valid_lft forever preferred_lft forever

This must be the output from the host. I took the liberty of anonymizing the information on your eth0 and docker0 interfaces, as they contained public IPv4 and IPv6 addresses. I left the veth interfaces untouched.

I am curious about the output of this:

docker exec <MY_BACKEND_CONTAINER> ip address show

Oops, sorry … Thanks for anonymizing :+1:

Here is the output:

❯ docker exec d561f36505a4 ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host proto kernel_lo
       valid_lft forever preferred_lft forever
2: eth0@if1068: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 3a:a4:d6:fd:0a:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.18.0.13/16 brd 172.18.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fd9d:7df4:cbf6::d/64 scope global nodad
       valid_lft forever preferred_lft forever
    inet6 fe80::38a4:d6ff:fefd:af9/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever

Most likely your IPv6 ULA subnet is accepted by your SMTP server.

You might want to ask the operator of the SMTP server to add your VM’s IPv4 address to the list of accepted hosts as well. As I said, IPv4 traffic will be source NATed, so it will appear to originate from your VM’s IPv4 address.
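
While you wait for them, it might also be worth testing the actual SMTP port rather than only ICMP, since a successful ping says nothing about port 25 being reachable. A rough sketch, assuming Python is available in your backend image (the hostname is a placeholder):

docker exec <MY_BACKEND_CONTAINER> python -c "import socket; socket.create_connection(('<smtp server>', 25), timeout=5); print('port 25 reachable')"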

Ok, I sent a message to the IT service center :+1:

But when I tried to use the host network instead of the bridge one, it worked fine. And when the full application was deployed manually without Docker, it worked fine too. So I guess there is something wrong in the “Docker layer”. What do you think about that?

First of all, I might be mistaken regarding the IPv6 ULA. I don’t see the ULA prefix fd9d:7df4:cbf6:: on your eth0 interface, so it could be that IPv6 is NATed as well. If that’s the case, my mental model needs an update :slight_smile:
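
One rough way to check that on the Docker host (assuming ip6tables is in use; the exact rules will differ on your system) is to look for an IPv6 masquerading rule:

sudo ip6tables -t nat -S POSTROUTING | grep -i masquerade

If a MASQUERADE rule exists for the fd9d:7df4:cbf6::/64 subnet, the ULA traffic is indeed NATed on its way out.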

Can you try ping -4 and ping -6 on the host as well?
And can you try an nslookup <smtp server> on the host and inside the container (with docker exec, like we did for ip address show earlier)?

That should allow us to see whether the host succeeds with IPv4, and which IPs are resolved for the SMTP server’s DNS name.
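
Something like this (placeholders as before; nslookup may need to be installed in the container image, getent hosts is a common fallback):

# on the host
ping -4 -c 3 <smtp server>
ping -6 -c 3 <smtp server>
nslookup <smtp server>

# inside the container
docker exec <MY_BACKEND_CONTAINER> nslookup <smtp server>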

My first thought was that maybe the OS inside the container prefers the IPv4 stack while the host prefers the IPv6 stack, but I doubt that is the reason, because it works when --network host is used. So far I have no explanation as to why it doesn’t work with a bridge network.
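
If you want to rule out the address-family preference idea, a rough check (assuming getent is available both on the host and in the image) is to compare the order in which the addresses come back:

getent ahosts <smtp server>
docker exec <MY_BACKEND_CONTAINER> getent ahosts <smtp server>

The address family listed first is usually the one getaddrinfo would try first.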

So, from the host/the VM, ping -4 fails while ping -6 works.

nslookup gives me the same output in both places (the IPv4/IPv6 addresses of the SMTP server).

So IPv6 must be NATed as well with recent Docker versions when a bridge network is used.

I would recommend asking the IT service center to whitelist your host’s IPv4 address for the SMTP server as well. Though what remains a mystery is why the container prefers the IPv6 address when --network host is used, but not when a bridge network is used.

OK, thank you so much for your help! I will check with the IT service on Monday (Europe timezone) :+1:

I’ll keep you informed about the solution :wink:

You’re going to laugh, but I was just using the wrong SMTP port … 1025 instead of 25 :man_facepalming: :face_with_peeking_eye:
Probably because in dev mode I use Mailpit, and its default port is 1025.
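
For anyone else hitting this: a quick sanity check of the port would have caught it right away, for example (assuming nc is available in the image, the hostname is a placeholder):

docker exec <MY_BACKEND_CONTAINER> nc -vz <smtp server> 25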

Sorry for having bothered you with this issue. At least I learnt a few things about Docker along the way.

Best regards!

Don’t worry, those things happen sometimes :rofl:

Have a great day!

I tested it:

  • downloaded traefik’s whoami release from https://github.com/traefik/whoami/releases, extracted it, and ran it on a test host.
  • created an IPv6-enabled bridge network: docker network create --ipv6 ipv6_test
  • being lazy, I ran wget against the hostname of the test host to get the response from whoami, which shows all IPs of the test host running whoami: docker run -it --rm --network ipv6_test alpine wget -q -O - http://<test hostname or ip>
  • tested the IPv6 ULA: docker run -it --rm --network ipv6_test alpine wget -q -O - http://[test host ipv6 ULA]
    • RemoteAddr shows one of the Docker host’s IPv6 ULAs → NATed!
  • tested the IPv6 global address (GA): docker run -it --rm --network ipv6_test alpine wget -q -O - http://[test host ipv6 GA]
    • RemoteAddr shows one of the Docker host’s IPv6 GAs → NATed!

The response from the whoami service shows the network IPs of the host that runs the service, and the remote IP address of the client that called it.

It was indeed time to update my mental model, as the default gateway mode for outgoing IPv6 traffic has been NAT since Docker v27.

It might be noteworthy that the gateway mode can be set to routed if the network is created like this: docker network create --ipv6 -o com.docker.network.bridge.gateway_mode_ipv6=routed ipv6_test
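
To double-check which mode a network uses, the option shows up in the network’s inspect output, e.g. (using the test network from above):

docker network inspect ipv6_test --format '{{json .Options}}'

For the routed variant you should see the com.docker.network.bridge.gateway_mode_ipv6 key in there; for the default NAT mode it is absent.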

Hehe, thanks for this analysis :+1: