DNS response fails partway

Hi all.

I currently have a host running a single container: an instance of Pi-hole, which is my LAN DNS server.
I am now attempting to add another container (Nextcloud), but DNS does not work properly.
I can see the DNS request in my Pi-hole log, but the response does not seem to make it back to the container.

The setup
Docker version: 20.10.24+dfsg1

Host: 192.168.1.12 (network-wide DNS server; this is also set as the DNS server for all containers)
The Pi-hole container is on the default bridge network.

The bridge network:

[
    {
        "Name": "bridge",
        "Id": "96a0cda68a7eded64abdd1a6bbeffa50ac91571e758ee9d2da326b0c5529fa25",
        "Created": "2025-10-15T22:36:39.908029474+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "5cb9465028518e58f715275dffbeb534620bd9c96f115eabe88a4a755f6b96b8": {
                "Name": "pihole",
                "EndpointID": "940ba86c6fff2b7088dc8b2e7680268b877e834046fc4a1b0c2d1538c51d9d58",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

The request shows up in the Pi-hole dashboard, but note that the logged client IP is the default gateway of the bridge network (172.17.0.1), not the Nextcloud container's address.

If I force the Nextcloud container's DNS server to be Pi-hole's IP on the bridge network (172.17.0.2), DNS resolution works. I don't want to do this, though, since Docker does not guarantee that Pi-hole will get the same IP if I use my Docker Compose file on another host.
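For reference, that non-portable workaround looks roughly like this in Compose (a sketch; the service and image names are illustrative):

```yaml
# Pinning Nextcloud's resolver to Pi-hole's current bridge address.
# Fragile: Docker may assign Pi-hole a different IP on another host.
services:
  nextcloud:
    image: nextcloud
    dns:
      - 172.17.0.2   # Pi-hole's IP on the default bridge (not guaranteed)
```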

When I create the Nextcloud container, I can see it appear on the bridge network too.

I think the route should be something like this:
nextcloud container → 192.168.1.12 (IP of the host) → PiHole container → ???

Any idea why the response from DNS doesn’t make it back into the nextcloud container?

You’re running into a hairpin/NAT-reflection edge case: Nextcloud → host IP:53 → Pi-hole container → (reply). That return path often breaks for UDP when the service runs on the same host, which is why pointing directly at 172.17.0.2 works.

Fixes:

  1. Make sure Docker’s userland proxy is enabled (it is on by default, but is sometimes disabled; it helps container → host IP → container flows, especially UDP):
    /etc/docker/daemon.json → { "userland-proxy": true }, restart Docker, and ensure Pi-hole publishes both 53/udp and 53/tcp.
  2. Avoid hairpin entirely by giving Pi-hole a stable address:
    • user-defined bridge with static IP (reference via .env), or
    • macvlan with a real LAN IP (e.g., 192.168.1.50), or
    • host network for Pi-hole (Linux), so the host truly answers DNS.
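The first variant of option 2 can be sketched in Compose like this (the subnet and addresses are placeholders; pick ones that are free on your hosts):

```yaml
# User-defined bridge with a fixed subnet: Pi-hole always gets the same
# address, so the dns: setting stays valid on any host using this file.
services:
  pihole:
    image: pihole/pihole
    networks:
      dns_net:
        ipv4_address: 172.28.0.53   # static, because we control the subnet
  nextcloud:
    image: nextcloud
    dns:
      - 172.28.0.53                 # Pi-hole's fixed address
    networks:
      - dns_net

networks:
  dns_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.0.0/24
```

On a user-defined network, containers use Docker’s embedded DNS (127.0.0.11), which forwards upstream to the address given in `dns:`, so no hairpin through the host is involved.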

Quick test from Nextcloud:
From the Nextcloud container, run dig @192.168.1.12 docker.com, then the same query with +tcp → if the TCP query works and the UDP one fails, it’s the hairpin path.

Background refs on userland-proxy vs hairpin: Docker docs/issues and community notes.


Thank you for the insights. I used network_mode: host and everything works now.
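For anyone finding this later, the working setup looks roughly like this (a sketch; the TZ value is just an example):

```yaml
# Pi-hole on the host network: port 53 is bound directly on the host,
# so other containers can reach it at the host's LAN IP (192.168.1.12)
# with no hairpin NAT involved.
services:
  pihole:
    image: pihole/pihole
    network_mode: host       # incompatible with the ports: option
    environment:
      TZ: Europe/London      # example value
```

Note that with network_mode: host there is no port mapping at all; Pi-hole must be free to bind its ports on the host itself (on some distros, systemd-resolved’s stub listener also uses port 53).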