Using container as DNS server for another compose app?

Hi,

I have Docker Compose + Dockge installed in a Debian LXC container running on TrueNAS. I’m running several home services, such as tt-rss and Nextcloud, with Caddy providing reverse proxy support. To resolve the reverse-proxied URLs inside the LAN, I also installed Pi-hole as another compose.yml container, configured it to resolve those URLs to the local IP, and set my router to hand out the docker-host IP (with port 53 forwarded to Pi-hole) as the DNS server. This works great! Except that the docker-host IP propagates from TrueNAS into the docker host’s resolv.conf, and from there into Docker, where it breaks all DNS resolution for containers. I suspect this is something akin to NAT loopback, where the connection isn’t allowed to go down to the docker host’s IP and back up to the container.

To finish the nextcloud-aio install, I do need the Nextcloud URL to resolve locally from within the Nextcloud containers. If I give Pi-hole a static IP on its docker network, add that network to the other compose file, and manually add “dns: pi.hole.ip.addr”, then that container is able to resolve DNS again! But nextcloud-aio creates sub-containers, and I don’t see a way to do this for them. Those sub-containers (such as nextcloud-aio-nextcloud) are unable to resolve DNS queries.

I believe my lack of knowledge around docker networks is making this way harder than it needs to be. If you know of the right way to set this up in docker compose please let me know!

caddy compose:

services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:2.9.1-alpine
    ports:
      - 80:80
      - 443:443
    environment:
      - CADDY_INGRESS_NETWORKS=caddy
      - CADDY_DOCKER_CADDYFILE_PATH=/etc/caddy/Caddyfile
    networks:
      caddy:
        priority: 100
      nextcloud-aio:
        priority: 10
    dns: 172.28.0.4
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - caddy_data:/data
      - /mnt/data/caddy/caddyfile/Caddyfile:/etc/caddy/Caddyfile
    restart: unless-stopped
networks:
  caddy:
    external: true
  nextcloud-aio:
    external: true
volumes:
  caddy_data: {}

pihole compose:

# More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    ports:
      - 53:53/tcp
      - 53:53/udp
      - 8060:80/tcp
    environment:
      TZ: America/Chicago
      WEBPASSWORD: mypassword
    # Volumes store your data between container upgrades
    volumes:
      - /mnt/data/pi-hole/etc:/etc/pihole
      - /mnt/data/pi-hole/dnsmasq-d:/etc/dnsmasq.d
    networks:
      caddy:
        ipv4_address: 172.28.0.4
      default: null
    labels:
      caddy: domain.name
      caddy.reverse_proxy: "{{upstreams 80}}"
      caddy.redir: / /admin/
    restart: unless-stopped
networks:
  caddy:
    external: true

nextcloud compose:

services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    init: true
    restart: unless-stopped
    container_name: nextcloud-aio-mastercontainer # This line is not allowed to be changed as otherwise AIO will not work correctly
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config # This line is not allowed to be changed as otherwise the built-in backup solution will not work
      - /var/run/docker.sock:/var/run/docker.sock:ro # May be changed on macOS, Windows or docker rootless. See the applicable documentation. If adjusting, don't forget to also set 'WATCHTOWER_DOCKER_SOCKET_PATH'!
    #network_mode: bridge # add to the same network as docker run would do
    dns: 172.28.0.4
    # The three lines below were to freeze the container while running to debug it.
    #command: -F anything
    #entrypoint: /usr/bin/tail
    #tty: true
    networks:
      - caddy
      - nextcloud-aio
    ports:
      #- 8070:80 # Can be removed when running behind a web server or reverse proxy (like Apache, Nginx, Caddy, Cloudflare Tunnel and else). See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
      - 8088:8080
      #- 8443:8443 # Can be removed when running behind a web server or reverse proxy (like Apache, Nginx, Caddy, Cloudflare Tunnel and else). See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
    labels:
      caddy: cloud.domain.name
      caddy.reverse_proxy: nextcloud-aio-apache:11000
      #caddy.reverse_proxy: nextcloud-aio-domaincheck:11000 # Used for initial setup
    environment:
      APACHE_PORT: 11000 # Is needed when running behind a web server or reverse proxy (like Apache, Nginx, Caddy, Cloudflare Tunnel and else). See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
      APACHE_IP_BINDING: 0.0.0.0 # Should be set when running behind a web server or reverse proxy (like Apache, Nginx, Caddy, Cloudflare Tunnel and else) that is running on the same host. See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
      NEXTCLOUD_DATADIR: /mnt/data/nextcloud/data # Allows to set the host directory for Nextcloud's datadir. ⚠️⚠️⚠️ Warning: do not set or adjust this value after the initial Nextcloud installation is done! See https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir
      NEXTCLOUD_UPLOAD_LIMIT: 10G # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-upload-limit-for-nextcloud

volumes:
  # If you want to store the data on a different drive, see https://github.com/nextcloud/all-in-one#how-to-store-the-filesinstallation-on-a-separate-drive
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer # This line is not allowed to be changed as otherwise the built-in backup solution will not work
networks:
  caddy:
    external: true
  nextcloud-aio:
    external: true

TrueNAS SCALE 24.04
Docker version 27.3.1, build ce12230
Docker Compose version v2.29.7

You have multiple options:

  • either add the pihole network as an external network to the caddy compose file, attach the caddy container to it, and continue to use the Pi-hole container IP.
  • since pihole publishes its ports, you can use a docker host IP (any en*, eth*, or even the docker0 interface should be fine) as `dns:` in the caddy compose file.
  • configure a docker host IP (any en*, eth*, or even the docker0 interface should be fine) as the first nameserver in /etc/resolv.conf, so that Docker will use it as the upstream server for the built-in DNS server of user-defined networks (this does not require setting `dns:` on the containers).

Personally, I would opt for the last option.
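A minimal sketch of that last option, assuming the usual docker0 default address of 172.17.0.1 (verify yours with `ip -4 addr show docker0`):

```text
# /etc/resolv.conf on the docker host (sketch)
# Pi-hole publishes 53/tcp+udp on the host, so any host interface IP reaches it.
nameserver 172.17.0.1
```

Docker copies the host’s nameservers as upstream servers (“ExtServers”) for the embedded 127.0.0.11 resolver, so containers on user-defined networks pick this up without per-service `dns:` entries.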


Thank you for the advice, mayay.

I believe the last option is close to the original setup I had that wasn’t working. The docker host LXC has a network bridge and was given its own IP on the LAN:
TrueNAS IP: 192.168.13.221 on br0
Docker Host IP: 192.168.13.78 on host0 (network-bridge=br0 in systemd-nspawn)

DNS queries go to 192.168.13.78:53, which is passed through to Pi-hole. The router is handing out that full .78 address instead of its own 192.168.13.1.

On the TrueNAS machine:

root@bns-citadel:~# ip a
...
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether da:46:79:e3:e6:1f brd ff:ff:ff:ff:ff:ff
    inet 192.168.13.221/24 brd 192.168.13.255 scope global dynamic br0
       valid_lft 56313sec preferred_lft 56313sec
...
root@bns-citadel:~# cat /etc/resolv.conf
nameserver 192.168.13.78
root@bns-citadel:~# nslookup video.berocs.com
Server:         192.168.13.78
Address:        192.168.13.78#53

video.berocs.com        canonical name = berocs.com.
Name:   berocs.com
Address: 192.168.13.78

root@bns-citadel:~# nslookup video.berocs.com 8.8.8.8
Server:         8.8.8.8
Address:        8.8.8.8#53

Non-authoritative answer:
video.berocs.com        canonical name = berocs.com.
Name:   berocs.com
Address: 136.53.157.170

And on the docker LXC:

root@docker:~# ip a
...
2: host0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 62:a7:76:b1:e1:ef brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.13.78/24 metric 1024 brd 192.168.13.255 scope global dynamic host0
       valid_lft 69420sec preferred_lft 69420sec
...
root@docker:~# nslookup video.berocs.com
Server:         192.168.13.78
Address:        192.168.13.78#53

video.berocs.com        canonical name = berocs.com.
Name:   berocs.com
Address: 192.168.13.78

root@docker:~# nslookup video.berocs.com 8.8.8.8
Server:         8.8.8.8
Address:        8.8.8.8#53

Non-authoritative answer:
video.berocs.com        canonical name = berocs.com.
Name:   berocs.com
Address: 136.53.157.170

But from inside the caddy container (with that dns line removed from the compose file):

root@docker:/opt/stacks/caddy# docker exec -it caddy-caddy-1 sh
/ # cat /etc/resolv.conf 
# Generated by Docker Engine.
# This file can be edited; Docker Engine will not make further changes once it
# has been modified.

nameserver 127.0.0.11
options ndots:0

# Based on host file: '/etc/resolv.conf' (internal resolver)
# ExtServers: [192.168.13.78]
# Overrides: []
# Option ndots from: internal
/ # nslookup video.berocs.com
Server:         127.0.0.11
Address:        127.0.0.11:53

;; connection timed out; no servers could be reached

/ # 

There is a log entry under the docker systemd service:

Oct 04 12:55:56 docker dockerd[55]: time="2024-10-04T12:55:56.552193516-05:00" level=error msg="[resolver] failed to query external DNS server" client-addr="udp:172.28.0.3:32905" dns-server="udp:192.168.13.78:53" error="read udp 172.28.0.3:32905->192.168.13.78:53: i/o timeout" question=";video.berocs.com.\tIN\t AAAA" spanID=9c05c1704df3f52e traceID=51d0573ba7863fac8335721110a419d4

Setting the DNS server in the docker host to 172.17.0.1 to match docker0 gives the same behavior in the container.

Oct 04 13:02:01 docker dockerd[74]: time="2024-10-04T13:02:01.315574341-05:00" level=error msg="[resolver] failed to query external DNS server" client-addr="udp:172.28.0.6:59098" dns-server="udp:172.17.0.1:53" error="read udp 172.28.0.6:59098->172.17.0.1:53: i/o timeout" question=";video.berocs.com.\tIN\t A" spanID=68af5ac18ce409f6 traceID=6171d07acecf12bee9c46a051c110903

I’m pretty sure I’m missing something trivial. Would appreciate any further troubleshooting advice.
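To narrow down where the timeout happens, here is a small stdlib-only probe that sends one DNS A query straight at a given server, i.e. the same path the embedded resolver’s “i/o timeout” log line describes. The server and hostname are the ones from this thread; adjust them for your network. This is a diagnostic sketch, not a replacement for a real resolver:

```python
import socket
import struct

def build_query(name: str, qtype: int = 1) -> bytes:
    """Build a bare-bones DNS query packet: 12-byte header plus one question."""
    # ID=0x1234, flags=0x0100 (standard query, recursion desired), QDCOUNT=1
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME as length-prefixed labels, terminated by a zero byte
    question = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    question += struct.pack(">HH", qtype, 1)  # QTYPE (A=1), QCLASS=IN
    return header + question

def probe(server: str, name: str, timeout: float = 3.0) -> bool:
    """Return True if the server answered at all within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_query(name), (server, 53))
        try:
            s.recvfrom(512)
            return True
        except socket.timeout:
            return False

# e.g. probe("192.168.13.78", "video.berocs.com") from the docker host,
# then again from inside a container on the caddy network
# (docker run --rm --network caddy python:alpine ...). If the host succeeds
# but the container times out, the bridge-to-host hairpin is being dropped.
```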

That’s the IP of the built-in DNS server in user-defined bridge networks. I can’t tell you why it is not able to reach the upstream DNS server. It should have worked.

I do remember that we had problems in the past with Docker in unprivileged LXC containers on Proxmox. But I don’t recall the details.

Btw, TrueNAS SCALE 24.10 RC1 has been available for a couple of days now, with Docker as the container runtime instead of k3s.