No longer able to access local IPs in rootless Docker after update

I have a home server set up with a number of services, most of which are running in rootless Docker containers. I wanted some of these services to be accessible on the local network with custom subdomains and HTTPS, so I have Caddy set up as my web server in another container, networked directly to the relevant services. Each subdomain is pointed to a container and port via reverse proxy, like so:

subdomain.example.com {
    reverse_proxy service:8080
}

AdGuard Home is running directly on the host as the DNS server for the local network. Each subdomain is configured in AGH with a DNS rewrite that points to the local IP of the host, so when the subdomain is looked up on the local network, AGH answers with the host’s IP, the request lands on the local Caddy install, and the subdomain works as expected.
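For reference, each rewrite is just a domain-to-IP mapping. In AdGuardHome.yaml the entries end up looking roughly like this (the parent key has moved between AGH versions, so it’s safer to add them through Filters → DNS rewrites in the UI than to copy this verbatim):

rewrites:
  - domain: subdomain.example.com
    answer: 192.168.50.123
  - domain: adguard.example.com
    answer: 192.168.50.123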

I recently installed a set of Docker updates on my system, and with no other changes to my configuration, the subdomain for AdGuard Home has stopped working. Because AGH is running directly on the host, it’s the only subdomain that’s configured with a local IP instead of a container name:

adguard.example.com {
    reverse_proxy 192.168.50.123:8080
}

This worked without a hitch before the updates, but since installing them two days ago, attempts to access this subdomain time out with no response. The configuration hasn’t changed, and I can still access AdGuard Home directly via its IP and port. My server dashboard (running in a Docker container) also shows API errors on widgets that try to connect to subdomains on the local network. All of the subdomains themselves still work; it’s just the containers that can’t connect to them anymore.
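In case it’s useful, the quickest way to see the difference is to hit the IP and port from the host and then from inside the Caddy container (the container name below is just whatever yours is called):

# From the host this responds as expected:
$ wget -qO- -T 5 http://192.168.50.123:8080

# From inside the Caddy container it now times out:
$ docker exec caddy wget -qO- -T 5 http://192.168.50.123:8080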

Things I’ve tried so far to fix the issue:

  • Changing the port driver to either slirp4netns or implicit (with the pasta network driver). Breaks every single local subdomain, including the ones that are currently working. AdGuard Home shows the DNS rewrites taking place, but the traffic never reaches Caddy, so it’s not clear to me where the breakage is happening.
  • Enabling host loopback via the DOCKERD_ROOTLESS_ROOTLESSKIT_DISABLE_HOST_LOOPBACK environment variable, then replacing the local network IP in the subdomain configuration with 10.0.2.2. The IP responds to pings from within the container, but connections from wget and Caddy are refused.
  • Adding a mapping with extra_hosts to the Docker Compose file and accessing the host at host.docker.internal. Same result as previous.
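The extra_hosts attempt was the usual host-gateway mapping in the Compose file, roughly like this, on whichever service needs to reach the host:

services:
  caddy:
    # ...
    extra_hosts:
      - "host.docker.internal:host-gateway"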

There was also no change when I added Docker’s bridge network IP range to the firewall allow list or disabled the firewall entirely.
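For anyone wanting to try the same on their own machine, the kind of allow rule I mean looks like this with ufw (substitute your own firewall tooling and Docker’s actual bridge subnet):

$ sudo ufw allow from 172.17.0.0/16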

Any ideas as to what might be happening here and how to fix it? This is not the first time that a Docker update has completely broken a working setup, but it’s the first time that I’ve been stuck on a fix for this long. :frowning_face:

Thanks in advance to anyone who can help!

Edit: Since a Docker update was what introduced this whole problem, it occurred to me that some version info might be helpful… :person_facepalming:

Running Debian Bookworm. The following packages were updated on the 7th: docker-buildx-plugin, docker-ce, docker-ce-cli, docker-ce-rootless-extras, docker-compose-plugin

$ docker --version
Docker version 26.1.4, build 5650f9b

$ docker compose version
Docker Compose version v2.27.1

I managed to get host loopback working by placing the following file at ~/.config/systemd/user/docker.service.d/override.conf and restarting the Docker service:

[Service]
Environment="DOCKERD_ROOTLESS_ROOTLESSKIT_DISABLE_HOST_LOOPBACK=false"
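For anyone following along, “restarting the Docker service” in rootless mode means the user-level systemd commands:

$ systemctl --user daemon-reload
$ systemctl --user restart docker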

This allowed me to swap the 192.168.* IP for 10.0.2.2 in my Caddy config, so the AdGuard Home subdomain is working again. This, of course, didn’t fix the server dashboard’s failed API calls, since the dashboard container still can’t reach the local subdomains (they resolve to the host’s 192.168.* IP, which containers can no longer connect to). Instead, I networked the dashboard application directly with the applications it needs API access to, then swapped the subdomains in its widget config for container names. So those calls are working, too.
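The change was essentially just putting the dashboard on the same Compose network as each app, roughly like this (service names, network name, and port are placeholders):

services:
  dashboard:
    networks:
      - apps
  service:
    networks:
      - apps

networks:
  apps:

Each widget URL then changes from https://subdomain.example.com to http://service:8080.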

The big holdout here is the OctoPrint widget on my dashboard. OctoPrint is running on a Raspberry Pi elsewhere on my network, which wasn’t a problem back when containers could still reach local IPs and subdomains. I don’t really see any way to make this machine accessible without either making the 192.168.* IP range on my LAN visible to Docker (ostensibly impossible as of these updates) or jumping through some hoops. Do I need to add a WireGuard container to the stack and tunnel through just to reach another machine on my LAN? I’d love to know if there’s a better way, and I’d really like to understand what kind of bug or loophole was fixed that broke my setup like this!

Just following up to conclude that I never found a solution for this: all 192.168.* IPs are still broken within my containers. I ended up using Tailscale to access the Raspberry Pi on my LAN, which somehow works even though Tailscale is only running on the host… :person_shrugging:

It seems like no one really has the answer to what happened here, but if anyone who has an idea stumbles across this, I’d still love to know!