Docker containers on Ubuntu 24.04 cannot reach external network

Hello everyone,

I’m experiencing a very persistent and perplexing networking issue with Docker containers on my fresh Ubuntu 24.04 installation. My containers can reach the docker0 bridge, but cannot access the host’s local network gateway (192.168.180.1) or the internet (8.8.8.8, google.com).

I have performed extensive diagnostics and ruled out many common causes. I’m hoping someone here might have encountered a similar issue or can offer new insights.

System Information:

  • OS: Ubuntu 24.04 LTS (Desktop), recently installed from scratch.
  • Kernel Version: 6.11.0-26-generic
  • Docker Version: 28.2.2 (also tried 28.1.1)
  • Docker Installation Method: Installed via the official APT repository. Docker has been reinstalled several times, but the issue persists.
  • Network Interface: Wi-Fi (wlp0s20f3)
  • docker0 bridge IP: 172.17.0.1/16

Problem Symptoms:

  • From inside a busybox container (reproduced in the sketch after this list):
    • ping 172.17.0.1 (docker0) WORKS
    • ping 192.168.180.1 (local gateway) FAILS (100% packet loss)
    • ping 8.8.8.8 (external DNS) FAILS (100% packet loss)
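
For reference, the symptoms can be reproduced roughly like this (a sketch; the busybox image and the addresses are the ones listed above):

# start a throwaway busybox container on the default bridge
docker run --rm -it busybox sh
# then, inside the container:
ping -c 3 172.17.0.1      # docker0 bridge      -> works
ping -c 3 192.168.180.1   # local gateway       -> 100% packet loss
ping -c 3 8.8.8.8         # external DNS        -> 100% packet loss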

Diagnostic Steps Taken & Observations (the commands for these checks are collected in a sketch after this list):

  1. IP Forwarding:
  • sysctl net.ipv4.ip_forward returns net.ipv4.ip_forward = 1
  2. iptables Rules:
  • sudo iptables -L -v -n and sudo iptables -t nat -L -v -n show standard Docker rules.
  • The MASQUERADE rule for 172.17.0.0/16 on POSTROUTING is present and active.
  • No obvious blocking rules were found in the filter table.
  3. tcpdump Analysis (crucial findings):
  • On docker0 (inside container: ping 8.8.8.8):
  • sudo tcpdump -i docker0 -n icmp shows 172.17.0.2 > 8.8.8.8: ICMP echo request. (Outgoing packets are seen).
  • NO ICMP echo replies from 8.8.8.8 are ever seen on docker0.
  • On wlp0s20f3 (host’s external Wi-Fi interface):
  • sudo tcpdump -i wlp0s20f3 -n icmp shows:
    • Outgoing 192.168.180.48 > 8.8.8.8: ICMP echo request (confirming NAT is working).
    • Incoming 8.8.8.8 > 192.168.180.48: ICMP echo reply (confirming replies reach the host).
    • Critically: Host sends 192.168.180.48 > 8.8.8.8: ICMP time exceeded in-transit back to 8.8.8.8 for these reply packets. This indicates the host receives the reply but drops it while forwarding it to the container, because the TTL expires on that hop.
  4. AppArmor Investigation (ruled out as cause):
  • sudo aa-status showed docker-default profile in enforce mode.
  • Could not find the docker-default profile file in /etc/apparmor.d/ (Docker generates this profile and loads it into the kernel at daemon startup rather than shipping a file).
  • sudo aa-disable docker-default and sudo aa-complain docker-default failed (these tools expect a profile file or executable path, which docker-default does not have).
  • Attempted to create /etc/docker/daemon.json with "apparmor-profile": "unconfined": this caused the Docker daemon to fail to start even with valid JSON syntax and permissions (daemon.json has no such key, so the daemon rejects the file; the per-container equivalent would be docker run --security-opt apparmor=unconfined).
  • Removed daemon.json, Docker restarted, but the networking problem persisted.
  • Crucial Test: Globally disabled AppArmor via GRUB (apparmor=0 in GRUB_CMDLINE_LINUX_DEFAULT) and rebooted.
  • sudo aa-status showed apparmor module is loaded but apparmor filesystem is not mounted (confirming it’s effectively disabled).
  • Problem still persisted: Containers still could not ping external IPs or gateway. This definitively rules out AppArmor.
  5. Other Firewall/Network Managers:
  • sudo ufw status shows Status: inactive.
  • NetworkManager is active, systemd-networkd is inactive.
  6. Routing Table & Docker0 Interface:
  • ip route show and ip addr show docker0 outputs confirmed that the docker0 bridge is correctly configured with 172.17.0.1/16 and the kernel has the correct route for 172.17.0.0/16 pointing to docker0.
  7. conntrack Table:
  • Cleared the conntrack table (sudo conntrack -F) after stopping Docker, then restarted Docker. Problem still persisted.
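
For reference, the checks above condense to roughly the following commands (a sketch using the interface and subnet names from this report):

# 1. IP forwarding
sysctl net.ipv4.ip_forward                 # expect: net.ipv4.ip_forward = 1

# 2. iptables filter and NAT rules (look for the MASQUERADE rule for 172.17.0.0/16)
sudo iptables -L -v -n
sudo iptables -t nat -L -v -n

# 3. Watch the ICMP traffic on both sides while pinging 8.8.8.8 from a container
sudo tcpdump -i docker0 -n icmp
sudo tcpdump -i wlp0s20f3 -n icmp

# 4.-7. AppArmor, firewall, routing, conntrack
sudo aa-status
sudo ufw status
ip route show
ip addr show docker0
sudo conntrack -F                          # only with Docker stopped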

Can you check whether a container attached to a user-defined network (docker network create <networkname>, then run the container with --network <networkname>) suffers from the same problem?

Docker Compose project deployments always create a user-defined network; only docker run without --network ... uses the default bridge.
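
A minimal version of that test might look like this (the network name testnet is only a placeholder):

docker network create testnet
docker run --rm --network testnet busybox ping -c 3 8.8.8.8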

Thank you for your suggestion! I’ve tested running a container on a user-defined network, but unfortunately, the problem persists with the exact same symptoms.

I just remembered that there was a topic about this a couple of days ago: Update to docker-ce 28.2.2 breaks bridge networking to container

I could only find a reported issue about the breakage with swarm overlay networks: 28.2.0+ iptables manipulation breaks network on a docker swarm platform · Issue #50129 · moby/moby · GitHub

I’ve tried downgrading Docker to version 28.1.1-1, but the problem persists. Containers still cannot ping external networks. This indicates the issue is not specific to the 28.2.x Docker versions.
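
For anyone who wants to repeat the downgrade test: with the official APT repository it should go roughly like this (the exact version strings printed by apt-cache are an assumption and will differ per Ubuntu release):

# show the versions the Docker repository offers
apt-cache madison docker-ce

# install a specific older version, using the matching strings from the output above
sudo apt-get install --allow-downgrades docker-ce=<version> docker-ce-cli=<version>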

Hello everyone,

I’m incredibly happy to report that the root cause of the problem has been identified and resolved!

After extensive troubleshooting, including downgrading Docker and verifying iptables and br_netfilter, the issue remained. The key to understanding the problem was the ICMP time exceeded in-transit messages we observed in tcpdump.

The final crucial diagnostic step (suggested offline) was to connect my system to a different network environment – specifically, my mobile hotspot.

Here are the tcpdump results from my wlp0s20f3 interface, showing the difference that led to the solution.
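
Both captures were taken with tcpdump in verbose mode so that the IP TTL of every packet is printed; an invocation along these lines produces the output format shown below (the exact flags are my reconstruction, not a quote of the original command):

sudo tcpdump -i wlp0s20f3 -v -n icmp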


1. tcpdump output when connected to the problematic Wi-Fi network (TTL=1 issue):

tcpdump: listening on wlp0s20f3, link-type EN10MB (Ethernet), snapshot length 262144 bytes
10:53:49.589520 IP (tos 0x0, ttl 63, id 55966, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.180.48 > 8.8.8.8: ICMP echo request, id 9, seq 0, length 64
10:53:49.617033 IP (tos 0x0, ttl 1, id 0, offset 0, flags [none], proto ICMP (1), length 84)
    8.8.8.8 > 192.168.180.48: ICMP echo reply, id 9, seq 0, length 64
10:53:49.617074 IP (tos 0xc0, ttl 64, id 58242, offset 0, flags [none], proto ICMP (1), length 112)
    192.168.180.48 > 8.8.8.8: ICMP time exceeded in-transit, length 92
        IP (tos 0x0, ttl 1, id 0, offset 0, flags [none], proto ICMP (1), length 84)
        8.8.8.8 > 192.168.180.48: ICMP echo reply, id 9, seq 0, length 64

Observation from this output: The incoming ICMP echo reply packets from 8.8.8.8 consistently had a TTL of 1. This caused my host to send ICMP time exceeded in-transit messages when attempting to forward the packet internally to the Docker bridge, as TTL would decrement to 0.


2. tcpdump output when connected to a mobile hotspot network (TTL is normal):

tcpdump: listening on wlp0s20f3, link-type EN10MB (Ethernet), snapshot length 262144 bytes
11:02:43.006938 IP (tos 0x0, ttl 63, id 11785, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.123.124 > 8.8.8.8: ICMP echo request, id 11, seq 0, length 64
11:02:43.086475 IP (tos 0x0, ttl 117, id 0, offset 0, flags [none], proto ICMP (1), length 84)
    8.8.8.8 > 192.168.123.124: ICMP echo reply, id 11, seq 0, length 64

Observation from this output: The incoming ICMP echo reply packets from 8.8.8.8 now have a normal TTL of 117 (instead of 1). My host no longer sends “ICMP time exceeded in-transit” messages.


Conclusion:

Crucially, with the normal TTL, ping 8.8.8.8 from inside the Docker container now works perfectly!

This definitively confirms that my system and Docker setup are fully functional under normal network conditions. The persistent issue was caused by the specific network I was previously connected to. That network (likely a router, firewall, or some other device) was somehow reducing the TTL of incoming packets to 1, preventing them from surviving the internal hop from my host’s main interface to the Docker bridge.

I will need to investigate the configuration of that network or simply avoid using Docker on it.
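
For anyone who suspects the same cause on their own network, a quick check that needs no Docker at all is to ping from the host and look at the ttl field in the reply line:

ping -c 1 8.8.8.8
# on the problematic network the reply line shows ttl=1,
# on a normal network something much larger (e.g. ttl=117, as in the hotspot capture above)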

Added: sudo iptables -t mangle -I PREROUTING -j TTL --ttl-inc 2 works as a workaround on this network.
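
To expand on that workaround (a sketch; the increment of 2 is the value from the line above, and the rule does not survive a reboot unless it is saved):

# raise the TTL of every incoming packet by 2 in the mangle PREROUTING chain,
# so a reply arriving with TTL=1 still survives the forwarding hop to docker0
sudo iptables -t mangle -I PREROUTING -j TTL --ttl-inc 2

# verify the rule and watch its packet counters
sudo iptables -t mangle -L PREROUTING -v -n

# remove it again once it is no longer needed
sudo iptables -t mangle -D PREROUTING -j TTL --ttl-inc 2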


This is a new one. Thank you for sharing your insights!