Update to docker-ce 28.2.2 breaks bridge networking to container

Hi,

I am running a small machine with docker and a small number of containers. My network uses IPv6 and I hate NAT; thus I am using bridges to couple the containers to the net. The machine runs Debian stable (bookworm), and I use docker-ce from “deb https://download.docker.com/linux/debian bookworm stable”.

Here is the docker compose file for my mosquitto instance:

networks:
  br196:
    name: br196
    external: true

volumes:
  data:
  log:

services:
  mosquitto_app:
    image: eclipse-mosquitto:latest
    dns:
      - "192.168.181.53"
    networks:
      br196:
        ipv4_address: 192.168.196.132
        ipv6_address: 2a01:238:43fa:bc96:16f:0:6f:203
    volumes:
      - ${PWD}/mosquitto.conf:/mosquitto/config/mosquitto.conf
      - data:/mosquitto/data
      - log:/mosquitto/log
    restart: always
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"

On Friday, the docker-ce update from 28.1.1-1 to 28.2.2-1 broke networking for this container. Going back to 28.1.1-1 fixes the issue, so it must be an issue with docker-ce.

When the container starts with docker-ce 28.2.2-1, I see the request frames (both TCP and ping) come in on the Ethernet interface, but they get lost somewhere in the system. Not even the iptables counters on the FORWARD chain see those packets.

Interface configuration, routing tables, sysctl settings (forwarding, rp_filter) and iptables contents are identical under both versions. It works with docker-ce 28.1.1-1, but not with 28.2.2-1.
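
For anyone who wants to repeat the comparison, commands along these lines cover those items (a rough sketch, not an exact transcript of what I ran; interface names obviously depend on the machine):

ip addr show
ip route
ip -6 route
sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding net.ipv4.conf.all.rp_filter
iptables-save
ip6tables-save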

Is this a known bug? Am I doing things wrong that just happened to work with the older docker-ce version?

Greetings,
Marc

We are seeing something similar. Ours looked like it was related to systemd-resolved, since we were able to get it back up by fudging resolv.conf, but dropping back to 28.1.1 fixed it …

Over here, I can rule out a DNS problem. The container doesn’t answer ping: with the new docker-ce package, the ICMP echo requests don’t reach the container in the first place.

OK, that sounds slightly similar, because we couldn’t get any name resolution for containers and lost 127.0.0.53 and other systemd services.

28.2.2 seems to break networking in some way.

Have you checked if the issue has already been reported?

You could verify whether your issue is similar to mine:

  • run tcpdump on both your host’s external interface and the bridge interface pointing towards your container
  • try to reproduce the issue

If your problem is the same, you would see the DNS query going out of the external interface, and the response coming in, but the response would never be seen on the bridge.
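
The tcpdump calls could look roughly like this (interface names are only examples; replace eth0 with your uplink and docker0 with the bridge interface your container is attached to):

tcpdump -ni eth0 port 53      # the query should leave here and the answer come back in
tcpdump -ni docker0 port 53   # with 28.2.2-1 the answer never shows up on the bridge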

Greetings
Marc

Pardon my ignorance, why would I report that issue to the moby project? Does docker-ce use moby in some way that I am not aware of?

Moby is the upstream open-source project that the docker-ce packages are based on.

For what it’s worth, I ran into this problem. I’m using Vagrant with the Docker provider, so the specifics are different, but the effect is the same.

I was able to fix it with the following in my Vagrantfile:

config.vm.network :private_network,
    docker_network__opt: "com.docker.network.bridge.gateway_mode_ipv4=nat-unprotected"

I recognize that isn’t directly applicable for docker-compose, but maybe someone will be able to translate it appropriately.

Thank you for sharing your solution. Here is the syntax for compose in another topic:
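
Roughly, when compose itself creates the network, it would look like this (an untested sketch; the option names come from the Vagrant workaround above, and for an external network like br196 they would have to be set when that network is created instead):

networks:
  br196:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.gateway_mode_ipv4: nat-unprotected
      com.docker.network.bridge.gateway_mode_ipv6: nat-unprotected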

Just in case it is related, I also found this in a moby discussion:

Direct routed access to container ports that are not exposed using -p / --publish is now blocked in the DOCKER iptables chain. moby/moby#48724

If the default iptables filter-FORWARD policy was previously left at ACCEPT on your host, and direct routed access to a container’s unpublished ports from a remote host is still required, options are:

  • Publish the ports you need.
  • Use the new gateway_mode_ipv[46]=nat-unprotected, described below.
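
For the setup in the first post, where br196 is an external network created outside of compose, the option would have to be set when the network itself is created. A rough, untested example (the subnets are only guesses derived from the addresses in the compose file):

docker network create \
  --ipv6 \
  --subnet 192.168.196.0/24 \
  --subnet 2a01:238:43fa:bc96::/64 \
  -o com.docker.network.bridge.gateway_mode_ipv4=nat-unprotected \
  -o com.docker.network.bridge.gateway_mode_ipv6=nat-unprotected \
  br196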