Problem with DNS resolution in container on bridge network

I’m running into a weird problem with containers on the default bridge network on an Ubuntu 24.04 host: any container attached to the bridge network is unable to resolve DNS names.

sudo docker run -it --rm nicolaka/netshoot nslookup google.com
;; communications error to 1.1.1.1#53: timed out
;; communications error to 1.1.1.1#53: timed out
;; communications error to 1.1.1.1#53: timed out
;; communications error to 1.0.0.1#53: timed out
;; no servers could be reached
sudo docker run -it --rm nicolaka/netshoot cat /etc/resolv.conf
# Generated by Docker Engine.
# This file can be edited; Docker Engine will not make further changes once it
# has been modified.

nameserver 1.1.1.1
nameserver 1.0.0.1
search .

# Based on host file: '/run/systemd/resolve/resolv.conf' (legacy)
# Overrides: []

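Those nameservers appear to have been copied straight from the host's systemd-resolved upstream file (the path in the comment above); the host side can be double-checked with:

cat /run/systemd/resolve/resolv.conf
resolvectl status
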
I’m able to ping 1.1.1.1 from inside the container, though:

sudo docker run -it --rm nicolaka/netshoot ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=55 time=21.2 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=55 time=19.7 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=55 time=20.8 ms

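So ICMP to 1.1.1.1 gets through while UDP DNS to the same address times out. To check whether only UDP is affected, DNS can also be forced over TCP from inside the container (netshoot ships dig):

sudo docker run -it --rm nicolaka/netshoot dig +tcp @1.1.1.1 google.com
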
curl works if I hit the IP address directly (note that the dig +short in the command below runs on the host, so the host itself resolves fine):

sudo docker run --rm -it nicolaka/netshoot curl --header 'Host: google.com' $(dig +short google.com)
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>

ufw is not running:

sudo systemctl status ufw
○ ufw.service - Uncomplicated firewall
     Loaded: loaded (/usr/lib/systemd/system/ufw.service; enabled; preset: enabled)
     Active: inactive (dead) since Fri 2025-01-10 02:32:20 UTC; 22min ago
   Duration: 20min 53.975s
       Docs: man:ufw(8)
    Process: 522 ExecStart=/usr/lib/ufw/ufw-init start quiet (code=exited, status=0/SUCCESS)
    Process: 8623 ExecStop=/usr/lib/ufw/ufw-init stop (code=exited, status=0/SUCCESS)
   Main PID: 522 (code=exited, status=0/SUCCESS)
        CPU: 4ms
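
Even with ufw inactive, Docker still programs its own iptables/nftables rules for forwarding and NAT, so traffic from docker0 can still be dropped or left un-masqueraded at that layer. The relevant chains can be inspected with something like:

sudo iptables -L FORWARD -n -v --line-numbers
sudo iptables -L DOCKER-USER -n -v
sudo iptables -t nat -L POSTROUTING -n -v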

The network is configured via netplan and looks like this:

sudo cat /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    ethernets:
        enp2s0:
            addresses:
            - 192.168.1.52/24
            nameservers:
                addresses:
                - 1.1.1.1
                - 1.0.0.1
                search: []
            routes:
            -   to: default
                via: 192.168.1.1
    version: 2

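Since containers on the default bridge talk to 1.1.1.1 directly, it's also worth confirming that the host itself can reach that resolver over UDP (run on the host, not in a container):

dig @1.1.1.1 google.com
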
I did a quick tcpdump on the docker0 interface and saw that while the DNS queries were going out, no responses were coming back in:

sudo tcpdump -i docker0 port 53
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
02:57:29.925524 IP 172.17.0.2.41212 > one.one.one.one.domain: 10115+ A? google.com. (28)
02:57:34.931551 IP 172.17.0.2.38413 > one.one.one.one.domain: 10115+ A? google.com. (28)
02:57:39.933246 IP 172.17.0.2.35484 > one.one.one.one.domain: 10115+ A? google.com. (28)
02:57:44.938660 IP 172.17.0.2.36709 > one.one.one.one.domain: 10115+ A? google.com. (28)
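
To check whether those queries actually leave the host after NAT, the same capture can be repeated on the uplink interface (enp2s0 here). If the masqueraded queries show up there with the host's source address but no replies come back, the problem is upstream of the host; if they never appear at all, forwarding or masquerading on the host is the likely culprit:

sudo tcpdump -ni enp2s0 port 53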

Any help resolving this would be greatly appreciated.

Any VPN, PiHole or Adguard involved?

No VPN. It’s a VM that I just spun up on my VMWare ESXi server. What I noticed is that if I create a new custom bridge, DNS resolution works.
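
For reference, the workaround looks like this (the network name here is just an example):

sudo docker network create dnstest
sudo docker run -it --rm --network dnstest nicolaka/netshoot nslookup google.com

On a user-defined bridge, containers use Docker's embedded DNS resolver (127.0.0.11), which forwards queries from the daemon on the host, so the path is different from the default bridge where the container sends queries to 1.1.1.1 directly. That difference is presumably why the custom bridge works while the default one doesn't.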