I’m running Debian Linux 12 on a Dell OptiPlex 990 booted via UEFI with a very fast NVMe drive.
When I run my build on the default bridge network (the build installs gems, so it makes many network calls), it runs fine for a minute or two but eventually times out fetching the next gem. Using the host network (--network host) works fine. This happens with any image I run.
Running the exact same installation of Debian 12 on the exact same Dell OptiPlex 990, booted into legacy BIOS with an old SATA 7800 drive, works just fine with the bridge network. Weird.
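For reference, a minimal way to reproduce the symptom (the image and gem here are just placeholders; my real build is more involved):

# Default bridge network: gem downloads stall after a minute or two
docker run --rm ruby:3.2 gem install rails

# Host network: the same command completes without timeouts
docker run --rm --network host ruby:3.2 gem install rails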
The only difference I can see is that in the NVMe installation, Docker insists on loading the 8021q VLAN module, even though I’m not using VLANs at all. Could it be getting in the way? Disabling it doesn’t seem to be an option, as it’s a kernel module. My dmesg output shows nothing strange other than the usual:
[ 9.726158] Bridge firewalling registered
[ 9.791160] Initializing XFRM netlink socket
[ 44.217029] docker0: port 1(vethfc13c2d) entered blocking state
[ 44.217034] docker0: port 1(vethfc13c2d) entered disabled state
[ 44.217102] device vethfc13c2d entered promiscuous mode
[ 44.217166] docker0: port 1(vethfc13c2d) entered blocking state
[ 44.217168] docker0: port 1(vethfc13c2d) entered forwarding state
[ 44.217527] IPv6: ADDRCONF(NETDEV_CHANGE): vethc14b22f: link becomes ready
[ 44.217599] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready
[ 44.416458] docker0: port 1(vethfc13c2d) entered disabled state
[ 44.416665] eth0: renamed from vethc14b22f
Although now that I look at it, maybe the “Bridge firewalling registered” line is an issue…
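For what it’s worth, the usual way to keep a kernel module from auto-loading is a modprobe blacklist, roughly like this (the file name is just an example, and a plain blacklist only stops loading by alias; the install line also blocks explicit loads):

# Check whether the 8021q VLAN module is currently loaded
lsmod | grep 8021q

# Prevent it from being auto-loaded at boot (example file name)
echo "blacklist 8021q" | sudo tee /etc/modprobe.d/blacklist-8021q.conf
echo "install 8021q /bin/false" | sudo tee -a /etc/modprobe.d/blacklist-8021q.conf

# Rebuild the initramfs so the blacklist applies early in boot, then reboot
sudo update-initramfs -u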
I’ve done some more investigation into this. The veth interface on my bridge network starts out fine, like this:
21: veth3c6d0fd@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-cb8921c8ea86 state UP group default
link/ether be:90:b0:89:ef:6b brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::bc90:b0ff:fe89:ef6b/64 scope link
valid_lft forever preferred_lft forever
But after a minute it appears to fail to obtain an IPv4 address and is left with a 169.254.x.x address marked scope global:
21: veth3c6d0fd@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-cb8921c8ea86 state UP group default
link/ether be:90:b0:89:ef:6b brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 169.254.133.152/16 brd 169.254.255.255 scope global veth3c6d0fd
valid_lft forever preferred_lft forever
inet6 fe80::bc90:b0ff:fe89:ef6b/64 scope link
valid_lft forever preferred_lft forever
On Ubuntu this works because the veth never obtains an inet addr, so I assume the 169.254 address here comes from a failed DHCP attempt. Should veth interfaces ever obtain an inet addr? If not, what can I do to disable the DHCP call?
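From what I can tell, the veth interfaces Docker creates normally carry only a link-local IPv6 address and no IPv4 at all, so a 169.254.x.x address with scope global suggests a host-level network manager is claiming the veth and falling back to IPv4 link-local. A quick way to see which daemon might be doing that (assuming a systemd-based install):

# One of these is probably configuring the veth interfaces behind Docker's back
systemctl is-active NetworkManager connman systemd-networkd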
Ah, my apologies. Thank you, @rimelek. I was trying to figure out whether this is a Debian issue or a Docker issue. In the future I will follow the proper guidelines.
No problem. For some reason this started happening with Debian 12 recently, and I wanted to suggest a search keyword to find the other cases, but I couldn’t figure out what to search for. Fortunately I had the first issue in my bookmarks.
For my installation, the blacklist wasn’t being honoured for some reason. I ended up switching to NetworkManager and removing connman, which solved this.
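For anyone who hits the same thing, these are the kinds of entries involved; the paths and patterns are a sketch and may need adjusting for your release. connman reads its interface blacklist from /etc/connman/main.conf, and NetworkManager can be told to leave Docker’s interfaces alone with a drop-in:

# /etc/connman/main.conf -- the blacklist that was not honoured in my case
[General]
NetworkInterfaceBlacklist=vmnet,vboxnet,virbr,ifb,docker,veth

# /etc/NetworkManager/conf.d/ignore-docker.conf (example file name):
# keep NetworkManager off Docker's bridge and veth interfaces
[keyfile]
unmanaged-devices=interface-name:docker0;interface-name:veth*;interface-name:br-*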