Docker and network - ip/port binding issues

Hi all,
I've been struggling with some network issues in my home lab.

Current setup:
I have multiple Lenovo M710q Tiny desktops running Ubuntu Server 22.04, fully patched and updated; the same goes for Docker.
Each host has vlan800 configured on the NIC in addition to the default VLAN.
They also have a shim interface so that the hosts themselves can communicate with containers that have dedicated static IPs. Those dedicated IPs are all assigned on the vlan800 network through the Docker container configuration.
This works fine.
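For reference, the VLAN side of this lives in Netplan along these lines (a minimal sketch of my setup; the NIC name and the address are placeholders, not my real values):

```yaml
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: true            # default (untagged) VLAN
  vlans:
    vlan800:
      id: 800
      link: eno1             # tagged VLAN on the same NIC
      addresses:
        - 192.0.2.21/24      # placeholder host IP on vlan800
```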

However, I now want to deploy a Traefik container, and I want to bind it to a specific host IP and port on the vlan800 interface. For testing I use the whoami container on port 80 to keep things simple.

However, it does not work.
When I deploy the container (with docker run or docker compose) using -p 80:80, I can access the whoami site on port 80 via the host's primary IP, but not via any of the host's other IPs. If I check netstat, I can see that the docker-proxy process is bound as expected.

If I bind to a host IP on the vlan800 network (-p <vlan800 IP>:80:80), it shows up as bound in netstat, and docker inspect seems to show it correctly as well. But no traffic gets through.
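The IP-specific form of the publish looks like this in compose (the address below is a placeholder standing in for my vlan800 host IP):

```yaml
services:
  whoami:
    image: traefik/whoami
    ports:
      - "192.0.2.21:80:80"   # placeholder: bind only on the vlan800 host IP
```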

If I set network_mode: host, the port is open on all of the host's IPs and I can access the whoami site on any of them.
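That is, with host networking there is no port mapping at all; the container binds directly in the host's network namespace:

```yaml
services:
  whoami:
    image: traefik/whoami
    network_mode: host   # no ports: mapping; listens on every host IP
```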


root@srv51:/docker/compose/tvheadend # netstat -tulpn | grep LISTEN
tcp        0      0*               LISTEN      661531/docker-proxy 
tcp        0      0*               LISTEN      661519/docker-proxy

As can be seen here, docker-proxy is listening on two IPs: the first is the host's primary vlan1 IP and the second is the host's IP on vlan800. The host only responds on the first one.

A curl test from the host itself behaves exactly the same (i.e. this does not seem to be a routing issue). There is no firewall or iptables rule actively blocking anything in any shape or form.
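One suspect worth inspecting, since policy-based routing is involved: strict reverse-path filtering (rp_filter=1) can silently drop packets when the route back to the source would use a different interface or table than the one the packet arrived on. A read-only way to check it (standard Linux /proc paths; per-interface values exist under the same directory tree):

```shell
# Inspect reverse-path filtering: 0 = off, 1 = strict, 2 = loose.
# Strict mode can drop packets arriving on vlan800 if the reply route
# is asymmetric (e.g. because of policy-based routing).
for f in /proc/sys/net/ipv4/conf/all/rp_filter \
         /proc/sys/net/ipv4/conf/default/rp_filter; do
  if [ -r "$f" ]; then
    printf '%s = %s\n' "$f" "$(cat "$f")"
  else
    printf '%s = unavailable\n' "$f"
  fi
done
```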

So, the question is: when I bind to a specific host IP and port on the vlan800 interface, why does the host not pass traffic through to the container, even though the process is clearly bound as expected, while traffic on the host's primary IP on the default VLAN works fine?

I checked the iptables NAT table too:

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --              ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --           !          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
MASQUERADE  all  --           
MASQUERADE  all  --           
MASQUERADE  all  --           
MASQUERADE  tcp  --            tcp dpt:9443
MASQUERADE  tcp  --            tcp dpt:9001
MASQUERADE  tcp  --            tcp dpt:80

Chain DOCKER (2 references)
target     prot opt source               destination         
RETURN     all  --             
RETURN     all  --             
RETURN     all  --             
DNAT       tcp  --              tcp dpt:9443 to:
DNAT       tcp  --              tcp dpt:9001 to:
DNAT       tcp  --        tcp dpt:80 to:
DNAT       tcp  --        tcp dpt:80 to:

Seems correct, yes?

I added another IP to the host's primary interface, and this works without issue: I can publish port 80 on it the same as with the .221 IP. No problem.
So this seems to be an interface problem of some sort.

Just to clarify:

- I can ping all the host IPs from my various subnets.
- I can access :80 on the container's private IP directly from the host.
- I can access :80 from another host, but only via the IPs on the default VLAN (IPs & 201), not via the IPs on vlan800.
- I use Netplan for interface management.
- I have policy-based routing and a shim interface configured, so that the host's other IPs are reachable from other subnets, and so that the host can communicate with containers using dedicated IPs on the vlan800 interface. Since I can ping IPs on the vlan800 interface (whether assigned with Netplan or configured by Docker) from hosts in other subnets, routing is working as it should.
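For completeness, this is how I inspect the policy routing (read-only iproute2 commands; the extra tables are whatever Netplan wrote, and `ip rule` lists which ones exist; guarded so it degrades gracefully where iproute2 is missing):

```shell
# Read-only inspection of policy-based routing (assumes iproute2).
if command -v ip >/dev/null 2>&1; then
  ip rule show               # lookup rules and their priorities
  ip route show table main   # default table; extra tables appear in 'ip rule'
else
  echo "iproute2 not available"
fi
```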

I’m quite puzzled… :slight_smile: :face_with_raised_eyebrow: