Host and Containers cannot communicate - MACVLAN

Hello,

I have containers on a server (Ubuntu Server 20.04.2 LTS; Docker 20.10.7) which are connected with MACVLAN in the same network (172.16.240.0/24) as the host (172.16.240.14) is. The containers have static IPs set during creation.
All containers can communicate with all hosts in 172.16.240.0/24 except 172.16.240.14, the host they are running on. All containers also have internet access.
In reverse, all hosts in 172.16.240.0/24 and all other networks can reach the containers, except 172.16.240.14.
Basically host and containers refuse to communicate.
UFW is disabled on the server, and the containers have no firewalls and no iptables rules in them.
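For reference, a macvlan network of this kind is typically created along these lines (the parent interface name, network name, and container IP below are assumptions based on the description above):

```shell
# Create a macvlan network on the LAN subnet, attached to the host NIC
# (ens160 is assumed from the host's routing table shown below).
docker network create -d macvlan \
  --subnet=172.16.240.0/24 \
  --gateway=172.16.240.1 \
  -o parent=ens160 \
  lan-macvlan

# Run a container with a static IP on that network (example address).
docker run -d --name web --network lan-macvlan --ip 172.16.240.100 nginx
```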

iptables on the host is (auto-created during setup - no manual config done):

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere

Chain FORWARD (policy DROP)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (2 references)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:9000
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:8000
ACCEPT     tcp  --  anywhere             172.19.0.4           tcp dpt:http-alt

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere
DROP       all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

curl fails with “no route to host”, which is neither true nor possible. All configuration (IP, gateway, …) must be correct; otherwise the containers and the host wouldn’t have internet and other network access.
Also all containers can communicate with each other via the IPs from the MACVLAN.

ip route

172.16.240.0/24 dev ens160 proto kernel scope link src 172.16.240.14
172.16.240.1 dev ens160 proto dhcp scope link src 172.16.240.14 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.19.0.0/16 dev br-83843de83969 proto kernel scope link src 172.19.0.1

ip route get 172.16.240.100 (IP of one of the containers)

172.16.240.100 dev ens160 src 172.16.240.14 uid 0
    cache

I didn’t find any hints on the internet other than a years-long history of similar unsolved problems.

Is it even possible? Or is this bug a “feature” that will never work?

The restriction is coming from the kernel, not Docker itself: no macvlan child interface is allowed to communicate directly with its parent interface.

The solution is to add another virtual interface with its own IP and route the traffic through this interface to the macvlan subnet range. Preferably this subnet range is a subnet within your LAN’s subnet range, which should be excluded from the LAN’s DHCP range. The host will then be able to communicate with the container IPs, though the containers will need to use the IP of the newly introduced virtual interface to communicate with the host.
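As a concrete sketch of that workaround (the interface name macvlan-shim, the shim address 172.16.240.250, and the container range 172.16.240.128/25 are all assumptions; pick addresses that are actually free and outside your DHCP range):

```shell
# Create a macvlan "shim" interface on the host, attached to the same
# parent NIC the Docker macvlan network uses (ens160 here, an assumption).
ip link add macvlan-shim link ens160 type macvlan mode bridge

# Give the shim an address reserved outside the LAN's DHCP range.
ip addr add 172.16.240.250/32 dev macvlan-shim
ip link set macvlan-shim up

# Route the container addresses via the shim instead of the parent NIC,
# so host<->container traffic no longer hits the kernel's macvlan
# parent/child restriction.
ip route add 172.16.240.128/25 dev macvlan-shim
```

Containers would then reach the host at the shim’s address (172.16.240.250 in this sketch), not at the parent interface’s IP. Note that these commands are not persistent; they would have to be reapplied at boot, e.g. via a systemd unit or your network configuration tool.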

See: Using Docker macvlan networks · The Odd Bit

To this day, I am fascinated why people actually feel the need to use macvlan. I have never seriously used it in the last 7 years: not with plain Docker, not with Docker Swarm, and not with Kubernetes. Network broadcast is the only thing that would make me use macvlan; otherwise I would prefer bridge or overlay networks.


Thank you @meyay for your explanation.
In my setup the Docker host is standalone, and mostly I’m actually using bridge networking and mapping container ports to host ports.
But some containers must have their own IP address within the main network. Simply mapping their ports to host ports (as in bridge or host mode) isn’t possible because the ports would be duplicated and cannot be changed (a dependent application running on the Docker host and on other hosts doesn’t support configurable ports).
And overlay is according to Networking overview | Docker Documentation not meant for standalone Docker hosts.
The application running in those containers normally runs in a VM, but that small program in an extra VM seems like a waste of resources, so I chose Docker: it is easy to reinstall by just recreating the container, and it is light on system resources.

So the client application has the expected service ports hard-coded? Uff! Then macvlan starts to make sense.

Yep, it’s only available for Swarm services. I prefer Swarm services over plain containers, and only use docker-compose if capabilities are required that Swarm services don’t provide (like privileged).

Yes, the client has the ports hardcoded.

For my use case it’s totally sufficient if there is no failover and no scalability, so I just use basic containers. The whole infrastructure was migrated from VMs to Docker to save resources (and therefore money) and to be easier to manage (via Portainer).

So I think there is no way around this restriction: a second interface doesn’t work together with Portainer, and even if it did, it would need trickery, as the client expects the server to have a certain IP address (because it is delivered via DHCP). Another interface would have a different IP and would require manual configuration on one client (the Docker host), causing even more problems than the chicken-and-egg problem I already have with this software.

Sounds awful… From what you wrote, I feel macvlan won’t help with your situation either. The requirements are not really a good match for Docker.

If I am not mistaken, LXD might be a better match in your situation; at least they claim to provide a user experience similar to a VM in LXD containers.

I hope you find a solution.