Need help - Docker macvlan static IP - yet a DHCP request

Hi, first of all, I hope this is the right place / category for such a question.

I'm trying to run two Docker containers on my Raspberry Pi, each with its own static IP address.
So far, I got it working using the macvlan network driver.
But there's this strange behaviour: whenever I start a container, my router (Fritz!Box 7590) reports a new network device and assigns a new IP to it. The container is reachable on its static IP, and the automatically assigned IP is just unused and fills up the DHCP leases. Every (re)start, a new unused IP.

How can I stop these unnecessary DHCP requests?
I'm just getting started with Docker, so any help is appreciated!

My specific network and container configuration (my router’s home network is on 192.168.2.x):

docker network create -d macvlan --subnet=192.168.2.0/24 --gateway=192.168.2.1 -o parent=eth0 home_network

docker run -d --name "diyHue" -v '/mnt/hue-emulator/export/':'/opt/hue-emulator/export/':'rw' -e MAC='02:42:AC:F8:1B:B8' -e IP='192.168.2.241' -p 80:80/tcp -p 443:443/tcp -p 1900:1900/udp -p 2100:2100/udp -p 1982:1982/udp --ip 192.168.2.241 --mac-address 02:42:AC:F8:1B:B8 --network home_network --restart always diyhue/core:latest

docker run --init -d --name="home-assistant" -v /home/pi/homeassistant:/config -v /etc/localtime:/etc/localtime:ro --network home_network --ip 192.168.2.242 --mac-address 02:42:AC:11:CE:10 --restart always homeassistant/raspberrypi3-homeassistant

Hi, I experienced this exact same problem and never found a solution for it.
Every time I add another container to my macvlan or restart an existing one, my router receives another DHCP lease with a random MAC address and the hostname of my Docker host.
This makes my Docker host unreachable by its name, since DNS now has several entries for all these random MAC addresses.

I found out that it seems to be Raspberry Pi related. I tested this on a Pi 2 and a Pi 4 with different kernels (4.19.118 to 5.4.83) of Raspbian / Raspberry Pi OS and HypriotOS as well. Always the same behaviour.
But when I run the same configuration on PC hardware with Debian (kernel 4.19.118) as the Docker host, this issue does NOT occur.

Any ideas for further troubleshooting are appreciated :slight_smile:

@dreiekk did you ever solve this issue or found a workaround?

Thanks!
Kind regards
Thomas

Hi - and warming up this topic again,

apart from “I have the same problem / behaviour” there’s nothing much to add from my side.
This also seems to be a problem of the router (a Fritz!Box in this case).

The problem also occurred for me with a Reolink webcam, which triggered additional DHCP requests. Neither Reolink nor AVM was able to resolve this issue, so I was already aware of additional hosts in the list of "known hosts" on the Fritz!Box when my Docker host (Raspberry Pi 4B) started to add more and more hosts to the list.

It is true that whenever a container gets started (regardless of the network it is in), a new address is issued to a non-existent MAC address.

My best guess is that the interface of the Raspberry Pi (and the Reolink) behaves like a switch which tries to advertise a new host in the network. The Fritz!Box seems to react quite quickly and reserves an address for it, seeing just the hostname "raspberry" as the requestor (and thus adding the address to this hostname).

If you ask me, AVM should take care of this, but they do not seem inclined to solve these kinds of problems. I already involved them in the problems I had with the Reolink camera, but they told me to ask Reolink about it - which I did. Long story short: Reolink said their cameras test (from time to time) whether they are still connected to a network, kind of probing it. This also seems to trigger a DHCP lease on the Fritz!Box side.

Conclusion? Is Fritz!Box-DHCP a little “trigger-happy”? Probably.

If I have time, I will create a support ticket again at AVM, this time for the combination Fritz!Box <-> Docker host. Hopefully this issue will be recognized by them, as especially in the Docker environment it's a hassle to delete the additional hosts after a reboot of either the host or the container(s).

Will keep you posted, if AVM reacts to this :slight_smile:

Best regards,
Martin!

The clean solution is to create the Docker macvlan network using an IP range within your subnet that is outside the DHCP range of your router.
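As a sketch (the subnet, gateway, interface name, and range below are assumptions matching the OP's network; adjust to yours), the `--ip-range` option tells Docker's IPAM to hand out container addresses only from a slice of the subnet, which you can pick outside the router's DHCP pool:

```shell
# Hypothetical example: LAN is 192.168.2.0/24, router's DHCP pool ends below .192.
# --ip-range restricts Docker's own address assignment to .192-.223.
docker network create -d macvlan \
  --subnet=192.168.2.0/24 \
  --gateway=192.168.2.1 \
  --ip-range=192.168.2.192/27 \
  -o parent=eth0 \
  home_network
```

Containers started with an explicit `--ip` can still use any free address in the subnet; `--ip-range` only constrains what Docker assigns automatically.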

Why would anyone expect the network's DHCP server not to respond to DHCP requests from a macvlan client interface? Network-wise, a macvlan client interface appears like a standalone computer. Why would it be an issue of the network's DHCP server that a Docker macvlan network provides its own DHCP service?

Older versions of the Docker macvlan documentation provided hints that the IP range should be outside the network's DHCP range AND that the kernel prevents direct communication between a macvlan host interface and the macvlan client interfaces. Neither of these details can be found in the docs anymore, even though they still apply…
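For completeness, the usual workaround for the second point (host cannot talk to its own macvlan containers directly) is to give the host its own macvlan interface on the parent link. This is a sketch; the interface name `macvlan0`, the addresses, and the container range are assumptions:

```shell
# Create a macvlan interface on the host itself, bridged to the same parent
sudo ip link add macvlan0 link eth0 type macvlan mode bridge
# Assign it a free host address from the LAN and bring it up
sudo ip addr add 192.168.2.250/32 dev macvlan0
sudo ip link set macvlan0 up
# Route traffic for the containers' IP range through this interface
sudo ip route add 192.168.2.192/27 dev macvlan0
```

Traffic between the host and the containers then flows via `macvlan0` instead of being dropped by the kernel's macvlan isolation.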

If someone wanted to consider this situation an issue, then it's a documentation bug, due to missing but relevant information.

Hey Meyay,

thanks for your answer. Indeed, the Docker macvlan is configured the way you mentioned. The IPv4 range is set to 192.168.178.224/27, which is ABOVE the range my Fritz!Box (router) assigns IP addresses from (192.168.178.20 - 200).

But for some reason - that's why I also involved AVM support - the router becomes aware of Docker containers being started and sends a DHCP offer (in the ROUTER's DHCP range) to the Raspberry Pi (the Docker host!), which is of course never used by the Raspberry Pi as it is set to a fixed IP.

I personally think that the router and its firmware are in fact the problem here, as I stated this happened with completely different hardware and software as well (although Reolink might use Docker technology inside their firmware, who knows?).

The initial answer from AVM support was along the lines of: "Configure your DHCP with a shorter lease time" and "Do not set a fixed IP on your Docker host (Raspberry Pi) but let the router use a reservation for it."
There was also the statement "We do not have experience with Docker…".

In sum, I fear there's no big help to be had from AVM at all, but I can confirm the macvlan settings are as you mentioned.

I could try a completely different subnet for my Docker environment (e.g. 10.10.10.0/24) and set a static route on the router to this subnet if this annoys me too much in the future, but it would have been great to see WHY this happens.

Best regards

I have been using Fritz!Boxes for ages and can't say that I share your experience.

Back in the day, I had a 3-node master-only Swarm cluster and used macvlan without those implications. The cluster nodes used static IPs outside the DHCP range, and the macvlan range was outside the DHCP range. The containers appeared like every other host in the FB's network overview. I must have had a 7590 then as well, as I bought it right away when it entered the market, which more or less matches the time when Swarm mode was introduced in Docker 17.03.

Though, what really caused problems on the FB was keepalived and its failover behavior!
I manually added an IP (probably somewhere in the port-forwarding rule creation dialog, with a MAC address!), manually assigned it the hostname "swarm", and used it as the failover IP on my nodes "swarm1", "swarm2", "swarm3". This worked more or less reliably, except that sometimes the IPs of the node names flapped from the node IP to the failover IP on whichever node was the current keepalived master.

I can check what happens if I run macvlan with my 6591 in the next days. I just have to find an IP range within my subnet that is suitable for a proper test.

On second thought, maybe I haven't had the issue because I assigned static IPs using the ipv4_address setting (which only works for plain docker-compose deployments). I did set up macvlan for Swarm as well, but didn't like the idea that a Swarm service does not allow setting the ipv4_address.

Note: I stopped using macvlan after my experiments, as my use cases work like a charm without it. Instead I use Pi-hole to manage domains that resolve to my nodes, and a containerized reverse proxy that forwards traffic for a domain to a specific containerized service.

Hmm, I wonder if your problem is caused by the choice of your IP range? 192.168.178.224/27, if considered as a subnet, would have the same broadcast IP (192.168.178.255) as your local 192.168.178.0/24 network.

Is it possible that this causes the DHCPDISCOVER broadcast message to be received not only by the Docker network's built-in DHCP, but also by the FB?

Have you tried a different network segment? I had mine at 192.168.x.64/27, where the broadcast IP is 192.168.x.95, which has no special meaning in my 192.168.x.0/24 network. My DHCP range is from 192.168.x.100 to 192.168.x.240.
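To illustrate the arithmetic behind that (pure shell, nothing network-specific): the broadcast address of a /27 slice is the last address of its 32-address block, so .224/27 ends at .255 while .64/27 ends at .95:

```shell
# Last-octet broadcast for an a.b.c.X/NN slice (valid for prefixes /24 to /30)
broadcast_octet() {
  local octet=$1 prefix=$2
  local size=$(( 1 << (32 - prefix) ))        # block size in addresses (32 for /27)
  echo $(( octet / size * size + size - 1 ))  # last address of the block
}

broadcast_octet 224 27   # -> 255 (same as the /24 broadcast, hence the collision)
broadcast_octet 64 27    # -> 95  (no special meaning in the /24)
```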

Great thought!
The Broadcast SHOULD be the same here, you’re right.
I will try out other segments as you mentioned to avoid .255 as broadcast.

Also thanks for all the input about your experiences! Very helpful for me, as I'm just getting into more complex and fancy setups in my environment at the moment!

Will report back.

I should have been more specific. The domains resolved to the keepalived failover IP and not to the individual node IPs. Also, WAN port forwarding of ports 80 and 443 was directed to the failover IP. I never had to worry if one of my master nodes was down… and the cluster took care of self-healing services in case a node had an unplanned outage.

It's a pity that a solution like MetalLB does not exist for Swarm - it would make keepalived obsolete, as it uses the same method to manage a failover IP in a cluster… but without adding extra maintenance overhead.

Hi,

I experience the exact same behaviour. RPi 4 + Docker with macvlan, and every container sends out a DHCPDISCOVER on each startup. I found no working solution and I think it's simply a bug. I cannot see the (random) MAC (and the assigned IP) anywhere else after the initial DHCP communication.

I traced it down with Wireshark. The only workaround for me is (using a MikroTik CRS) to create a switch ACL which filters (drops) every DHCP request (UDP, ports 67-68) coming from the port the RPi is connected to.
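If you want to reproduce such a trace without Wireshark, tcpdump on the macvlan parent interface shows which MAC sends the DISCOVER (the interface name `eth0` is an assumption):

```shell
# Print DHCP packets on the parent interface, with Ethernet headers (-e) so the
# source MAC is visible, and without name resolution (-n); Ctrl-C to stop.
sudo tcpdump -i eth0 -e -n 'udp port 67 or udp port 68'
```

Restarting a container while this runs should show whether the DISCOVER really carries a random MAC, as described above.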

BTW: This is no AVM-specific problem; the initial DHCPDISCOVER (from the starting container with a random MAC) is RFC-conformant, and the Fritz!Box answers it as dictated by the RFC. Other DHCP servers will behave exactly the same way.

In conclusion, I see no way to prevent this without a managed switch that lets you filter out the DHCP requests. Maybe the built-in iptables firewall on the Linux host could do this too, but I'm not 100% sure about it (the initial DHCP requests run at layer 2, a firewall usually at layer 3). To me - using a MikroTik CRS - this is the cleanest way: ACL → match UDP port 67 or 68 → DROP

I am not sure if this will help, but seeing that it's a problem together with an AVM Fritz!Box, I presume it is not a large-scale installation.
What I have done so far is make sure that things work as I would expect:

make sure the macvlan kernel module is loaded:

lsmod | grep macvlan

and if not load it:

sudo modprobe macvlan

define a macvlan network in compose yaml:

networks:
  lan:
    name: lan
    driver: macvlan
    driver_opts:
      parent: eth0 #your ethernet interface
    ipam:
      config:
        - subnet: 192.168.3.0/24 # I use the same subnet as my LAN router.

define a service in compose yaml with predefined MAC address:

version: "3"
services:

  openhab:
    image: openhab/openhab:${OH_VERSION:-3.2.0.M3}
    mac_address: 02:42:ac:11:65:45
    networks:
      lan:
        ipv4_address: 192.168.3.220
    container_name: openhab-pro
    hostname: openhab-pro

Prior to starting the service, go to the AVM Fritz!Box and define a fixed IP allocation for the predefined MAC address.

I hope this helps in any way.

openhab-pro is also resolved by the AVM DNS server.