Docker Container on Alternative VLAN & Subnet

My docker container sits on a different subnet and VLAN to my docker host.

Docker host 192.168.5.201
Subnet mask 255.255.255.0
Default Gateway 192.168.5.1
VLAN 0

I need to place my TVheadend container on a different subnet so that the container's IP address sits on:
TVheadend container 192.168.30.201
Subnet mask 255.255.255.0
Default Gateway 192.168.30.1
VLAN 300

TVHeadend works fine, and with it on VLAN 300 with the 192.168.30.201 IP address I can use policy-based routing on my router to send its traffic via a different VLAN route. I can connect to it from all hosts on my network EXCEPT the docker host itself. The problem is that I need to connect to port 9981 on the container from the host to access the API of the web interface.

When I try to ping the address (which also has a DNS name of tvh.lan) I get the following:

ping tvh.lan
PING tvh.lan (192.168.30.201) 56(84) bytes of data.
From tvhdocker.lan (192.168.30.130) icmp_seq=1 Destination Host Unreachable
From tvhdocker.lan (192.168.30.130) icmp_seq=2 Destination Host Unreachable
From tvhdocker.lan (192.168.30.130) icmp_seq=3 Destination Host Unreachable

Now, I believe the issue is that the container sits on the other subnet and the docker host has no route to it. The host tries to use its local private IP, which causes issues because the private IP addresses are not accessible.

Here is an extract of the relevant sections of my docker-compose.yml:
docker-compose.yml (extracts)

version: '3'

services:

  tvheadend:
    image: lscr.io/linuxserver/tvheadend:latest
    container_name: tvheadend
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - RUN_OPTS=
      #- RUN_OPTS=--bindaddress 192.168.5.201 --http_port 192.168.5.201:9981 --htsp_port 192.168.5.201:9982
      #- RUN_OPTS=--http_port 9983 --htsp_port 9984
    volumes:
      - /mnt/usbhdd/pvr/config:/config
      - /mnt/usbhdd/pvr/recording:/recording
      - /mnt/usbhdd/pvr/m3u:/m3u
      - /mnt/usbhdd/pvr/timeshift:/timeshift
      - /mnt/usbhdd/pvr/scripts:/scripts
    ports:
       - 9981:9981
       - 9982:9982
#      - 192.168.5.201:9981:9981
#      - 192.168.5.201:9982:9982
    networks:
       lanvpn:
          ipv4_address: "192.168.30.201"
    devices:
      - /dev/dvb:/dev/dvb
    privileged: true
    restart: unless-stopped

#networks:
#  proxy:
#    external: true

secrets:
  my_secret:
    file: ./secrets.yaml

# set trusted docker internal network
networks:
  default:
      ipam:  
        config: 
         - subnet: 192.168.0.0/24
  lanvpn:
    driver: macvlan
    driver_opts:
      parent: eth0.300
    ipam: 
      config:
        - subnet: 192.168.30.0/24
          gateway: 192.168.30.1
#  app-net:
#      ipam:  
#        config: 
#         - subnet: 192.168.40.0/24

You can see I am using macvlan to put the container on VLAN 300. If I try to connect to the container from a normal client on the LAN, say from 192.168.5.100, then it connects fine. I think the issue is that the host is trying to route via the local docker macvlan address 192.168.30.130 instead of via the gateway that connects the two networks.

Here is the route information from the docker host:

root@pvr:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         router.lan      0.0.0.0         UG    202    0        0 eth0
default         FriendlyWRT.lan 0.0.0.0         UG    484    0        0 eth0.300
link-local      0.0.0.0         255.255.0.0     U     312    0        0 veth87af8d1
link-local      0.0.0.0         255.255.0.0     U     316    0        0 veth5291032
link-local      0.0.0.0         255.255.0.0     U     318    0        0 vethe788551
link-local      0.0.0.0         255.255.0.0     U     321    0        0 veth6a07a3a
link-local      0.0.0.0         255.255.0.0     U     323    0        0 vethd79c34f
link-local      0.0.0.0         255.255.0.0     U     325    0        0 vethfd76b61
link-local      0.0.0.0         255.255.0.0     U     327    0        0 veth792529a
link-local      0.0.0.0         255.255.0.0     U     483    0        0 vethcefe2ce
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 br-2add0ff982d3
192.168.5.0     0.0.0.0         255.255.255.0   U     202    0        0 eth0
192.168.30.0    0.0.0.0         255.255.255.0   U     484    0        0 eth0.300

Now, when I add a static route on the docker host for that one container it works, but is that the correct thing to do? And will the route persist on reboot?

ip route add 192.168.30.201 via 192.168.5.1 dev eth0
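(As an aside: a route added with `ip route add` lives only in the kernel's runtime table and is gone after a reboot. One hedged way to persist it on a systemd host is a small oneshot unit; the unit name and binary path below are illustrative, and the route is the same one from above:)

```
# /etc/systemd/system/tvh-route.service (hypothetical unit name)
[Unit]
Description=Host route to the tvheadend macvlan address
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ip route replace 192.168.30.201/32 via 192.168.5.1 dev eth0
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

It would be enabled with `systemctl enable tvh-route.service`.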

How should the container be configured to be presented on the correct VLAN and subnet and be routed correctly from the docker host and all hosts on the local lan?


Please format your post according to the following guide: How to format your forum posts
In short: please use the </> button to share code, terminal output, error messages, or anything else that can contain special characters which would be interpreted by the Markdown filter. Use the preview feature to make sure your text is formatted as you expect, and check your post after you have sent it so you can still fix it.

Example code block:

```
services:
  service1:
    image: image1
```

Updated - sorry, I have now put the configs within quotes.

While at it: can you fix the compose file content as well, please? Indentation has semantics, and your pasted snippets have no indentation at all.

Hi Meyay - sorry, the paste feature in the forum tools is not great, and I only wanted to share some of the content. I will attempt to update it, but the issue should not be affected by the semantics of the compose file.

We still prefer valid YAML content, as it's cumbersome to read when it's not properly formatted, and poor formatting can hide other problems with the compose file content.

The forum search should have found a couple of useful topics about macvlan.

Most likely this is what causes the problem.

This solution should apply to your case as well.

Thank you Meyay

So in my setup, my tvheadend container has a macvlan interface of 192.168.30.201. Its DNS name on the network is tvh.lan. I would like the host to be able to communicate with the container like any other device on the 192.168.5.0/24 network, whereby communications are routed via my OpenWRT router which effectively connects the two networks. But the docker host does not have that route like all other hosts due to the existence of the macvlan interface and what appears to be an IP address allocated by DHCP of 192.168.30.130 on the host. If I look at ifconfig of the host I see this:

eth0.300: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.30.130 netmask 255.255.255.0 broadcast 192.168.30.255

The macvlan in docker-compose appears to have created a virtual interface on the host which has been allocated a DHCP address. When I try to ping 192.168.30.201 I get a reply back from 192.168.30.130 saying:

From 192.168.30.130 icmp_seq=1 Destination Host Unreachable

If I want to communicate with the container from the host directly, I can add a static route for that one host with:
ip route add 192.168.30.201 via 192.168.5.1 dev eth0

Then when I ping, I get a route and a reply:
ping 192.168.30.201
PING 192.168.30.201 (192.168.30.201) 56(84) bytes of data.
64 bytes from 192.168.30.201: icmp_seq=1 ttl=63 time=1.11 ms

Is the best option to make that static host route permanent and use the solution above?

Or should I follow the advice you have provided by adding a macvlan child interface (often referred to as a shim)? Is this creating a second virtual interface on the host which can be used to route and communicate with the containers on the 192.168.30.0/24 network? When I run the docker network create command, it fails with:
Error response from daemon: failed to allocate gateway (192.168.30.1): Address already in use
I presume this is because eth0.300 in the docker-compose is already using that gateway?

One reflection: to make it work, don't I just need to take step 4 shown below, which is to create the route?
4. Add a route to the macvlan ip-range using the shim interface

Your feedback would be really welcome, along with the best way of making the necessary changes permanent using docker compose, and of making the route permanent, presumably with NetworkManager? Please could you check my overall config and advise on any changes I should make to docker-compose.yml, and which interface and routing commands I should run?

NETWORK_CIDR=192.168.30.0/24
IP_RANGE_CIDR=192.168.30.32/27 #outside of DHCP range (33 - 62)
GATEWAY_IP=192.168.30.1
PARENT_INTERFACE_NAME=eth0

docker network create -d macvlan \
  --subnet=${NETWORK_CIDR} \
  --ip-range=${IP_RANGE_CIDR} \
  --gateway=${GATEWAY_IP} \
  --aux-address="${HOSTNAME}=${IP_RANGE_CIDR%/*}" \
  -o parent=${PARENT_INTERFACE_NAME} mymacvlan

ip link add macvlan-shim link ${PARENT_INTERFACE_NAME} type macvlan mode bridge

ip addr add "${IP_RANGE_CIDR%/*}/32" dev macvlan-shim
ip link set macvlan-shim up

ip route add ${IP_RANGE_CIDR} dev macvlan-shim

What it does:

  1. Create the macvlan network, excluding the IP address that will be used for the shim interface.
  2. Create a macvlan child interface.
  3. Assign an IP to the macvlan child interface and bring it up.
  4. Add a route to the macvlan ip-range using the shim interface.
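Two details of the recipe above can be checked in plain POSIX shell with the values from this thread: `${IP_RANGE_CIDR%/*}` strips everything from the last `/`, and a /27 covers 32 addresses.

```shell
# 1) ${VAR%/*} removes the shortest suffix matching '/*', i.e. the prefix
#    length, leaving the bare address that the recipe assigns to the shim.
IP_RANGE_CIDR=192.168.30.32/27
SHIM_IP="${IP_RANGE_CIDR%/*}"
echo "shim ip: ${SHIM_IP}"            # prints: shim ip: 192.168.30.32

# 2) A /27 spans 2^(32-27) = 32 addresses, so 192.168.30.32/27 covers
#    192.168.30.32 through 192.168.30.63 (.33-.62 kept out of DHCP above).
prefix=27
first=32
size=$((1 << (32 - prefix)))
last=$((first + size - 1))
echo "range: .${first}-.${last} (${size} addresses)"   # prints: range: .32-.63 (32 addresses)
```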

I wasn't aware that it already creates a macvlan child interface on the host, so it is highly likely you don't need to add another interface; the route alone should be just fine.

Did docker create a network manager configuration for the device, or does it create it on the fly at every boot?
If it created a network manager configuration, you could configure the route to the ip-range there.

If not, you could try creating a crontab entry that executes the route add command, though you could suffer from a race condition if the interface is indeed created on the fly but doesn't exist yet when the cronjob is executed. You could make it wait until the interface is available.

Hi Meyay

I'll create the route. The network is created on the fly by docker-compose, so I'll need some way of creating the route and then removing it when the network is no longer needed.

I have no idea how this needs to be solved when the macvlan network is created dynamically.

Now that you know that the “problem” is actually an expected behavior, and know what is required to work around the limitation, you can research on how to implement it.
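For the dynamic case, one possible shape is a small wrapper script around docker compose that adds the route right after `up` and removes it before `down`. This is only a sketch using the addresses from this thread, not a tested solution; `DRY_RUN=1` makes it print the commands instead of executing them, so the logic can be exercised without Docker or root:

```shell
#!/bin/sh
# Hypothetical wrapper around docker compose that manages the host route
# to the container's macvlan address (values taken from this thread).
CONTAINER_IP=192.168.30.201
GATEWAY=192.168.5.1

run() {
  # With DRY_RUN=1 the command is echoed instead of executed.
  if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi
}

stack() {
  case "$1" in
    up)
      run docker compose up -d
      run ip route replace "${CONTAINER_IP}/32" via "${GATEWAY}"
      ;;
    down)
      run ip route del "${CONTAINER_IP}/32" via "${GATEWAY}"
      run docker compose down
      ;;
  esac
}
```

In a real script you would finish with a line like `stack "$1"` and run it as `./stack.sh up` / `./stack.sh down`.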

Though maybe there is an easier solution: you could try whether adding a bridge to the host interface, and using the bridge interface as the parent interface for macvlan, bypasses the restriction. It's just an idea worth trying. Google should find plenty of posts about how to create a persistent bridge interface on your OS.

OK, so the simple solution is to create a new local bridge network for the container, effectively connecting the container to both its existing macvlan network and the new bridge network so it has an IP address in both networks (dual-homed), as follows:

Change docker-compose.yml for the tvheadend service, adding a new bridge network host-comms:

    networks:
       lanvpn:
          ipv4_address: "192.168.30.201"
       host-comms:
          ipv4_address: "172.18.0.2"

Then add a host entry for tvh and tvh.lan in /etc/hosts

172.18.0.2      tvh tvh.lan

You can now ping and connect to tvh.lan, tvh, or its IP 172.18.0.2 from the host that runs the docker containers.

That was actually not what I had in mind. I was referring to a network bridge on os level using the network manager, not on docker level.

I am glad you found a different solution.

I've found it actually does not work, as my traffic from 192.168.30.201 is leaking out to the internet via 192.168.5.1.

Back to the ip route option, then, to see if I can put it in an ifup or ifdown event to add the route each time.
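That idea can be sketched as a Debian-style ifupdown hook (ifupdown exports `$IFACE` to scripts under `/etc/network/if-up.d/`). The file name is hypothetical and the route is the same via-the-router route used earlier in the thread; `DRY_RUN=1` prints the command instead of running it, so the guard logic can be tested without root:

```shell
#!/bin/sh
# Hypothetical /etc/network/if-up.d/tvh-route: when eth0 comes up, (re)add
# the host route to the container's macvlan address via the router.

add_tvh_route() {
  # Only act for the interface that carries the 192.168.5.0/24 network.
  [ "$1" = "eth0" ] || return 0
  if [ "${DRY_RUN:-0}" = 1 ]; then
    echo "ip route replace 192.168.30.201/32 via 192.168.5.1 dev eth0"
  else
    ip route replace 192.168.30.201/32 via 192.168.5.1 dev eth0
  fi
}

add_tvh_route "${IFACE:-}"
```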