Using A Second NIC Exclusively For Docker Services

OS: Debian Linux 11

I would like to use a separate network interface card's port to separate the server's normal operations from a Docker container's service. Let's say the NIC has two ports: the default one, eth0, which has always been used by the server, and eth1, which has never had an Ethernet cable plugged into it. I would like to leave eth0 to the rest of the computer's services and use eth1 exclusively for a single Docker service.

I have already set up a routing table:

ip rule show
0:      from all lookup local
32764:  from all to 192.168.50.2 lookup rt2
32765:  from 192.168.50.2 lookup rt2
32766:  from all lookup main
32767:  from all lookup default

ip route show
default via 192.168.50.1 dev eth1 onlink 
default via 192.168.1.1 dev eth0 proto dhcp metric 100 
169.254.0.0/16 dev eth0 scope link metric 1000 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
172.18.0.0/16 dev br-edef11942564 proto kernel scope link src 172.18.0.1 linkdown 
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.250 metric 100 
192.168.50.0/30 dev eth1 proto kernel scope link src 192.168.50.2
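
For reference, rt2 was set up with the usual two-gateway recipe from the Thomas-Krenn guide linked below - roughly the following, though I'm reconstructing the exact commands from memory:

# /etc/iproute2/rt_tables needs an entry for the table, e.g. "1 rt2"
ip route add 192.168.50.0/30 dev eth1 src 192.168.50.2 table rt2
ip route add default via 192.168.50.1 dev eth1 table rt2
# send traffic from/to the eth1 address through rt2 (producing the rules shown above)
ip rule add from 192.168.50.2/32 table rt2
ip rule add to 192.168.50.2/32 table rt2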

I use docker-compose.yml for my Docker containers. I just need to figure out how to tell Docker to route all of its traffic through eth1 instead of eth0, and make sure the rest of the server doesn't try to use eth1 for anything.

I’ve read a little bit about Docker network gateways, but no one has shown any examples using docker-compose.yml and I haven’t been able to put together a working plan. If anyone has any helpful insight, I would love to hear it.

Information I’ve gathered:

https://www.thomas-krenn.com/en/wiki/Two_Default_Gateways_on_One_System
https://unix.stackexchange.com/questions/166551/add-network-eth-with-a-separate-gateway
https://forums.docker.com/t/how-to-set-network-in-docker-compose-yml/121500
https://stewartadam.io/blog/2019/04/04/routing-packets-specific-docker-container-through-specific-outgoing-interface
https://forums.docker.com/t/setting-default-gateway-to-a-container/17420
https://serverfault.com/questions/696747/routing-from-docker-containers-using-a-different-physical-network-interface-and

Afaik, by default Docker will use the NIC that has the default gateway on it. You might want to check whether ipvlan or macvlan can help you with what you want to achieve. Both allow you to specify a specific parent interface.

I am not aware whether bridge or overlay networks allow configuring the parent interface as well - from what I remember they don’t allow that setting, but I might be wrong.
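
For reference, the CLI equivalent of a macvlan network with a parent interface looks something like this (the network name, subnet and gateway are placeholders you would adapt to your setup):

# create a macvlan network attached to the second NIC
docker network create -d macvlan \
  --subnet=192.168.50.0/24 \
  --gateway=192.168.50.1 \
  -o parent=eth1 \
  docker_rt2_vlan50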

I tried the following, but I couldn’t reach the web service from a browser on the host machine or from an outside client. I was using the IP address 192.168.50.2 with the correct port, which should be running the service - unless macvlan works differently; I just don’t know anything about it.

version: "3"
services:
  random:
    networks:
      docker_rt2_vlan50:
networks:
  docker_rt2_vlan50:
    driver: macvlan
    driver_opts:
      parent: eth1

When I change it to the following, it is still not reachable at 192.168.50.2 as expected.

version: "2" # Required For IPAM
services:
  random:
    networks:
      docker_rt2_vlan50:
        ipv4_address: 192.168.50.3
networks:
  docker_rt2_vlan50:
    driver: macvlan
    driver_opts:
      parent: eth1
    ipam:
      config:
        - subnet: "192.168.50.0/30"
          gateway: "192.168.50.1"

Does the ipv4_address have to attach to my layer 3 switch? If that’s the case, then I won’t be able to use a /30. I don’t know what the settings need to be, except:

The subnet & gateway values need to match those of the Docker host network interface. Simply put, the subnet and default gateway for your macvlan network should mirror that of your Docker host.

In earlier versions of the Docker macvlan documentation, it was pointed out that a macvlan parent interface is not able to communicate with its child interfaces and vice versa. I guess they removed it again, as this is rather a kernel security restriction and not a restriction caused by Docker. I feel it would still be worth a note or info block.

People usually bypass this limitation by adding another macvlan child interface to the host. Though, if a container needs to access the host, it needs to use the host’s macvlan child interface.

Check “Using Docker macvlan networks” on The Odd Bit, section “host access”.
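
The workaround from that article looks roughly like this (a sketch with placeholder names and addresses, assuming a /24 subnet where a spare host address exists):

# add a macvlan child interface on the host, on the same parent as the Docker network
ip link add macvlan-shim link eth1 type macvlan mode bridge
# give it a spare address from the macvlan subnet, unused by any container
ip addr add 192.168.50.10/24 dev macvlan-shim
ip link set macvlan-shim up
# route traffic for the container's address through the shim instead of the parent
ip route add 192.168.50.3/32 dev macvlan-shim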

I am not sure if a macvlan subnet can have a 30-bit subnet mask - I always used larger subnet CIDRs. (Note that a /30 only has two usable host addresses, .1 and .2 here, so 192.168.50.3 would actually be the broadcast address.) Keep in mind that you can only create a single macvlan network that points to the same gateway IP - choose wisely!

Though, you can configure the full range of your subnet and use ip_range to make the macvlan network only use that particular range.

When I was playing around with macvlan, I always used the full subnet CIDR, but specified an ip_range CIDR within that subnet for the Docker network to use. Make sure the ip_range is outside of any DHCP server’s range:

networks:
  ..
    .. 
    ipam:
      config:
      - subnet: 192.168.50.0/24
        gateway: 192.168.50.1
        ip_range: 192.168.50.0/30

Update: even though I am not sure what the minimum allowed subnet size is, you can set the ip_range CIDR to a 32-bit mask if wanted.

I was able to achieve success by changing my subnet to a /24 everywhere in my firewall and switch, and then simply excluding those addresses from the DHCP server’s pool, essentially turning it into a /30 again. Thankfully this is a local network service and not a production environment.

Apparently the subnet in my Docker config had to match the host subnet it received from my DHCP server; otherwise Docker kept throwing an error that the address was out of range. So simply telling Docker that the subnet was a /24 when it was really a /30 on the host wouldn’t work. I also used a /32 on my ip_range so the container kept the same static IP address.
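
In case it helps anyone later, the working compose file ended up looking roughly like this (the image and exact addresses are placeholders for my actual service):

version: "2" # Required For IPAM
services:
  random:
    image: nginx # placeholder - substitute the actual service image
    networks:
      docker_rt2_vlan50:
        ipv4_address: 192.168.50.3
networks:
  docker_rt2_vlan50:
    driver: macvlan
    driver_opts:
      parent: eth1 # the second NIC, reserved for this service
    ipam:
      config:
        - subnet: "192.168.50.0/24" # must match the subnet on the host interface
          gateway: "192.168.50.1"
          ip_range: "192.168.50.3/32" # pin the container to a single static address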

It is working right now, thank you for your help. I appreciate it.