I’ve got a linux docker host with two NICs eth0 and eth1, each in their own VLANs, with single IPv4 addresses.
The goal is to isolate all in/out traffic for all or specific containers to eth1.
Would it be possible to put a Docker bridge interface in a network namespace other than the default?
For example, with eth0 and eth1 in separate network namespaces (eth0 in the default one), I would want all, or some, Docker bridges in the eth1 namespace.
Stumbling down the road of policy-based routing, I was able to fix various issues, but each fix brought to light another issue or hole in the isolation that needed yet another configuration change:
eth1 address responds to incoming traffic through eth0 → create iproute2 table with routes and rules for eth1 with its own gateway
Containers with ports published to the eth1 address never receive incoming traffic, so they never respond → add iproute2 route and rule for container RFC1918 space in the eth1 table and rule set
Containers use eth1 for outgoing traffic by default, but can still reach hosts in the eth0 VLAN → recreated the container network with the option “com.docker.network.bridge.enable_ip_masquerade=false” and manually inserted an iptables rule to SNAT the containers to the eth1 address. This didn’t prevent access to the eth0 VLAN, and it caused an issue that required flushing the ARP tables, otherwise traffic would stall somewhere in the mess.
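For reference, here is roughly what the policy-based routing configuration above looks like. All addresses, the table number/name, and the bridge name are placeholders for my actual values (eth1 = 192.168.2.10/24 with gateway 192.168.2.1, container subnet 172.20.0.0/16, bridge docker1):

```shell
# Step 1: dedicated routing table so traffic to/from the eth1 address
# actually goes out eth1, not via the default route on eth0
echo "100 eth1table" >> /etc/iproute2/rt_tables
ip route add 192.168.2.0/24 dev eth1 table eth1table
ip route add default via 192.168.2.1 dev eth1 table eth1table
ip rule add from 192.168.2.10 lookup eth1table

# Step 2: make replies to published ports reach the containers by adding
# the container RFC1918 subnet to the eth1 table and rule set
ip route add 172.20.0.0/16 dev docker1 table eth1table
ip rule add to 172.20.0.0/16 lookup eth1table

# Step 3: bridge network without Docker's own masquerade, then SNAT the
# containers to the eth1 address manually
docker network create \
  -o com.docker.network.bridge.enable_ip_masquerade=false \
  -o com.docker.network.bridge.name=docker1 \
  --subnet 172.20.0.0/16 eth1net
iptables -t nat -A POSTROUTING -s 172.20.0.0/16 -o eth1 \
  -j SNAT --to-source 192.168.2.10
```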
The increasingly complex configuration with policy-based routing is turning me off that path and towards network namespaces.
I can place eth1 in its own network namespace allowing it to easily have its own routing table and iptables chains, completely isolated from eth0. The only thing I’m struggling to do is place all, or some of the docker bridge interfaces in the non-default network namespace, with eth1.
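The eth1 part is straightforward; a sketch of what I have working (namespace name and addresses are placeholders):

```shell
# Move eth1 into its own namespace; it loses its address in the move,
# so re-add the address, bring it up, and give the namespace its own
# default route via the eth1 VLAN's gateway
ip netns add isolated
ip link set eth1 netns isolated
ip -n isolated addr add 192.168.2.10/24 dev eth1
ip -n isolated link set eth1 up
ip -n isolated route add default via 192.168.2.1
```

The open question is the other half: dockerd creates its bridges in the default namespace, and I haven't found a supported way to make it create them (or move them) into the eth1 namespace.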
If this is the wrong place to ask this, where might be the place?
Thanks for any advice!
Red Hat Enterprise Linux Server release 7.9
Docker version 20.10.2, build 2291f61