Route network traffic through a secondary network interface for a subset of containers

I am running a dedicated host with two public IPv4 addresses assigned to it by my provider. The addresses are configured on eth0 and the alias interface eth0:1. This is what it looks like:

$ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d0:50:99:df:bf:e2 brd ff:ff:ff:ff:ff:ff
    inet 51.208.37.104/32 scope global dynamic eth0
       valid_lft 42570sec preferred_lft 42570sec
    inet 23.169.32.186/32 brd 23.169.32.186 scope global eth0:1
       valid_lft forever preferred_lft forever
    inet6 fe80::d250:99ff:fedf:bfe2/64 scope link
       valid_lft forever preferred_lft forever

I have four containers:

  1. a
  2. a_exporter
  3. b
  4. b_exporter

What I would like to do is route all traffic from a* containers through eth0, and all traffic from b* containers through eth0:1.

The best solution I have come up with so far is to bring the containers up with -p ip:port:port. The immediate problem with this is that all outbound traffic leaves with the IP address assigned to eth0, even when it originates from a b* container. I need inbound and outbound traffic for each container to use the same IP.
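For illustration, this is roughly how I bring up a b* container (the image name and port are placeholders):

$ docker run -d --name b -p 23.169.32.186:8080:8080 b-image

Inbound connections to 23.169.32.186:8080 reach the container as expected, but any connection the container itself initiates still leaves from 51.208.37.104.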

I addressed this problem by creating a bridge network for the b* containers, then adding a rule to the POSTROUTING chain of the iptables nat table that rewrites the source address of all packets originating from that Docker network.
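Sketched out, with a hypothetical network name and the port placeholder from above:

$ docker network create --subnet 172.19.0.0/16 b_net
$ docker run -d --name b --network b_net -p 23.169.32.186:8080:8080 b-image
$ iptables -t nat -A POSTROUTING -s 172.19.0.0/16 -j SNAT --to-source 23.169.32.186

The POSTROUTING chain in the nat table then contains: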

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
SNAT       all  --  172.19.0.0/16        anywhere            to:23.169.32.186

I can live with this solution, but is there a better way to do it? I have tried messing with macvlan and ipvlan, but eth0 and eth0:1 have /32 WAN IP addresses, so I don't think that approach will work.
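For reference, a macvlan network would be created with something like this (the subnet and gateway here are invented to show the shape of the command; it needs an on-link subnet, which is exactly what a /32 doesn't give me):

$ docker network create -d macvlan \
      --subnet 23.169.32.0/24 \
      --gateway 23.169.32.1 \
      -o parent=eth0 b_macvlan

Without a real subnet behind eth0:1, containers on such a network would have no reachable gateway.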

In an ideal world, I would be able to create a network that behaves like --network=host but binds only to eth0:1; as far as I know, that is not possible right now.