I am running a dedicated host with two public IPv4 addresses assigned to it by my provider. These IP addresses are assigned on eth0:1. This is what it looks like:
$ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d0:50:99:df:bf:e2 brd ff:ff:ff:ff:ff:ff
    inet 184.108.40.206/32 scope global dynamic eth0
       valid_lft 42570sec preferred_lft 42570sec
    inet 220.127.116.11/32 brd 18.104.22.168 scope global eth0:1
       valid_lft forever preferred_lft forever
    inet6 fe80::d250:99ff:fedf:bfe2/64 scope link
       valid_lft forever preferred_lft forever
I have four containers, split into two groups: a* containers and b* containers.
What I would like to do is route all traffic from the a* containers through eth0, and all traffic from the b* containers through eth0:1.
The best solution I have come up with so far is to bring the containers up with -p ip:port:port. The immediate problem with this is that all outbound traffic carries the IP address assigned to eth0, even when the traffic comes from a b* container. I need both inbound and outbound traffic to use the same IP.
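To illustrate the inbound side, this is roughly how I publish ports, binding each group to its own address (container names, images, and ports here are placeholders; the two IPs are the ones on eth0 and eth0:1):

```shell
# a* containers: publish ports on the eth0 address
docker run -d --name a1 -p 184.108.40.206:8080:80 nginx

# b* containers: publish ports on the eth0:1 address
docker run -d --name b1 -p 220.127.116.11:8080:80 nginx
```

This handles inbound connections correctly, but says nothing about which source address outbound connections use.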
I addressed this problem by creating a bridge network for the b* containers, then adding a rule to the POSTROUTING chain in iptables to rewrite the to-source for all packets originating from that Docker network.
Chain POSTROUTING (policy ACCEPT)
target     prot opt source          destination
SNAT       all  --  172.19.0.0/16   anywhere        to:22.214.171.124
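For completeness, this is a sketch of the commands that produce that setup; the network name is a placeholder, and the subnet and to-source address are taken from the NAT table above:

```shell
# user-defined bridge network for the b* containers, with a known subnet
docker network create --subnet 172.19.0.0/16 bnet

# rewrite the source address of all outbound traffic from that subnet
iptables -t nat -A POSTROUTING -s 172.19.0.0/16 -j SNAT --to-source 22.214.171.124
```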
I can live with this solution, but I'd like to find out if there is a better way to do it. I have tried messing with macvlan and ipvlan, but my WAN IP addresses are /32s, so I don't think that will work.
In an ideal world, I would be able to create a network that behaves like --network=host but binds to eth0:1. I don't think that is possible right now.
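As a quick sanity check of any approach, the source address the outside world sees can be verified from inside a container by hitting an IP-echo service (bnet is the hypothetical network name for the b* group; any echo service works):

```shell
# should print the eth0:1 address if SNAT is working for the b* network
docker run --rm --network bnet curlimages/curl -s https://ifconfig.me
```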