Docker Community Forums


Route FROM one container THROUGH another container

Goal: Change the default routing
FROM: "client container" ==> "host"
TO: "client container" ==> "router container"
such that all off-subnet packets egress the "client container" through the "router container".

Topology:

client container (172.18.0.5) => router container (172.18.0.1)

The host is macOS.

Docker network setup:
docker network create --subnet=172.18.0.0/24 --gateway=172.18.0.2 client_net
docker network create --subnet=172.18.2.0/24 mgmt_net
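A quick way to confirm the subnets and gateways took effect (a sketch, assuming the network names from the commands above) is to query just the IPAM config:

```shell
# Print only the IPAM config of each network created above
docker network inspect -f '{{json .IPAM.Config}}' client_net
docker network inspect -f '{{json .IPAM.Config}}' mgmt_net
```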

Start the client

docker run --cap-add=NET_ADMIN --net client_net --privileged --ip 172.18.0.5 -it simple_client bash

Start the router

docker run --cap-add=NET_ADMIN --net mgmt_net --privileged --ip 172.18.2.2 -it router bash

Attach the client_net to the router

docker network connect --ip 172.18.0.1 client_net <container_id>

Modify the client routing table:
ip route del default
ip route add default via 172.18.0.1
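To sanity-check the new default route from inside the client, `ip route get` asks the kernel which next hop it would pick for a destination, without sending any traffic (a sketch; run inside the client container):

```shell
# Inside the client container: replace the default route, then ask the
# kernel which next hop it would choose for an off-subnet destination.
ip route del default
ip route add default via 172.18.0.1
ip route get 8.8.8.8   # expect: "8.8.8.8 via 172.18.0.1 dev eth0 ..."
```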

root@1cba98ef4ab9:/home/simpleclient# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.18.0.1      0.0.0.0         UG    0      0        0 eth0
172.18.0.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0

Where it appears to be breaking:

tcpdump on the client container shows packets leaving the correct interface:

22:56:10.334210 02:42:ac:12:00:05 > 02:42:ac:12:00:01, ethertype IPv4 (0x0800), length 98: 172.18.0.5 > 8.8.8.8: ICMP echo request, id 30, seq 80, length 64

However, tcpdump on the router container shows only the ARP exchange, not the ICMP packets:

root@b331aca628e0:/home/router# tcpdump -n -i eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
23:07:17.860832 ARP, Request who-has 172.18.0.1 tell 172.18.0.5, length 28
23:07:17.860893 ARP, Reply 172.18.0.1 is-at 02:42:ac:12:00:01, length 28

I would expect to see the packets (even if the router container would just drop them, since there's no bridging set up yet).

Client Container ifconfig:
client# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.5 netmask 255.255.255.0 broadcast 172.18.0.255
ether 02:42:ac:12:00:05 txqueuelen 0 (Ethernet)
RX packets 21 bytes 1306 (1.3 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 224 bytes 21448 (21.4 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Router Container ifconfig:
root@86706ced3bba:/home/router# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.2.2 netmask 255.255.255.0 broadcast 172.18.2.255
ether 02:42:ac:12:02:02 txqueuelen 0 (Ethernet)
RX packets 18 bytes 1388 (1.3 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.255.0 broadcast 172.18.0.255
ether 02:42:ac:12:00:01 txqueuelen 0 (Ethernet)
RX packets 82 bytes 4204 (4.2 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 65 bytes 2954 (2.9 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.1.1 netmask 255.255.255.0 broadcast 172.18.1.255
ether 02:42:ac:12:01:01 txqueuelen 0 (Ethernet)
RX packets 19 bytes 1418 (1.4 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Docker network inspect:
mbp$ docker network inspect client_net
[
    {
        "Name": "client_net",
        "Id": "b0cf58c9f5c3bc0d700673fb83f9d64659460d605cb211890b19796cc36745f2",
        "Created": "2019-04-17T21:22:00.307015519Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/24",
                    "Gateway": "172.18.0.2"
                }
            ]
        },
        "Internal": true,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "1cba98ef4ab923fd4cd65aa8ba6eb35edb906befe8a6f9d995175692c95bf22e": {
                "Name": "objective_heisenberg",
                "EndpointID": "1559c619c9d902baa3432037676c07523a11202c1b1e3a2d8fb6174ee135d708",
                "MacAddress": "02:42:ac:12:00:05",
                "IPv4Address": "172.18.0.5/24",
                "IPv6Address": ""
            },
            "b331aca628e0a8d498283ecffceb6d1373932cb2da2ed604c0a0abfaad991739": {
                "Name": "lucid_goldwasser",
                "EndpointID": "6325e1d705fd937fb4f8aaab12a71a7f12a45f18f8fd6402615e43c9ff4875df",
                "MacAddress": "02:42:ac:12:00:01",
                "IPv4Address": "172.18.0.1/24",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Figured it out. Mostly it was due to the custom network creation. So, to get the above working:

  1. Create your networks with ICC enabled, to allow container-to-container communication (the documented bridge-driver option key is com.docker.network.bridge.enable_icc). For example:

docker network create --subnet=172.18.0.0/24 --gateway=172.18.0.2 --driver=bridge --opt icc=true client_net
docker network create --subnet=172.18.1.0/24 --gateway=172.18.1.2 --driver=bridge --opt icc=true server_net

  2. No need for bridge interfaces; just enable IP forwarding in the router container. For example:
    sysctl -w net.ipv4.ip_forward=1
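Putting the router-side fix together, a minimal sketch (run inside the router container, which needs NET_ADMIN or --privileged). The MASQUERADE rule is my addition, not from the original post: it is only needed if the router should NAT client traffic out its own uplink, and eth0 is assumed to be that uplink interface:

```shell
# Inside the router container: forward packets between interfaces
sysctl -w net.ipv4.ip_forward=1

# Optional NAT for traffic leaving via the router's uplink
# (assumption: eth0 is the egress interface)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```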