Exposed ports with net=container:

Hi,

I’m using Docker version 20.10.18, build b40c2f6, on a Debian desktop.

I have a container creating a vpn connection, started like this:

docker run -it --name vpn -h vpn --cap-add=NET_ADMIN --device /dev/net/tun vpnclient

This first container gets IP 172.17.0.2.

I then have a second container, started with this:

docker run -it --net container:vpn --name app <app-image>

Very simple so far, but my app also needs to expose port 8080.

I tried adding -p 8080:8080 to the second container, but Docker doesn’t allow publishing ports together with --net container:.

I noticed, however, that I can browse to the app from a browser on the host, at http://172.17.0.2:8080

So how can I make it work from the LAN too? iptables forwarding, maybe?
Can anyone help?
Thanks.

The published ports need to be declared on the vpn container. The app container does not control the network namespace of the vpn container; it just attaches itself to it.
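If you ever move this to compose, the same layout would look roughly like this (a sketch; app-image stands in for your actual app image):

services:
  vpn:
    image: vpnclient
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    ports:
      - "8080:8080"            # published on the container that owns the network namespace
  app:
    image: app-image           # placeholder for your app image
    network_mode: "service:vpn"
    depends_on:
      - vpn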

Thanks. I tried starting the vpn container with

docker run -it --name vpn -h vpn -p 8080:8080 --cap-add=NET_ADMIN --device /dev/net/tun vpnclient

but now I get these responses:

curl -m 3 -I 'http://172.17.0.2:8080' -> OK
curl -m 3 -I 'http://127.0.0.1:8080' -> OK
curl -m 3 -I 'http://192.168.0.4:8080' -> TIMEOUT

The port is definitely listening, though:

ss -nlapt | grep 8080 | grep LISTEN
LISTEN     0      4096                 0.0.0.0:8080                  0.0.0.0:*     users:(("docker-proxy",pid=114523,fd=4))
LISTEN     0      4096                    [::]:8080                     [::]:*     users:(("docker-proxy",pid=114531,fd=4))

Hmm, that doesn’t make sense to me. Are you sure there is no firewall configured that blocks the traffic?
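To rule that out, you could check on the host with something like this (assuming plain iptables; the ufw check only applies if it is installed):

sudo iptables -L DOCKER -n -v    # Docker's ACCEPT rules for published ports
sudo iptables -L FORWARD -n -v   # anything dropping forwarded traffic before the DOCKER chain
sudo ufw status verbose          # host firewall frontend, if present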

The way you start the vpn container is correct. You should be able to reach container port 8080 through host port 8080, regardless of which of the two containers the actual process runs in.

I found the problem!

Yes, the port needs to be published on the vpn container, but there’s also another change to make.

The VPN inside the container sees 172.17.0.0/16 as its local LAN, so traffic for 192.168.0.0/24 is pushed over the VPN, and that’s the problem.
After I added route add -net 192.168.0.0/24 dev eth0 in the vpn container, everything works.
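For reference, the iproute2 equivalent of that net-tools command is:

ip route add 192.168.0.0/24 dev eth0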

Thanks

I assume whether an additional route must be set will depend on the image and/or the VPN client software inside the container.

In order to be of value for other users, we need to know the exact image you used.

Any VPN that pushes a default gateway would create the same issue, as the LAN subnet outside the container would not be known to the VPN client.
If we look at the routing table inside the vpn container, we see that OpenVPN creates two routes: a new default gateway at the top, and an exception for the VPN remote.
That’s why route add -net 192.168.0.0/24 dev eth0 works: it adds another exception to the standard OpenVPN routing table, so traffic for your LAN outside the container is not sent over the tunnel.

# ip r
0.0.0.0/1 via 10.10.10.1 dev tun0
default via 172.17.0.1 dev eth0
104.29.234.12 via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2
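If you control the client config, OpenVPN can also add that exception itself. A sketch, assuming the LAN outside the container is 192.168.0.0/24:

# in client.ovpn: keep the LAN on the pre-existing (non-VPN) gateway
route 192.168.0.0 255.255.255.0 net_gateway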

You can replicate this using Alpine as the VPN client with this Dockerfile:

FROM alpine
RUN apk --no-cache --no-progress upgrade
RUN apk --no-cache --no-progress add openvpn
CMD ["/usr/sbin/openvpn", "--config", "/config/client.ovpn"]

Any generic OpenVPN configuration that pushes a default gateway will do.
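For completeness, I build and run it along these lines (the -v mount is an assumption about where /config/client.ovpn comes from):

docker build -t vpnclient .
docker run -it --name vpn -h vpn -p 8080:8080 --cap-add=NET_ADMIN --device /dev/net/tun -v "$PWD/config":/config vpnclient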

I know plenty of people who use the bubuntux/nordvpn image, and that container takes care of this without them having to add routes manually. This is why I still believe it depends on the image and/or the VPN client software inside the container.

Anyway, thank you for sharing your solution and explaining it in detail!