[SOLVED] Incoming network traffic not forwarding to container

Hello, I’m having an issue where my container fails to connect to an external PPTP server when using the bridge network. The issue seems to be that GRE packets are not being forwarded to the container.

If I use --net=host, the container successfully connects.

Also, on my laptop (using the same version of docker), the container works with the bridge network.

My host is Debian Stretch.

Client:
 Version:      17.09.1-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   19e2cf6
 Built:        Thu Dec  7 22:24:16 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.09.1-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   19e2cf6
 Built:        Thu Dec  7 22:22:56 2017
 OS/Arch:      linux/amd64
 Experimental: false

When I run the container using the bridge network, the container does have an internet connection. For instance, watching tcpdump while running “ping -c1 google.com” yields:

# tcpdump -i $(ip addr | perl -n -e'/(veth.*)@/ && print $1')
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on veth33d584d, link-type EN10MB (Ethernet), capture size 262144 bytes
17:35:26.120608 IP 172.17.0.2.58358 > google-public-dns-a.google.com.domain: 43748+ A? google.com. (28)
17:35:26.120628 IP 172.17.0.2.58358 > google-public-dns-a.google.com.domain: 44095+ AAAA? google.com. (28)
17:35:26.142104 IP google-public-dns-a.google.com.domain > 172.17.0.2.58358: 43748 1/0/0 A 216.58.195.78 (44)
17:35:26.145277 IP google-public-dns-a.google.com.domain > 172.17.0.2.58358: 44095 1/0/0 AAAA 2607:f8b0:4005:808::200e (56)
17:35:26.145887 IP 172.17.0.2 > sfo07s16-in-f14.1e100.net: ICMP echo request, id 2816, seq 0, length 64
17:35:26.147185 IP sfo07s16-in-f14.1e100.net > 172.17.0.2: ICMP echo reply, id 2816, seq 0, length 64
17:35:31.277512 ARP, Request who-has 172.17.0.2 tell 172.17.0.1, length 28
17:35:31.277594 ARP, Request who-has 172.17.0.1 tell 172.17.0.2, length 28
17:35:31.277609 ARP, Reply 172.17.0.1 is-at 02:42:ce:84:90:11 (oui Unknown), length 28
17:35:31.277611 ARP, Reply 172.17.0.2 is-at 02:42:ac:11:00:02 (oui Unknown), length 28
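For reference, the perl one-liner above grabs the first veth it finds (fine here, since only one container is running). An equivalent sed sketch that lists every veth (the function name is just for illustration):

```shell
# List every veth interface name from `ip addr` output.
# Equivalent to the perl one-liner above, but prints all veths,
# not just the first match.
list_veths() {
  sed -n 's/^[0-9]*: \(veth[^@]*\)@.*/\1/p'
}
```

Usage: `tcpdump -i "$(ip addr | list_veths | head -n1)"`.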

You can see that traffic is routed to and from the container correctly here. When I try to connect to the PPTP server, however, I only see one-way traffic like this:

# tcpdump -i $(ip addr | perl -n -e'/(veth.*)@/ && print $1')
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on veth9d23755, link-type EN10MB (Ethernet), capture size 262144 bytes
17:38:52.994815 IP 172.17.0.2 > [EXTERNAL SERVER]: GREv1, call 1377, seq 7, length 36: LCP, Conf-Request (0x01), id 1, length 22
17:38:55.998040 IP 172.17.0.2 > [EXTERNAL SERVER]: GREv1, call 1377, seq 8, length 36: LCP, Conf-Request (0x01), id 1, length 22
17:38:59.001220 IP 172.17.0.2 > [EXTERNAL SERVER]: GREv1, call 1377, seq 9, length 36: LCP, Conf-Request (0x01), id 1, length 22

Watching the host device, though, during the same process shows that the host is receiving responses from the server:

# tcpdump -i eth0 proto GRE
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
17:42:57.097051 IP [HOST MACHINE] > [EXTERNAL SERVER]: GREv1, call 7561, seq 1, length 36: LCP, Conf-Request (0x01), id 1, length 22 
17:42:57.129178 IP [EXTERNAL SERVER] > [HOST MACHINE]: GREv1, call 63861, seq 0, ack 1, length 72: LCP, Conf-Request (0x01), id 0, length 54 
17:42:57.129257 IP [EXTERNAL SERVER] > [HOST MACHINE]: GREv1, call 63861, seq 1, length 36: LCP, Conf-Ack (0x02), id 1, length 22 
17:42:59.127461 IP [EXTERNAL SERVER] > [HOST MACHINE]: GREv1, call 63861, seq 2, length 68: LCP, Conf-Request (0x01), id 1, length 54 
17:42:59.982903 IP [HOST MACHINE] > [EXTERNAL SERVER]: GREv1, call 7561, seq 2, length 36: LCP, Conf-Request (0x01), id 1, length 22 
17:43:00.014645 IP [EXTERNAL SERVER] > [HOST MACHINE]: GREv1, call 63861, seq 3, ack 2, length 40: LCP, Conf-Ack (0x02), id 1, length 22 
17:43:02.112996 IP [EXTERNAL SERVER] > [HOST MACHINE]: GREv1, call 63861, seq 4, length 68: LCP, Conf-Request (0x01), id 2, length 54 
17:43:02.985988 IP [HOST MACHINE] > [EXTERNAL SERVER]: GREv1, call 7561, seq 3, length 36: LCP, Conf-Request (0x01), id 1, length 22 
17:43:03.017733 IP [EXTERNAL SERVER] > [HOST MACHINE]: GREv1, call 63861, seq 5, ack 3, length 40: LCP, Conf-Ack (0x02), id 1, length 22 
17:43:05.989102 IP [HOST MACHINE] > [EXTERNAL SERVER]: GREv1, call 7561, seq 4, length 36: LCP, Conf-Request (0x01), id 1, length 22 
17:43:06.020984 IP [EXTERNAL SERVER] > [HOST MACHINE]: GREv1, call 63861, seq 6, ack 4, length 40: LCP, Conf-Ack (0x02), id 1, length 22 
17:43:06.132897 IP [EXTERNAL SERVER] > [HOST MACHINE]: GREv1, call 63861, seq 7, length 68: LCP, Conf-Request (0x01), id 3, length 54 
17:43:08.992230 IP [HOST MACHINE] > [EXTERNAL SERVER]: GREv1, call 7561, seq 5, length 36: LCP, Conf-Request (0x01), id 1, length 22 
17:43:09.024055 IP [EXTERNAL SERVER] > [HOST MACHINE]: GREv1, call 63861, seq 8, ack 5, length 40: LCP, Conf-Ack (0x02), id 1, length 22 
17:43:10.136056 IP [EXTERNAL SERVER] > [HOST MACHINE]: GREv1, call 63861, seq 9, length 68: LCP, Conf-Request (0x01), id 4, length 54 
17:43:11.995347 IP [HOST MACHINE] > [EXTERNAL SERVER]: GREv1, call 7561, seq 6, length 36: LCP, Conf-Request (0x01), id 1, length 22 
17:43:12.026995 IP [EXTERNAL SERVER] > [HOST MACHINE]: GREv1, call 63861, seq 10, ack 6, length 40: LCP, Conf-Ack (0x02), id 1, length 22 
17:43:14.142478 IP [EXTERNAL SERVER] > [HOST MACHINE]: GREv1, call 63861, seq 11, length 68: LCP, Conf-Request (0x01), id 5, length 54 
17:43:14.998489 IP [HOST MACHINE] > [EXTERNAL SERVER]: GREv1, call 7561, seq 7, length 36: LCP, Conf-Request (0x01), id 1, length 22 
17:43:15.030213 IP [EXTERNAL SERVER] > [HOST MACHINE]: GREv1, call 63861, seq 12, ack 7, length 40: LCP, Conf-Ack (0x02), id 1, length 22 
17:43:18.001597 IP [HOST MACHINE] > [EXTERNAL SERVER]: GREv1, call 7561, seq 8, length 36: LCP, Conf-Request (0x01), id 1, length 22 
17:43:18.034240 IP [EXTERNAL SERVER] > [HOST MACHINE]: GREv1, call 63861, seq 13, ack 8, length 40: LCP, Conf-Ack (0x02), id 1, length 22 
17:43:18.149176 IP [EXTERNAL SERVER] > [HOST MACHINE]: GREv1, call 63861, seq 14, length 68: LCP, Conf-Request (0x01), id 6, length 54

That is, the host machine and external server are communicating, but the GRE packets never seem to make it to the veth device in the container.

Again, if I monitor the veth device on my laptop, using the same version of docker, I see the back-and-forth with the external server. Both machines also have identical iptables configurations.

Any advice would be greatly appreciated!

Quite possibly you’ll just need to run the following two commands to enable NAT for PPTP on the Linux host:

modprobe ip_conntrack_pptp (Enables connection tracking in the firewall for PPTP)
modprobe ip_nat_pptp (Enables NAT for PPTP in the firewall)
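If those modules do the trick, you’ll probably want them loaded at boot as well. A sketch for a systemd-based host like Debian Stretch (the drop-in file name is arbitrary; the lsmod output further down the thread shows the nf_* names these modules actually load under on recent kernels):

```shell
# Load the PPTP conntrack/NAT helpers now; on recent kernels the
# modules are named nf_conntrack_pptp / nf_nat_pptp:
modprobe nf_conntrack_pptp
modprobe nf_nat_pptp

# Persist across reboots via systemd-modules-load
# (hypothetical drop-in file name):
cat > /etc/modules-load.d/pptp.conf <<'EOF'
nf_conntrack_pptp
nf_nat_pptp
EOF
```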

On my laptop, I did have to run “modprobe ip_conntrack_pptp” to get the pptp connection to work inside the container.

On the server (where the container is not working), loading this unfortunately had no effect. I forgot to mention this in my original post.

Thanks very much for your reply, though!

EDIT: The relevant loaded modules are:

$ lsmod | grep pptp
nf_nat_pptp            16384  0
nf_nat_proto_gre       16384  1 nf_nat_pptp
nf_conntrack_pptp      16384  1 nf_nat_pptp
nf_conntrack_proto_gre    16384  1 nf_conntrack_pptp
nf_nat                 24576  4 nf_nat_pptp,nf_nat_proto_gre,nf_nat_masquerade_ipv4,nf_nat_ipv4
nf_conntrack          114688  9 nf_nat_pptp,nf_conntrack_ipv4,nf_conntrack_pptp,nf_conntrack_netlink,nf_conntrack_proto_gre,nf_nat_masquerade_ipv4,xt_conntrack,nf_nat_ipv4,nf_nat

It is perplexing that it would behave differently on your laptop vs. your server. I guess the next thing to look at is the configuration of the bridge network in Docker:

docker network inspect bridge

I wonder if anything in the Options section is different, or maybe there is a different subnet in the IPAM.Config section that the iptables rules aren’t handling correctly (the latter seems like more of a long shot, though, given that the Docker engine manages that part of the rules).
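To diff just those two sections between the laptop and the server, something like this should work (using the CLI’s --format option):

```shell
# Dump only the IPAM config and driver options of the default bridge:
docker network inspect bridge \
  --format 'IPAM: {{json .IPAM.Config}}  Options: {{json .Options}}'
```

Run it on both machines and compare the output.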

Also, are you running your own custom image or using one from Docker Hub or some such? Just looking to see if I can test your configuration locally.

I was definitely really frustrated by the different behaviors. I looked more into the conntrack stuff, because even though I had blindly loaded those modules before, I didn’t really understand what they did, and your post really made me feel like those ultimately were the problem.

I checked dmesg, and I found the explanation for why it worked locally and not on the server.

On the server:

nf_conntrack: default automatic helper assignment has been turned off for security reasons and CT-based firewall rule not found. Use the iptables CT target to attach helpers instead.

On my laptop:

nf_conntrack: automatic helper assignment is deprecated and it will be removed soon. Use the iptables CT target to attach helpers instead.

So the issue is that on the server, which was set up more recently, automatic conntrack helper assignment was turned off even though the module was loaded.

To test whether this was the issue, I ran (as root):

# echo 1 > /proc/sys/net/netfilter/nf_conntrack_helper

With this change, the container connected!

As the output suggests, this is not an ideal solution because it is insecure, so I will report back with the iptables configuration that works for me, in case it is of use to someone in the future.
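For anyone who does accept the risk: the echo into /proc above only lasts until reboot. The equivalent sysctl form, with a persistence sketch (hypothetical drop-in file name), would be:

```shell
# Same effect as the echo into /proc above:
sysctl -w net.netfilter.nf_conntrack_helper=1

# Persist across reboots (again: insecure; the CT target approach
# from the kernel message is preferred):
echo 'net.netfilter.nf_conntrack_helper = 1' \
  > /etc/sysctl.d/90-conntrack-helper.conf
```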

Thanks for your help, and for pushing me in the right direction!


Great job tracking that down @sltousie!

It seems like the rules should be something along the lines of:

iptables -A OUTPUT -m conntrack -p tcp --dport 1723 --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m conntrack -p tcp --sport 1723 --ctstate ESTABLISHED,RELATED -j ACCEPT

iptables -A OUTPUT -m conntrack -p 47 --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m conntrack -p 47 --ctstate ESTABLISHED,RELATED -j ACCEPT

That should satisfy the iptables CT target requirement so that the (insecure) automatic helper assignment doesn’t have to be re-enabled.

I don’t have anything to set up a PPTP tunnel against, though, so please update if you find you need to make other (or different) tweaks to the iptables rules.
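One more untested sketch: the ACCEPT rules above allow the traffic through, but as I read the kernel message, the helper itself has to be attached with the CT target in the raw table. For traffic originating in a container (which reaches the host via PREROUTING) that would look roughly like:

```shell
# Attach the pptp conntrack helper explicitly (the CT target is only
# valid in the raw table). Container traffic enters via PREROUTING:
iptables -t raw -A PREROUTING -p tcp --dport 1723 -j CT --helper pptp

# For PPTP connections originated by the host itself:
iptables -t raw -A OUTPUT -p tcp --dport 1723 -j CT --helper pptp
```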

owwww thanks, that’s my problem!

echo 1 > /proc/sys/net/netfilter/nf_conntrack_helper

Alexander Peter

Hi Everyone,

Sorry to reuse this thread, but I am having a similar issue with the default bridge docker0. In my case, even disabling the firewall on the Docker host (lab testing phase) did not help.

Scenario

  1. After attaching to the container instance (172.22.0.2, running Alpine Linux), a ping to an external server (172.31.2.28) is successful.

  2. But a ping from the same external server to the container instance fails.

  3. tcpdump shows the ICMP echo requests not being answered by the Docker host:
    listening on ens192, link-type EN10MB (Ethernet), capture size 262144 bytes
    09:35:54.799635 IP 172.31.2.28 > 172.22.0.2: ICMP echo request, id 50322, seq 32, length 64
    09:35:55.823619 IP 172.31.2.28 > 172.22.0.2: ICMP echo request, id 50322, seq 33, length 64
    09:35:56.847607 IP 172.31.2.28 > 172.22.0.2: ICMP echo request, id 50322, seq 34, length 64
    09:35:57.871581 IP 172.31.2.28 > 172.22.0.2: ICMP echo request, id 50322, seq 35, length 64

  4. I have confirmed that IP forwarding is enabled on the Docker host, iptables is not running, and no iptables or awall packages are included in the container image:
    / # apk list
    WARNING: Ignoring APKINDEX.2c4ac24e.tar.gz: No such file or directory
    WARNING: Ignoring APKINDEX.40a3604f.tar.gz: No such file or directory
    openipmi-lanserv-2.0.28-r0 x86_64 {openipmi} (LGPL-2.0-or-later and GPL-2.0-or-later or BSD-3-Clause) [installed]
    musl-1.1.24-r9 x86_64 {musl} (MIT) [installed]
    pcre-8.44-r0 x86_64 {pcre} (BSD-3-Clause) [installed]
    zlib-1.2.11-r3 x86_64 {zlib} (Zlib) [installed]
    apk-tools-2.10.5-r1 x86_64 {apk-tools} (GPL-2.0-only) [installed]
    libintl-0.20.2-r0 x86_64 {gettext} (LGPL-2.1-or-later) [installed]
    musl-utils-1.1.24-r9 x86_64 {musl} (MIT BSD GPL2+) [installed]
    libssl1.1-1.1.1g-r0 x86_64 {openssl} (OpenSSL) [installed]
    libmount-2.35.2-r0 x86_64 {util-linux} (GPL-2.0 GPL-2.0-or-later LGPL-2.0-or-later BSD Public-Domain) [installed]
    alpine-baselayout-3.2.0-r7 x86_64 {alpine-baselayout} (GPL-2.0-only) [installed]
    popt-1.16-r7 x86_64 {popt} (custom) [installed]
    alpine-keys-2.2-r0 x86_64 {alpine-keys} (MIT) [installed]
    busybox-1.31.1-r19 x86_64 {busybox} (GPL-2.0-only) [installed]
    scanelf-1.2.6-r0 x86_64 {pax-utils} (GPL-2.0-only) [installed]
    ca-certificates-bundle-20191127-r4 x86_64 {ca-certificates} (MPL-2.0 GPL-2.0-or-later) [installed]
    libc-utils-0.7.2-r3 x86_64 {libc-dev} (BSD-2-Clause AND BSD-3-Clause) [installed]
    libblkid-2.35.2-r0 x86_64 {util-linux} (GPL-2.0 GPL-2.0-or-later LGPL-2.0-or-later BSD Public-Domain) [installed]
    libffi-3.3-r2 x86_64 {libffi} (MIT) [installed]
    glib-2.64.6-r0 x86_64 {glib} (LGPL-2.1-or-later) [installed]
    libtls-standalone-2.9.1-r1 x86_64 {libtls-standalone} (ISC) [installed]
    ssl_client-1.31.1-r19 x86_64 {busybox} (GPL-2.0-only) [installed]
    openipmi-libs-2.0.28-r0 x86_64 {openipmi} (LGPL-2.0-or-later and GPL-2.0-or-later or BSD-3-Clause) [installed]
    libcrypto1.1-1.1.1g-r0 x86_64 {openssl} (OpenSSL) [installed]

Thanks in advance for any insight.

-minh