Docker swarm overlay network encryption & verification

Hello all,

I’m trying to set up data-plane encryption on a swarm overlay network. I’m using the following setup:

OS: RHEL 8.4, firewall disabled, SELinux disabled
Docker Engine (Community) version: 20.10.11
Two VirtualBox machines, server8 & server9, connected via a host-only network 192.168.210.0/24 (192.168.210.3 / 192.168.210.4)
Docker daemons are configured with TLS, using the default ports
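
The TLS part is not directly related to the encryption question, but for completeness, the daemons are configured roughly like this in /etc/docker/daemon.json (the certificate paths are just examples from my own CA setup):

{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "/etc/docker/certs/ca.pem",
  "tlscert": "/etc/docker/certs/server-cert.pem",
  "tlskey": "/etc/docker/certs/server-key.pem"
}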

A swarm is created with the two Docker hosts:

[root@server8 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
dgn7fv465yse9s34yvblbymwh *   server8    Ready     Active         Leader           20.10.11
z2ssc2cg4hd7k0fdpzxcu53ku     server9    Ready     Active                          20.10.11
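
The swarm itself was created along these lines (join token shortened to a placeholder):

[root@server8 ~]# docker swarm init --advertise-addr 192.168.210.3
[root@server9 ~]# docker swarm join --token <worker-token> 192.168.210.3:2377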

I’m deploying a simple Apache service with the following stack file:

version: '3.7'
services:
  httpd1:
    image: myhttpd:latest
    deploy:
      replicas: 2
    ports: [ "8080:80" ]   # quoted to avoid YAML parsing surprises
    networks:
      - internal
networks:
  internal:
    driver: overlay
    driver_opts:
      encrypted: ""        # also tried with < encrypted: "true" >

This gives me two running containers, one on each Docker host:

[root@server8 ~]# docker stack deploy -c httpd.yml stack-httpd
Creating network stack-httpd_internal
Creating service stack-httpd_httpd1

[root@server8 ~]# docker service ps stack-httpd_httpd1
ID             NAME                   IMAGE             NODE      DESIRED STATE   CURRENT STATE           ERROR     PORTS
biwevggcs6m0   stack-httpd_httpd1.1   myhttpd:latest   server8   Running         Running 8 minutes ago
o3xgut7k7u77   stack-httpd_httpd1.2   myhttpd:latest   server9   Running         Running 8 minutes ago

[root@server8 ~]# docker network inspect stack-httpd_internal --format="{{.Options}}"
map[com.docker.network.driver.overlay.vxlanid_list:4103 encrypted:]
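
To check whether IPsec security associations are actually set up for the overlay, I believe the kernel state can also be inspected directly; I would expect ESP SAs between 192.168.210.3 and 192.168.210.4 (output omitted here):

[root@server8 ~]# ip xfrm state     # lists the ESP security associations
[root@server8 ~]# ip xfrm policy    # lists the policies installed for the overlay traffic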

To check if the traffic on the data plane is really encrypted, I’m using the following command, where enp0s8 is the interface connected to the VirtualBox host-only network (192.168.210.0/24). The command captures the VXLAN traffic (UDP port 4789) on server9.

[root@server9 ~]# tcpdump -n -v -i enp0s8 udp port 4789

I’m using curl against the published port to verify that both containers return an HTTP reply. Both containers respond, and whenever the container on server9 replies, the tcpdump command above shows the traffic unencrypted.

[root@server8 ~]# curl 127.0.0.1:8080/test.php
This is a web page on host 10.0.0.27
[root@server8 ~]# curl 127.0.0.1:8080/test.php
This is a web page on host 10.0.0.28

Output on server9

[root@server9 ~]# tcpdump -n -v -i enp0s8 udp port 4789
...
15:26:04.566502 IP (tos 0x0, ttl 64, id 28543, offset 0, flags [none], proto UDP (17), length 343)
    192.168.210.4.38712 > 192.168.210.3.vxlan: VXLAN, flags [I] (0x08), vni 4096
IP (tos 0x0, ttl 64, id 57751, offset 0, flags [DF], proto TCP (6), length 293)
    10.0.0.28.webcache > 10.0.0.2.47470: Flags [P.], cksum 0xcedf (correct), seq 1:242, ack 87, win 219, options [nop,nop,TS val 3486129619 ecr 2805845994], length 241: HTTP, length: 241
        HTTP/1.1 200 OK
        Date: Wed, 23 Feb 2022 20:26:04 GMT
        Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips PHP/5.4.16
        X-Powered-By: PHP/5.4.16
        Content-Length: 37
        Content-Type: text/html; charset=UTF-8

        This is a web page on host 10.0.0.28
...

I’m expecting to see encrypted traffic on server9 on the Docker data-plane port 4789 for the inter-container traffic that is generated when I execute the curl commands on server8. One thing I do notice: the capture above shows vni 4096, while my internal network has vxlanid 4103 (see the network inspect output above), so this is presumably the ingress network carrying the published-port traffic rather than the encrypted overlay itself.
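
To confirm which network vni 4096 belongs to, the ingress network can be inspected the same way; I would expect its vxlanid_list to show 4096:

[root@server8 ~]# docker network inspect ingress --format="{{.Options}}"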

As an extra test, so that the traffic stays entirely on the internal overlay network, I logged into the httpd container running on server9 and sent a curl request to the other container, which is running on server8:

[root@server9 ~]# docker exec -it 267cffa54f28 /bin/bash

[root@267cffa54f28 /]# ping -c 1 3921c04fc6a0
PING 3921c04fc6a0 (10.0.6.3) 56(84) bytes of data.
64 bytes from stack-httpd_httpd1.1.biwevggcs6m0o6qk5eezja1kd.stack-httpd_internal (10.0.6.3): icmp_seq=1 ttl=64 time=0.462 ms
--- 3921c04fc6a0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms

[root@267cffa54f28 /]# curl 3921c04fc6a0:80/test.php
This is a web page on host 10.0.6.3

The following tcpdump output shows the unencrypted traffic generated during the test above, this time with vni 4103, which matches the internal network:

[root@server9 ~]# tcpdump -v -n -i enp0s8 udp port 4789
...
15:29:21.057925 IP (tos 0x0, ttl 64, id 14194, offset 0, flags [none], proto UDP (17), length 342)
    192.168.210.3.36407 > 192.168.210.4.vxlan: VXLAN, flags [I] (0x08), vni 4103
IP (tos 0x0, ttl 64, id 31614, offset 0, flags [DF], proto TCP (6), length 292)
    10.0.6.3.http > 10.0.6.4.53388: Flags [P.], cksum 0xc3ab (correct), seq 1:241, ack 85, win 215, options [nop,nop,TS val 480399798 ecr 348426539], length 240: HTTP, length: 240
        HTTP/1.1 200 OK
        Date: Wed, 23 Feb 2022 20:29:21 GMT
        Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips PHP/5.4.16
        X-Powered-By: PHP/5.4.16
        Content-Length: 36
        Content-Type: text/html; charset=UTF-8

        This is a web page on host 10.0.6.3
...

During testing I also tried another tcpdump command, to capture ESP traffic directly. It produces no output while the curl test commands are running.

[root@server9 ~]# tcpdump -v -i enp0s8 -p esp
dropped privs to tcpdump
tcpdump: listening on enp0s8, link-type EN10MB (Ethernet), capture size 262144 bytes

I’m confused by this. Am I not using the correct tcpdump commands to verify the traffic, or is the encryption really not being applied?

This may be related to iptables vs. nftables. See also: “swarm encrypted overlay network and nftables”, moby/moby issue #43382 (https://github.com/moby/moby/issues/43382).
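
Following that issue, I also want to check whether the mangle-table MARK rules that Docker installs for the VXLAN traffic are present, whether the u32 match module is loaded, and which iptables backend is in use (output omitted; based on that issue, the u32 match seems to be the part that can break with the nf_tables backend):

[root@server9 ~]# iptables -t mangle -L -n -v | grep 4789   # MARK rules for the VXLAN port
[root@server9 ~]# lsmod | grep xt_u32                       # u32 match module
[root@server9 ~]# iptables --version                        # "(nf_tables)" means the nftables backend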