Hi everyone,
I am trying to connect my cloud server and the Raspberry Pi on my home network via Docker Swarm.
I followed the steps from the Docker docs, specifically the section “Use an overlay network for standalone containers”.
The problem: the containers cannot communicate with each other; the ping alpine2 test from the example fails. The network test-net is created on both servers, and the worker (Pi) can connect a container to it. However, docker network inspect test-net only ever shows one container at a time, namely the one running on the respective system: the manager sees the manager's container and the worker only sees the worker's container. Shouldn't both be visible here?
The nodes themselves are connected successfully, because from the manager (cloud) I can create stacks and containers on the worker (Pi) via services.
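For reference, the steps I followed look roughly like this (paraphrased from the linked docs section; network and container names are the ones from the tutorial, and the commands of course need a running swarm):

```shell
# On the manager: create an attachable overlay network
docker network create --driver overlay --attachable test-net

# On the manager: start a container attached to it
docker run -dit --name alpine1 --network test-net alpine

# On the worker (Pi): start a second container on the same network
# (test-net only appears on the worker once a container attaches to it)
docker run -dit --name alpine2 --network test-net alpine

# From alpine1, this ping should succeed according to the docs,
# but in my setup it fails
docker exec -it alpine1 ping -c 2 alpine2
```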
Output of docker node ls:
ID                HOSTNAME      STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
<manager-id> *    debian        Ready    Active         Leader           26.1.3
<worker-id>       raspberrypi   Ready    Active                          26.1.3
Output of docker info on the Pi:
Client: Docker Engine - Community
Version: 26.1.3
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.14.0
Path: /usr/libexec/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.27.0
Path: /usr/libexec/docker/cli-plugins/docker-compose
Server:
Containers: 5
Running: 3
Paused: 0
Stopped: 2
Images: 21
Server Version: 26.1.3
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
Swarm: active
NodeID: <id>
Is Manager: false
Node Address: 192.168.1.5 <------ maybe this is an issue?
Manager Addresses:
<cloud-ip>:2377
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89
runc version: v1.1.12-0-g51d5e94
init version: de40ad0
Security Options:
seccomp
Profile: builtin
cgroupns
Kernel Version: 6.6.28+rpt-rpi-v8
Operating System: Debian GNU/Linux 12 (bookworm)
OSType: linux
Architecture: aarch64
CPUs: 4
Total Memory: 7.759GiB
Name: raspberrypi
ID: <id>
Docker Root Dir: /var/lib/docker
Debug Mode: false
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
What also confuses me, and is part of my question: can the node address be changed here? It is currently the LAN IP address of the Pi.
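If the LAN address really is the problem, my understanding (untested) is that the worker would have to leave the swarm and rejoin with an explicit --advertise-addr; a sketch, with <token>, <public-ip-worker>, and <cloud-ip> as placeholders:

```shell
# On the worker (Pi): leave the swarm
docker swarm leave

# On the manager: print a fresh worker join command including the token
docker swarm join-token worker

# On the worker: rejoin, advertising the public IP instead of the LAN IP
docker swarm join --token <token> \
  --advertise-addr <public-ip-worker> \
  <cloud-ip>:2377
```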
While researching I also came across the following script, which I ran to check whether any kernel features are missing:
https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh
Manager Output:
- CONFIG_MEMCG_SWAP: missing
(cgroup swap accounting is currently enabled)
- "zfs":
- /dev/zfs: missing
- zfs command: missing
- zpool command: missing
Worker Output:
- CONFIG_MEMCG_SWAP: missing
(cgroup swap accounting is currently enabled)
- CONFIG_CGROUP_HUGETLB: missing
- CONFIG_SECURITY_SELINUX: missing
- Network Drivers:
- "overlay":
- CONFIG_VXLAN: enabled (as module)
- CONFIG_BRIDGE_VLAN_FILTERING: missing
- "zfs":
- /dev/zfs: missing
- zfs command: missing
- zpool command: missing
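In case anyone wants to reproduce the check, this is roughly how I ran it (assuming curl and bash; the kernel config location differs per distro, and /proc/config.gz is only there if the kernel was built with CONFIG_IKCONFIG_PROC):

```shell
# Download and run the kernel feature checker from the moby repo
curl -fsSL https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh | bash

# Double-check the overlay-relevant options directly
zgrep -E 'CONFIG_VXLAN|CONFIG_BRIDGE_VLAN_FILTERING' /proc/config.gz 2>/dev/null \
  || grep -E 'CONFIG_VXLAN|CONFIG_BRIDGE_VLAN_FILTERING' /boot/config-"$(uname -r)"
```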
I also checked my ports using nc:
$ nc -vz <ip manager> 2377
Connection to <ip manager> 2377 port [tcp/*] succeeded!
$ nc -vz <ip manager> 7946
Connection to <ip manager> 7946 port [tcp/*] succeeded!
$ nc -vz -u <ip manager> 7946
Connection to <ip manager> 7946 port [udp/*] succeeded!
$ nc -vz -u <ip manager> 4789
Connection to <ip manager> 4789 port [udp/*] succeeded!
$ nc -vz <public-ip-worker> 2377
nc: connect to <public-ip-worker> port 2377 (tcp) failed: Connection refused
$ nc -vz <public-ip-worker> 7946
Connection to <public-ip-worker> 7946 port [tcp/*] succeeded!
$ nc -vz -u <public-ip-worker> 7946
Connection to <public-ip-worker> 7946 port [udp/*] succeeded!
$ nc -vz -u <public-ip-worker> 4789
Connection to <public-ip-worker> 4789 port [udp/*] succeeded!
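Since nc's UDP "succeeded" only means that no ICMP port-unreachable came back, I assume it is not conclusive on its own. As an additional sanity check, one could watch for VXLAN traffic directly while generating overlay traffic; a sketch, assuming tcpdump is installed:

```shell
# On the worker (Pi): watch the VXLAN data port while the manager pings
sudo tcpdump -ni any udp port 4789

# Meanwhile, on the manager, generate overlay traffic, e.g.:
#   docker exec -it alpine1 ping alpine2
# If no packets show up on the worker, the VXLAN path is being
# dropped or mangled (firewall, NAT) somewhere in between.
```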
The port itself is open, though; I verified that by publishing traefik/whoami on port 2377. Does port 2377 even have to be reachable on the worker node? I have read that it is only used for the initial connection to the manager. It would be great if someone could confirm this.
Many thanks in advance
Dominik