Hi guys,
I’m quite new to Docker, so sorry if the answer is obvious. I’ve been reading a lot without finding a proper answer, so I’m posting here in the hope that someone can help me understand.
I have a swarm of 3 managers (RasPi4B) + 1 worker (RasPi5), all freshly built on Raspberry Pi OS Bookworm and running the latest Docker version. I should say that I already have a lot of stacks working perfectly well; this one is the last I need to finalize my home infrastructure.
When deploying the following stack via Portainer, I can send DNS queries to each node individually on its own host IP, and even DoT answers properly. But the web interfaces (18006:80 and 18007:443) are unreachable, and since the data is already populated in the volumes, I believe 18008:3000 is of no use (it only serves the first-run setup).
I’ve also carefully chosen the port numbers in my swarm so that they are never already used by any other service/node/host OS.
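To illustrate (the IP below is just a placeholder for one of my actual node addresses):

dig @192.168.2.11 example.com +short                      # answers fine
curl -v --connect-timeout 5 http://192.168.2.11:18006/    # no answer at all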
Here is the stack:
---
version: "3.8"
services:
  agh1:
    image: adguard/adguardhome
    ports:
      - target: 80            # web UI (HTTP)
        published: 18006
        protocol: tcp
        mode: host
      - target: 443           # web UI (HTTPS)
        published: 18007
        protocol: tcp
        mode: host
      - target: 3000          # first-run setup wizard
        published: 18008
        protocol: tcp
        mode: host
      - target: 853           # DNS-over-TLS
        published: 853
        protocol: tcp
        mode: host
      - target: 53            # plain DNS (TCP)
        published: 53
        protocol: tcp
        mode: host
      - target: 53            # plain DNS (UDP)
        published: 53
        protocol: udp
        mode: host
    deploy:
      placement:
        constraints: [node.hostname == RasPi1]
    volumes:
      - agh1-work:/opt/adguardhome/work
      - agh1-conf:/opt/adguardhome/conf
      - certs:/certs
  agh2:
    image: adguard/adguardhome
    ports:                    # same port layout as agh1
      - target: 80
        published: 18006
        protocol: tcp
        mode: host
      - target: 443
        published: 18007
        protocol: tcp
        mode: host
      - target: 3000
        published: 18008
        protocol: tcp
        mode: host
      - target: 853
        published: 853
        protocol: tcp
        mode: host
      - target: 53
        published: 53
        protocol: tcp
        mode: host
      - target: 53
        published: 53
        protocol: udp
        mode: host
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.hostname == RasPi2]
    volumes:
      - agh2-work:/opt/adguardhome/work
      - agh2-conf:/opt/adguardhome/conf
      - certs:/certs
  agh3:
    image: adguard/adguardhome
    ports:                    # same port layout as agh1
      - target: 80
        published: 18006
        protocol: tcp
        mode: host
      - target: 443
        published: 18007
        protocol: tcp
        mode: host
      - target: 3000
        published: 18008
        protocol: tcp
        mode: host
      - target: 853
        published: 853
        protocol: tcp
        mode: host
      - target: 53
        published: 53
        protocol: tcp
        mode: host
      - target: 53
        published: 53
        protocol: udp
        mode: host
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.hostname == RasPi3]
    volumes:
      - agh3-work:/opt/adguardhome/work
      - agh3-conf:/opt/adguardhome/conf
      - certs:/certs
  agh4:
    image: adguard/adguardhome
    ports:                    # same port layout as agh1
      - target: 80
        published: 18006
        protocol: tcp
        mode: host
      - target: 443
        published: 18007
        protocol: tcp
        mode: host
      - target: 3000
        published: 18008
        protocol: tcp
        mode: host
      - target: 853
        published: 853
        protocol: tcp
        mode: host
      - target: 53
        published: 53
        protocol: tcp
        mode: host
      - target: 53
        published: 53
        protocol: udp
        mode: host
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.hostname == RasPi4]
    volumes:
      - agh4-work:/opt/adguardhome/work
      - agh4-conf:/opt/adguardhome/conf
      - certs:/certs
volumes:
  # all volumes are served from the NAS at 192.168.2.1 over NFSv4
  agh1-conf:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.2.1,soft,nolock,noatime,rsize=8192,wsize=8192,tcp,timeo=14,nfsvers=4"
      device: ":/volume1/nfs/docker/agh/1/conf"
  agh1-work:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.2.1,soft,nolock,noatime,rsize=8192,wsize=8192,tcp,timeo=14,nfsvers=4"
      device: ":/volume1/nfs/docker/agh/1/work"
  agh2-conf:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.2.1,soft,nolock,noatime,rsize=8192,wsize=8192,tcp,timeo=14,nfsvers=4"
      device: ":/volume1/nfs/docker/agh/2/conf"
  agh2-work:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.2.1,soft,nolock,noatime,rsize=8192,wsize=8192,tcp,timeo=14,nfsvers=4"
      device: ":/volume1/nfs/docker/agh/2/work"
  agh3-conf:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.2.1,soft,nolock,noatime,rsize=8192,wsize=8192,tcp,timeo=14,nfsvers=4"
      device: ":/volume1/nfs/docker/agh/3/conf"
  agh3-work:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.2.1,soft,nolock,noatime,rsize=8192,wsize=8192,tcp,timeo=14,nfsvers=4"
      device: ":/volume1/nfs/docker/agh/3/work"
  agh4-conf:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.2.1,soft,nolock,noatime,rsize=8192,wsize=8192,tcp,timeo=14,nfsvers=4"
      device: ":/volume1/nfs/docker/agh/4/conf"
  agh4-work:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.2.1,soft,nolock,noatime,rsize=8192,wsize=8192,tcp,timeo=14,nfsvers=4"
      device: ":/volume1/nfs/docker/agh/4/work"
  certs:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.2.1,soft,nolock,noatime,rsize=8192,wsize=8192,tcp,timeo=14,nfsvers=4"
      device: ":/volume1/nfs/certs"
So it seems that 53 and 853 are properly exposed, while 18006, 18007 and 18008 are not, or at least are not working as I’d expect. How is this possible? I don’t even know how to troubleshoot this…
Here is the output of
sudo ss -lntp
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 0.0.0.0:18008 0.0.0.0:* users:(("docker-proxy",pid=1161802,fd=4))
LISTEN 0 4096 0.0.0.0:18006 0.0.0.0:* users:(("docker-proxy",pid=1161885,fd=4))
LISTEN 0 4096 0.0.0.0:18007 0.0.0.0:* users:(("docker-proxy",pid=1161853,fd=4))
LISTEN 0 4096 0.0.0.0:18001 0.0.0.0:* users:(("docker-proxy",pid=2740,fd=4))
LISTEN 0 4096 0.0.0.0:18002 0.0.0.0:* users:(("docker-proxy",pid=2703,fd=4))
LISTEN 0 4096 0.0.0.0:18003 0.0.0.0:* users:(("docker-proxy",pid=2573,fd=4))
LISTEN 0 128 0.0.0.0:22000 0.0.0.0:* users:(("sshd",pid=815,fd=3))
LISTEN 0 4096 0.0.0.0:853 0.0.0.0:* users:(("docker-proxy",pid=1161833,fd=4))
LISTEN 0 4096 0.0.0.0:53 0.0.0.0:* users:(("docker-proxy",pid=1161906,fd=4))
LISTEN 0 64 0.0.0.0:43465 0.0.0.0:*
LISTEN 0 4096 *:18012 *:* users:(("dockerd",pid=845,fd=126))
LISTEN 0 4096 *:18013 *:* users:(("dockerd",pid=845,fd=135))
LISTEN 0 4096 *:18014 *:* users:(("dockerd",pid=845,fd=105))
LISTEN 0 4096 *:18015 *:* users:(("dockerd",pid=845,fd=106))
LISTEN 0 4096 [::]:18008 [::]:* users:(("docker-proxy",pid=1161812,fd=4))
LISTEN 1 4096 *:18010 *:* users:(("dockerd",pid=845,fd=61))
LISTEN 0 4096 *:18011 *:* users:(("dockerd",pid=845,fd=103))
LISTEN 0 4096 *:18005 *:* users:(("dockerd",pid=845,fd=32))
LISTEN 0 4096 [::]:18006 [::]:* users:(("docker-proxy",pid=1161893,fd=4))
LISTEN 0 4096 [::]:18007 [::]:* users:(("docker-proxy",pid=1161861,fd=4))
LISTEN 0 4096 *:18000 *:* users:(("dockerd",pid=845,fd=59))
LISTEN 0 4096 [::]:18001 [::]:* users:(("docker-proxy",pid=2748,fd=4))
LISTEN 0 4096 [::]:18002 [::]:* users:(("docker-proxy",pid=2712,fd=4))
LISTEN 0 4096 [::]:18003 [::]:* users:(("docker-proxy",pid=2656,fd=4))
LISTEN 0 4096 *:18017 *:* users:(("dockerd",pid=845,fd=144))
LISTEN 0 4096 *:18018 *:* users:(("dockerd",pid=845,fd=85))
LISTEN 0 128 [::]:22000 [::]:* users:(("sshd",pid=815,fd=4))
LISTEN 0 64 [::]:45977 [::]:*
LISTEN 0 4096 [::]:853 [::]:* users:(("docker-proxy",pid=1161840,fd=4))
LISTEN 0 4096 [::]:53 [::]:* users:(("docker-proxy",pid=1161913,fd=4))
LISTEN 0 4096 *:7946 *:* users:(("dockerd",pid=845,fd=33))
LISTEN 0 4096 *:2377 *:* users:(("dockerd",pid=845,fd=23))
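One thing I can still do is check from inside the container itself whether the web UI answers locally. A sketch, run on the node hosting agh1 (I’m assuming busybox wget is available in the Alpine-based image):

AGH=$(sudo docker ps -qf name=agh1)
sudo docker exec "$AGH" wget -qO- http://127.0.0.1:80/ >/dev/null && echo "UI answers inside the container"
# check which address/port the UI binds to (key names depend on the AdGuard Home config schema version)
sudo docker exec "$AGH" grep -E 'bind_host|bind_port|address' /opt/adguardhome/conf/AdGuardHome.yaml

If the UI answers on 127.0.0.1 inside the container but not on the host port, the problem would be in the port publishing; if it doesn’t answer at all, it would be in the AdGuard Home config.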
In the end, my objective is to have 4 separate AdGuard Home instances behind a VIP so they back each other up in case the master goes down. I had this working with 4 containers deployed independently on each node, but that was a pain to manage, whereas with the stack I only need one file to do everything.
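(For context, the standalone setup was roughly the following on each node, simplified from memory:)

sudo docker run -d --name adguardhome \
  -p 53:53/tcp -p 53:53/udp -p 853:853/tcp \
  -p 18006:80/tcp -p 18007:443/tcp -p 18008:3000/tcp \
  -v agh-work:/opt/adguardhome/work \
  -v agh-conf:/opt/adguardhome/conf \
  adguard/adguardhome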
Thanks in advance for your help!!