So I tried using Swarm. My use case is the following:
I have 3 Raspberry Pis and my desktop PC. I created a swarm with my desktop PC and joined it with the Raspberry Pis. I wanted to deploy a container on each Raspberry Pi, so I drained the manager and started the service in global mode, which correctly started a container on each device. On each device a script generates random data, which should be sent via an HTTP request to the container on that device. I also have to send an HTTP request to the container to start the HTTP server inside it. When I start the script, the data is sent to the container correctly. The problem, however, is that a request on one device will start the HTTP server on all devices: starting the HTTP server on another device tells me it is already running. So apparently Swarm somehow keeps the state of the devices in sync, I guess?! Furthermore, when starting the containers manually on the devices everything works perfectly, but with Swarm I get error messages inside the container which I can't explain.
It seems I don’t really understand what Swarm does with the containers. So I’d like to ask for advice on whether this use case (containers on different devices with different behavior depending on the HTTP request) is actually feasible with Swarm. And if not, a pointer to some resource where I could read up on what’s happening, so that I can at least explain why it’s not possible with Swarm.
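For reference, the deployment described above can be sketched with the docker CLI; the service and image names below are placeholders, not the ones actually used:

```shell
# On the manager (the desktop PC): stop the manager from running tasks itself
docker node update --availability drain <manager-hostname>

# Start the service in global mode: one task per remaining (active) node
docker service create --name datagen --mode global myrepo/datagen:latest
```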
My advice on Swarm is: don’t use Swarm. Use Kubernetes or another orchestrator. But anyway, if you do want to use Swarm, note that you’ll need a service, and that a published port is opened on every node (the ingress routing mesh). So you’ll need to make sure each service publishes a unique port, otherwise you’ll have problems.
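To illustrate: in a stack file, every port published the default way goes through the routing mesh and is opened on all nodes, so two services must not publish the same port. A minimal sketch, with hypothetical service and image names:

```yaml
# stack.yml — service names, images and ports are placeholders
version: "3.8"
services:
  api:
    image: myrepo/api:latest
    ports:
      - "8080:8080"   # reachable on port 8080 of *every* node via the routing mesh
  metrics:
    image: myrepo/metrics:latest
    ports:
      - "9090:9090"   # must differ from 8080, or the deploy will fail
```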
Also, please be careful with multi-arch swarms: it is a little clumsy to restrict images to specific architectures (and even clumsier to create multi-arch images).
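One way to restrict where an image runs is a placement constraint on `node.platform.arch` — a sketch in a stack file, with a hypothetical service and image, assuming the Pis report `arm64`:

```yaml
services:
  armservice:
    image: myrepo/armservice:latest   # placeholder image
    deploy:
      placement:
        constraints:
          - node.platform.arch == arm64   # only schedule on arm64 nodes
```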
I had multiple problems with Swarm in the past: connectivity problems between nodes were common. Under load, Swarm would stop routing between services, or would not open ports on containers. Also, sometimes services stopped replicating for no apparent reason and without any logging (`docker events`, for example, returned nothing).
The most problematic part of all this is that error messages were unhelpful or completely absent. We had cases where a node was completely down, but Swarm still pushed images to it; we got the cryptic `context deadline exceeded` for multiple different problems. Docker monitoring didn’t help us either: it reported that all nodes were reachable and working when they were not… and we had none of these problems once we started using Kubernetes.
Most of these problems could only be solved by a complete reboot of the affected node. But, considering that Swarm doesn’t re-balance services by default…
Ah, and did I mention that a healthcheck in Swarm works by running a command inside the container? So it’s completely useless when there’s a network failure inside the swarm, because it won’t catch it.
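For reference, this is how a healthcheck is declared in a stack file; the check only ever executes inside the container, so a broken overlay network between nodes would not make it fail (service name and endpoint are placeholders):

```yaml
services:
  web:
    image: myrepo/web:latest      # placeholder image
    healthcheck:
      # Runs *inside* the container, so it can only see localhost
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```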
Try using replicated mode and just set the number of replicas to the number of nodes in your swarm.
However, looking further, it appears you want each container to only accept data from its own device/node. If you want to guarantee that, you would likely have to use host-mode networking with global mode rather than the default, because the default (the ingress routing mesh) is designed to round-robin requests across the replicas.
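A sketch of that combination: global mode plus host-mode port publishing skips the ingress routing mesh, so a request to a node’s IP only ever reaches the task running on that node (service name, image, and port are placeholders):

```yaml
version: "3.8"
services:
  datagen:
    image: myrepo/datagen:latest   # placeholder image
    deploy:
      mode: global                 # exactly one task per (non-drained) node
    ports:
      - target: 8080
        published: 8080
        protocol: tcp
        mode: host                 # bind directly on each node; no routing mesh,
                                   # so traffic is not load-balanced to other nodes
```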
Quite impressive that you’re able to run Docker on a Pi, though.