Manager node rejected in 3-node Docker Swarm cluster

I installed a cluster with 3 nodes, all on Ubuntu Server 24.04.
The cluster is made up of 1 Manager and 2 Workers.
I have installed docker and docker compose on all nodes.
I installed Portainer on the Manager node and so far so good, everything works fine.
However, when I try to deploy a new container with the placement constraint node.role == manager, the node is rejected.
Why?
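For reference, the relevant part of my stack file looks roughly like this (the service name and image are just placeholders, not my actual app):

```yaml
version: "3.8"

services:
  myapp:                      # placeholder service name
    image: myimage:latest     # placeholder image
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
```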
If I install the container on a worker node everything works fine.
But if Portainer is already installed and running on the Manager node, why is the node rejected for the other containers?
I am very inexperienced and just starting to experiment with Swarm with great curiosity; I really like it and wanted to understand where I am going wrong.
Thanks in advance to anyone who will help me!

Doesn't it say why it was rejected? Maybe something in the system logs? journald?

If I had to guess, I would say there are resource constraints and not enough resources left for the new container after already having one.

My second guess is that maybe Portainer changes something so nothing else can be run on manager nodes.

Check the output of docker service ps {servicename} --no-trunc to see what actually prevents the service task from running successfully.
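For example, with placeholder names:

```shell
# full, untruncated task history, including the ERROR column
docker service ps mystack_myservice --no-trunc

# or condensed to node, state and error per task
docker service ps --format "{{.Node}}: {{.CurrentState}} {{.Error}}" mystack_myservice
```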

Note: volumes and networks are immutable and managed per node. Meaning that if you make changes to them, you need to delete the affected network or volume on each node, so the next swarm stack deployment can re-create it with the new values.
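For example, assuming the stale definitions belong to a stack called mystack (all names are placeholders), you would run this on each affected node:

```shell
docker volume rm mystack_mydata
docker network rm mystack_mynet
```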

Hi,
Thanks for the quick response, and I apologize if I'm in the wrong section…
Each Ubuntu Server 24.04 machine is configured with 2 CPUs, 4 GB RAM and a 100 GB HDD; I hoped that would be enough for some small container experiments…
I don't know about the other hypothesis, the one about Portainer… I'll investigate a bit to see if there is any particular setting.
For now I have successfully installed two containers, Stirling-PDF and Crafty 4, and both run on the worker nodes: sometimes both on the same worker, sometimes one on each.
If I don't specify anything in the .yml file, the container is rejected when the system tries to schedule it on the Manager node; it then switches to a worker and runs fine there.
Another thing I noticed: when I specify a dedicated network for each container, for example crafty-network, in the left menu of Portainer under Networks I see two entries for crafty-network, one on the Manager node and one on the worker node. So it seems the Manager node manages and accepts at least the network settings of the various containers…

Hi meyay,
I tried, and the result is:
docker-swarm-manager-1 Shutdown Rejected 9 minutes ago “invalid mount config for type “bind”: bind source path does not exist: /mnt/stirling/trainingData”

But the trainingData folder exists, just as it does on the worker nodes… I don't understand…

However, I saw that the same error also came up with the other container, Crafty 4: worker node 1 was rejected and the task ended up on worker node 2.
I set up the shared storage across the 3 nodes with GlusterFS, and I noticed that sometimes I have to restart the nodes a few times because the shared part doesn't "hook", even though I had set everything up correctly in fstab so that the shared storage would be available after every restart.
Could this be the reason?
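For reference, my fstab entry looks roughly like this (hostname and volume name are placeholders):

```text
# /etc/fstab: _netdev makes systemd wait for the network before mounting
node1:/gv0  /mnt/stirling  glusterfs  defaults,_netdev  0  0
```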

The 3 Ubuntu Servers run as 3 Proxmox VMs.

As I thought. A bind is always local to a node. A bind literally mounts a host folder into a container folder by its inode.

If you mount a filesystem like GlusterFS into a host folder and bind that location into the container, the bind will use the inode present at that moment. If the mount takes place after the container is started, the container will still use the old inode it knew at container start, and will never know that it was replaced during a remount.
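You can reproduce the effect with a quick experiment (paths and names are illustrative):

```shell
# 1) start a container while /mnt/stirling is still a plain, empty host folder
docker run -d --name inode-demo -v /mnt/stirling:/data alpine sleep 3600

# 2) mount the glusterfs volume afterwards
mount -t glusterfs node1:/gv0 /mnt/stirling

# 3) the host now sees the gluster content, but the container's bind
#    still points at the inode of the old, empty directory
ls /mnt/stirling                  # shows the gluster content
docker exec inode-demo ls /data   # still empty
```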

It’s safe to say your problem is not caused by placement constraints, but instead is created by your storage choice.

Ah…I thought it was a good storage solution.
So is there no way to use GlusterFS with Docker Swarm?
To test this: if I make sure all the nodes boot with GlusterFS properly mounted and then start/restart a container, should it work on the Manager node this time?

This has nothing to do with the storage solution. You just need to make sure the storage is available on all nodes where the container could potentially start, before the container starts.
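On a systemd-based host like Ubuntu, one way to enforce that ordering is a drop-in that makes the Docker daemon wait for the mount. A sketch, assuming the mount point is /mnt/stirling (systemd derives the unit name mnt-stirling.mount from it; adjust to your path):

```ini
# /etc/systemd/system/docker.service.d/wait-for-gluster.conf
[Unit]
# mnt-stirling.mount is the unit systemd generates from the fstab entry
Requires=mnt-stirling.mount
After=mnt-stirling.mount
```

Apply it with systemctl daemon-reload followed by a restart of the docker service.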

If there is a Docker volume plugin that takes care of mounting the GlusterFS storage, it would be a safer solution than mounting your GlusterFS volume (or whatever it's called) into each node's filesystem at the very same location and binding it into the container…

Most people use volumes backed by NFSv4 remote shares. This way every node that runs the container can mount the share and everything just works.
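A minimal sketch of such a volume in a stack file, assuming an NFS server at 192.168.100.10 that exports /srv/nfs/appdata (both placeholders):

```yaml
version: "3.8"

services:
  myapp:
    image: myimage:latest     # placeholder
    volumes:
      - appdata:/data

volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.100.10,rw,nfsvers=4
      device: :/srv/nfs/appdata
```

Each node that runs a task of the service creates the volume locally on first use and mounts the share.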

There used to be brilliant solutions for Swarm, like px-developer from Portworx. Now that CSI drivers are supported, there could be a comeback of useful drivers.

Docker volume plugins that provide their API endpoint from within a running Docker container are usually a recipe for trouble: Docker starts with a huge delay if registered plugins are not available during startup, and containerized plugins can't start before Docker itself has started.