I have an issue creating a service from an image that internally mounts a CIFS share.
This is how I map the CIFS share inside the container:
COPY nas-credentials /path/credentials
RUN mkdir /Data
RUN echo "//nas-ip/share /Data cifs credentials=/path/credentials,dir_mode=0777,file_mode=0777,nounix,sec=ntlmssp,vers=3.0 0 0" >> /etc/fstab
COPY start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
CMD ["sh", "-c", "/usr/local/bin/start.sh"]
This is the start.sh file, which mounts the drive:
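The original start.sh is not shown in the post; a minimal sketch of what such an entrypoint typically looks like (the `mount -a` call and the placeholder workload are assumptions, not the poster's actual script):

```shell
#!/bin/sh
# Mount everything listed in /etc/fstab, including the CIFS share added
# at build time; this needs CAP_SYS_ADMIN (or --privileged) at runtime.
mount -a || { echo "mounting /Data failed" >&2; exit 1; }

# Hand over to the container's actual workload (placeholder command).
exec tail -f /dev/null
```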
When I build this image and start it as a container with the following options, everything works:
docker run -d --name my-container --privileged my-image
If I try to start the image as a service in Docker Swarm, it looks like there are not enough permissions to mount the CIFS share:
docker service create --replicas 1 --name my-service --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH my-image
Thank you Meyay,
I have tried with a volume created on the swarm leader, but it looks like the volume was not propagated across the workers. In that case, when I create the service, the container starting on one of the workers does not map the already created volume.
In addition, this share must be used by a bunch of containers (microservices) that will do some work on the files in this folder.
Furthermore, volumes are locally managed, and immutable by design. If the volume declaration in a compose file is changed, the change will not be propagated to the volume configuration on the node. It needs to be manually deleted, and then re-created by docker swarm.
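The manual delete-and-recreate step could look roughly like this on each affected node (the volume and stack names `my-nas-volume` and `my-stack` are placeholders):

```shell
# Remove the stale, locally stored volume declaration on this node
# (only possible once no container on the node uses it anymore).
docker volume rm my-nas-volume

# On the next deployment, swarm re-creates the volume from the
# updated declaration in the compose file.
docker stack deploy -c docker-compose.yml my-stack
```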
Thank you Meyay,
I have read the thread that you shared and I followed your advice to try switching to NFS. Unfortunately, I am again getting an issue with mapping the volume.
Here is the docker-compose.yml:
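(The compose file itself is not reproduced here; a minimal sketch of an NFS volume declaration of this kind, where `nas-data`, `nasIP`, and the export path are placeholders, not the poster's actual values:)

```yaml
volumes:
  nas-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=nasIP,nfsvers=4,rw"
      device: ":/volume1/folder"
```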
Thank you for reformatting your post. It makes it way easier to read and to see if indentation is applied correctly.
I never saw NFS mounts with credentials. Did you enable Kerberos?
I can see three things that could prevent the volume from being used:
nfs-utils is not installed on the nodes
the NFS export on the NAS does not cover the IP range of every swarm node (needs to be configured on the NAS; can be tested by performing temporary manual mounts on each node)
the volume declaration is not identical on each node (changes in the compose file are not provisioned to existing volume declarations → they are immutable; delete the volume declaration on nodes where it is not correct and let compose recreate it)
The volume declaration from the link I shared is used like that with a Syno NAS, with NFSv4.1 enabled and without Kerberos.
Thank you for the reply.
I have removed the authentication for the NFS folder and the result is the same. When I try to mount the NFS share directly on the host with the following command, it works:
mount -t nfs -o mountvers=4 nasIP:/volume1/folder /local-folder
I have the feeling that the volume is created only on the leader and is not propagated to the workers. When I inspect the swarm and the nodes, I don't see any issues.
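One way to check this is to inspect the volume on every node individually, since named volumes are local objects and each node keeps its own copy of the declaration (the volume name `nas-data` is a placeholder):

```shell
# Run these on each swarm node, not just the leader.
docker volume ls --filter name=nas-data

# Compare the driver options across nodes; they must be identical.
docker volume inspect nas-data --format '{{ json .Options }}'
```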