Issue creating a service in Docker Swarm with an image that internally mounts a CIFS share

I have an issue creating a service from an image that internally mounts a CIFS share.

This is how I map the cifs share inside the container:
COPY nas-credentials /path/credentials
RUN mkdir /Data
RUN echo "//nas-ip/share /Data cifs credentials=/path/credentials,dir_mode=0777,file_mode=0777,nounix,sec=ntlmssp,vers=3.0 0 0" >> /etc/fstab
COPY /usr/local/bin/
RUN chmod +x /usr/local/bin/
CMD ["sh", "-c", "/usr/local/bin/"]

This is the script that mounts the share:
mount -a
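For completeness, such an entrypoint script could look like the following minimal sketch. The script name and the main process are assumptions, since the original post elides the actual file name in the COPY and CMD lines:

```shell
#!/bin/sh
# Hypothetical entrypoint sketch: mount everything listed in /etc/fstab,
# then hand over to the container's main process.
set -e

# Requires CAP_SYS_ADMIN (or --privileged); otherwise mount fails with EPERM.
mount -a

# Placeholder for the container's real main process (assumption).
exec tail -f /dev/null
```

Using `exec` for the final process keeps it as PID 1 so it receives stop signals from Docker.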

When I build this image and start it as a container with the following options, everything works:
docker run -d --name my-container --privileged my-image

If I try to start the image as a service in Docker Swarm, it looks like there are not enough permissions to mount the CIFS share:
docker service create --replicas 1 --name my-service --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH my-image
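One way to narrow this down is to check whether the requested capabilities actually reach the task container. A rough sketch, assuming a throwaway service name and the `alpine` image:

```shell
# One-off service that prints its effective capability mask and exits
# (service name "cap-check" is an assumption for illustration).
docker service create --name cap-check \
  --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH \
  --restart-condition none \
  alpine sh -c 'grep CapEff /proc/self/status'

# Inspect the output, then clean up.
docker service logs cap-check
docker service rm cap-check
```

If the capability bits are missing from `CapEff`, the mount failure is a swarm/engine configuration issue rather than a problem in the image.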

Any help will be appreciated. Thank you!

What’s wrong with using named volumes backed by CIFS (apart from the fact that they don’t accept a credentials file)?

No one with long-term Docker experience mounts network shares inside a container; instead they use NFSv4-backed named volumes.

Thank you Meyay,
I have tried with a volume created on the swarm leader, but it looks like the volume was not propagated across the workers. In that case, when I create the service, the container that starts on one of the workers does not map the already created volume.
In addition, this share must be used by a bunch of containers (microservices) that will do some work on files in this folder.

Did you declare your volumes as shown in this post: Store Volumes on a CIFS NAS - #6 by meyay ?

Furthermore, volumes are locally managed, and immutable by design. If the volume declaration in a compose file is changed, the change will not be propagated to the volume configuration on the node. It needs to be manually deleted, and then re-created by docker swarm.
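In practice that could mean something along these lines on every node with a stale definition. The volume and stack names are assumptions; note that volumes created by a stack deployment are usually prefixed with the stack name:

```shell
# On each affected node: drop the stale local volume definition
# (the task using it must not be running on that node).
docker volume rm my-stack_nfs-volume

# On a manager: redeploy so swarm recreates the volume from the
# current compose file.
docker stack deploy -c docker-compose.yml my-stack
```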

Thank you Meyay,
I have read the thread you shared and followed your advice to switch to NFS. Unfortunately, I am again getting an issue with mapping the volume.
Here is the docker-compose.yml:

version: '3.7'

services:
  my-service:
    image: image
    volumes:
      - nfs-volume:/Data
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == worker

volumes:
  nfs-volume:
    driver: local
    driver_opts:
      type: nfs
      o: addr={SynologyNASIP},nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,username=user,password=pass
      device: :/volume1/folder

Then I create a stack with the following command:

docker stack deploy -c docker-compose.yml --with-registry-auth my-stack
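When a task fails to start like this, the actual mount error is usually visible in the task state. A sketch, assuming the service is named `my-service` inside the stack:

```shell
# --no-trunc keeps the full error message in the ERROR column,
# e.g. a failed NFS mount on the scheduled node.
docker service ps --no-trunc my-stack_my-service

# Task logs, if the container got far enough to produce any.
docker service logs my-stack_my-service
```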

The result is that the container cannot start. If I remove the volume options from the compose file, everything comes up.

NFS is enabled for this shared folder on the Synology, and the client IP is allowed.

Thank you again for your support.

Can you do me a favor and format the compose file content according to: How to format your forum posts. It makes the compose file easier to read.

I apologize for the inconvenience. I edited the previous post.

Thank you for reformatting your post. It makes it way easier to read and to see if indentation is applied correctly.

I have never seen NFS mounts with credentials. Did you enable Kerberos?

I can see three things that could prevent the volume from being used:

  • nfs-utils is not installed on the nodes
  • the nfs export on the nas does not cover the IP range of every swarm node (needs to be configured on the nas; can be tested by performing temporary manual mounts on each node)
  • the volume declaration is not identical on each node (changes in the compose file are not provisioned to existing volume declarations → they are immutable; delete the volume declaration on nodes where it is not correct and let compose recreate it)
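The second and third points can be checked by hand on each node. A sketch with the placeholder values from this thread (the stack-prefixed volume name is an assumption):

```shell
# Temporary manual mount on a node to verify the export is reachable
# ({SynologyNASIP} and the export path are placeholders from the thread).
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs -o nfsvers=4.1 {SynologyNASIP}:/volume1/folder /mnt/nfs-test
ls /mnt/nfs-test
sudo umount /mnt/nfs-test

# Compare the local volume definition across nodes; the options must
# be identical everywhere.
docker volume inspect my-stack_nfs-volume --format '{{ json .Options }}'
```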

The volume declaration from the link I shared is used like that with a Syno NAS, with NFSv4.1 enabled and without Kerberos.

Thank you for the reply.
I have removed the authentication for the NFS folder and the result is the same. When I try to mount the NFS share directly on the host with the following command, it works:

mount -t nfs -o mountvers=4 nasIP:/volume1/folder /local-folder

I have the feeling that the volume is created only on the leader and is not propagated to the workers. When I inspect the swarm and the nodes, I don't see any issues.
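That suspicion is easy to verify, because volume definitions are node-local; listing them on each node shows where they actually exist (the volume name here is an assumption):

```shell
# Run on every node, not just the leader: `docker volume ls` only shows
# volumes defined on that particular node.
docker volume ls --filter name=nfs-volume
```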

It worked!!! Thank you very much for the support!
I have rebooted all the nodes in the swarm cluster, and now the volume maps and the container starts.
Thank you again!