I use "dokcer stack rm ",but it doesn't work for rm volume when i used nfs4 as volume type volume in swarm

I used nfs4 as the volume type in a swarm cluster.
When I change my NFS IP in the compose file, the change has no effect after I `docker stack rm` the stack and `docker stack deploy` a new one. `docker volume inspect mnt_mnt` still shows the same old IP.
I also tried `docker stack rm` plus `docker volume rm` on one of the cluster nodes, but that doesn't work either; the IP of the volume didn't change. I have to change the name of the volume so that the new container mounts a new volume.
Is `docker stack rm` supposed to remove the volumes of a stack's services?
And maybe docker swarm keeps some state behind when I run `docker volume rm` on only one of the machines.
The Docker version is docker-ce 18.09.5.
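
For reference, a compose declaration of such a volume typically looks roughly like the sketch below. The service name, volume name, IP and export path are placeholders, not taken from the original compose file; the point is that the NFS address ends up in the volume's driver options.

```yaml
version: "3.7"

services:
  app:
    image: nginx:alpine
    volumes:
      - mnt:/data                    # mounts the named volume into the container

volumes:
  mnt:
    driver: local                    # default driver; NFS is just a mount option here
    driver_opts:
      type: nfs4
      o: "addr=192.168.1.10,rw"      # placeholder NFS server IP
      device: ":/export/data"        # placeholder export path
```

The address is only read when the volume is first created on a node. Editing it in the compose file afterwards does not update the existing volume, which is why renaming the volume (or removing it on every node) forces a re-creation with the new IP.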

This behavior of docker stack rm is neither restricted to nfs4, nor to version 18.09.5.
I can't remember it ever being any different up to now. You always have to remove declared volumes manually with docker volume rm.
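
A minimal sketch of that manual flow, assuming a stack called mystack and a volume that shows up as mystack_mnt (adjust both names to your setup):

```sh
# Removing the stack does NOT remove the volumes it declared.
docker stack rm mystack

# Remove the stale volume explicitly (on every node that created it).
docker volume rm mystack_mnt

# Redeploy; the volume is re-created with the current compose settings.
docker stack deploy -c docker-compose.yml mystack
```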

Depending on the volume plugin, the result of docker volume rm will vary between just de-registering the volume while leaving the source location untouched, and de-registering it and also deleting the source location.

As long as there is no such thing as a volume (reclaim?) policy, the current behavior might add an additional step, but it also allows other volume plugins to not delete the source data at all.

Though, what I agree on is: this appears to be a documentation bug! I would expect this sort of information to be easy to find in the Docker docs…


Dear meyay,
Thanks for your reply. I agree with you. I think docker swarm must keep something like a cookie (cached state), because the cluster has three manager nodes, and when I run `docker volume rm`, it only removes the volume from that one machine. And apparently `docker stack rm` cannot remove an NFS volume that is defined in the compose file.

I can totally understand that executing docker volume rm on each and every node feels wrong. Though, the default nfs driver is a local driver; I forgot to mention this earlier. This is why the volume needs to be cleaned up on each node separately.
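
One way to script that per-node cleanup, assuming SSH access from a manager to every node and a volume named mystack_mnt (both the access pattern and the names are only illustrative):

```sh
# Remove the local-driver volume on every swarm node, since each node
# keeps its own copy of the volume definition.
for node in $(docker node ls --format '{{.Hostname}}'); do
  ssh "$node" docker volume rm mystack_mnt || true
done
```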

I am using the containerized StorageOS volume plugin, which is officially not supported with swarm, but still works with some minor flaws. When I delete a volume on any master, it gets deleted on every node. If this is the type of behavior you want, you might want to dig deeper into volume plugins. I am quite sure there are plugins that at least allow removing nfs-backed volumes across the whole cluster.
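
In case it helps, managed volume plugins are generally installed per node and then referenced by name as the volume driver. The plugin name below is only a placeholder; whether a given plugin propagates docker volume rm across the cluster depends entirely on that plugin:

```sh
# Install a volume plugin on a node (placeholder plugin name).
docker plugin install --grant-all-permissions example/nfs-volume-plugin

# Verify it is installed and enabled.
docker plugin ls
```

In the compose file the volume would then declare that plugin name as its driver instead of driver: local.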


It seems my suggestion to use a different volume plugin is not that simple.

Yesterday I tested two volume plugins:
netshare behaves like “driver: local” with “type: nfs” - no rm across nodes, no update of the volume declaration
the rexray/csi-nfs plugin does not even work

The netshare plugin was running quite stable, though I couldn't see any advantage over the built-in local driver used with nfs. Rexray/csi-nfs didn't even work with Docker-CE 19.03.2.

Openstorage looks kind of promising; it also has an official image on Docker Hub, though I couldn't find ANY documentation that gives a hint on how to use it. From what I got from the logs, a `docker volume rm` should delete it on all nodes.
