Azure cloudstor plugin, share not mounting?

Expected behavior

Using the 17.06.0-ce template, I follow the example from the documentation:

docker service create \
  --replicas 5 \
  --name ping1 \
  --mount type=volume,volume-driver=cloudstor:azure,source=sharedvol1,destination=/shareddata \
  alpine ping docker.com

I expect data saved in /shareddata to be stored in the file share in azure storage.

Actual behavior

The storage account and file share are created, but after writing a file to /shareddata the share is empty when I browse it in the portal or from Azure Storage Explorer. Accessing the container running on a different node also shows no sign of the file.
If I delete the service and the volume and then recreate them, the data previously stored in the volume is still there, so it must be persisted somewhere.

Additional Information

Error from /var/log/docker.log:

Sep 7 06:16:06 moby root: time="2017-09-07T06:16:06Z" level=info msg="time="2017-09-07T06:16:06Z" level=error msg="could not fetch metadata: cannot read metadata: open /mnt/cloudstor/cloudstor-metadata/sharedvol1: no such file or directory" name=sharedvol1 operation=get " plugin=63a109788df294000a26a6cedeaba191bd3af43ef4298b25140a18296f122fc9

The sharedvol1 file in cloudstor-metadata contains the following: "share":"1413c6540b0e98bbded38d92c63357b9"
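The "could not fetch metadata" error suggests the plugin looks up the backing Azure file share name from that per-volume metadata file. A minimal sketch of that lookup (the local path and the exact JSON shape are assumptions based on the log line and metadata content shown above):

```shell
# Recreate the per-volume metadata file as reported above
# (using a hypothetical local path instead of /mnt/cloudstor)
mkdir -p /tmp/cloudstor-metadata
printf '{"share":"1413c6540b0e98bbded38d92c63357b9"}' \
  > /tmp/cloudstor-metadata/sharedvol1

# Extract the share name the plugin would mount over SMB for this volume
share=$(sed -n 's/.*"share":"\([^"]*\)".*/\1/p' /tmp/cloudstor-metadata/sharedvol1)
echo "$share"
```

If the get operation fails with "no such file or directory" as in the log above, the metadata file was never created (or never became visible) for that volume on that node.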

This is how the data looks from inside the container:

$ ls /shareddata/ | xargs cat
this is some awesome data stored in the…
cloud

Steps to reproduce the behavior

  1. Deploy from the Docker CE template
  2. Run example from https://docs.docker.com/docker-for-azure/persistent-data-volumes/#share-the-same-volume-among-tasks

We are tracking an issue in the Docker engine [https://github.com/docker/libnetwork/pull/1910] which may occasionally lead to the behavior you describe. What is most likely happening is that, as you reported, the file share is created and mounted over SMB, but the mount point is not correctly propagated from the cloudstor plugin container to the workload containers. We have seen similar behavior on AWS with 17.06.0: https://github.com/docker/for-aws/issues/94. The result is that the workload containers end up writing the data directly to the local disk of the host under the incorrectly propagated mount point. Stay tuned for a fix in 17.06.2.
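One way to check whether /shareddata inside a task is really backed by the SMB share, or is silently writing to local disk, is to look at the filesystem type reported by `df -T /shareddata` inside the container (run via `docker exec`). A small helper to interpret that output (the function name and the sample output below are illustrative, not from the plugin itself):

```shell
# check_smb: succeeds if the data row of `df -T` output reports a cifs
# filesystem, i.e. an SMB-backed mount; fails otherwise (local disk).
check_smb() {
  echo "$1" | awk 'NR==2 { exit ($2 == "cifs") ? 0 : 1 }'
}

# Sample `df -T /shareddata` output from a correctly propagated mount
# (storage account and share name are illustrative)
sample='Filesystem Type 1K-blocks Used Available Use% Mounted on
//acct.file.core.windows.net/1413c6540b0e98bbded38d92c63357b9 cifs 5242880 64 5242816 1% /shareddata'

if check_smb "$sample"; then echo "SMB-backed"; else echo "local disk"; fi
```

An overlay or ext4 type in that row would mean writes are landing on the host's local disk, which matches the propagation bug described above.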

Seems to be mounting after upgrading to 17.06.2.