Docker Community Forums

Share and learn in the Docker community.

"Cloudstor" volume plugin missing


(Markvr) #1

I’ve just provisioned a Swarm using the beta channel, and was intending to try the “Cloudstor” plugin as documented at https://docs.docker.com/docker-for-azure/persistent-data-volumes/

However, after logging into the manager and running docker plugin ls, no plugins are listed.

I’m not really sure where else to go with this, but thought I’d check on here if anyone had any ideas? Otherwise I’ll raise it as a bug with docker.


(Markvr) #2

Tried installing this manually with:
docker plugin install docker4x/cloudstor:azure-v1.13.1-ga-2

But creating a service that uses it just writes the following to docker.log:
Failed to initialize Cloudstor: unsupported cloud platform


(Markvr) #3

Also, I’d be interested to know how this plugin relates to the Microsoft-provided one at https://github.com/Azure/azurefile-dockervolumedriver ? I’m assuming that Docker would prefer the “Cloudstor” one, but there doesn’t seem to be any documentation (or even a GitHub page) for it, other than the brief notes on the docker-for-azure webpages.


(Mavenugo) #4

@markvr are you trying Docker for Azure Beta?

Cloudstor is supported only in Beta at the moment.


(Deep Debroy) #5

@markvr Installing cloudstor manually using docker plugin install requires a couple of parameters that the init service container sets up by calling the appropriate Azure APIs. Can you provide us with the logs from the init container: docker logs $(docker ps -a | grep init-azure | cut -d' ' -f1)

Cloudstor does use Azure File Storage as the backing store, but the code and packaging are slightly different from https://github.com/Azure/azurefile-dockervolumedriver. Cloudstor is packaged as a managed plugin (https://docs.docker.com/engine/extend/#/docker-engine-managed-plugin-system), while https://github.com/Azure/azurefile-dockervolumedriver follows the legacy plugin model. Cloudstor also hides various details from the user/admin around CIFS options, the backing share, etc., so that commands/scripts creating swarm services with Cloudstor-backed volumes work across all cloud platforms supported by Cloudstor without having to change anything.


(Markvr) #6

I thought I had used the beta channel, but I’ve just destroyed and recreated the resource group and it has worked this time, so I’m guessing I must have clicked the wrong button or something the first time…

I’m really keen to move our internal systems to docker on Azure, and so this is really good as it removes another barrier to entry. I was going to use the Microsoft docker volume driver, but that looks tricky to run on the “docker-for-azure” platform.

How does the name of the volume - sharedvol1 - in the example relate to the storage that is created in Azure? At the moment it seems the storage is provided from an Azure storage account with a pretty random name, and the share name is also just a GUID. This makes it quite hard to map backend storage to frontend services - could the share name include the service name in it?

Also is it possible to create storage accounts in advance (e.g. that has a particular replication config), and specify the service to use that?


(Deep Debroy) #7

@markvr I was curious if there is a specific reason you want to know how the storage accounts and file shares are getting mapped to the volumes you are creating. In our initial release, we wanted to make things as simple as possible and not have the user specify share names/storage accounts. We can add these as extra/advanced options in subsequent releases.

Today we use the md5 hash of the volume name, since that allows you to use any syntax supported in docker service templates as the source volume name (for example, names containing "."s) without having to worry about the name also complying with Azure’s naming requirements for file storage (which would reject a name containing "."s).

If you are curious about the volume name to Azure Storage Account/File Storage mapping today, you can:

  1. Determine the Storage Account in use with the command: docker inspect $(docker ps | grep guide-azure | cut -d' ' -f 1) | grep _INFO_STORAGE_
  2. Determine the file share used within the above Storage Account by calculating the md5 hash of the volume name specified as the source, using the command: echo -n sourcevol | md5sum. For example, for sharedvol1 you will have a file share named 1413c6540b0e98bbded38d92c63357b9
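To illustrate step 2, here is a small shell sketch that maps a volume name to the share name derived from it (the helper function name is my own invention; only the md5 scheme itself comes from the explanation above):

```shell
#!/bin/sh
# Hypothetical helper: compute the backing file share name for a given
# Cloudstor volume name, per the md5 scheme described above.
share_name_for_volume() {
    # md5sum prints "<hash>  -"; keep only the hash itself.
    printf '%s' "$1" | md5sum | cut -d' ' -f1
}

share_name_for_volume sharedvol1
```

The same function works for any volume name, including ones with characters (like ".") that Azure would reject in a raw share name.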

(Ajhewett) #8

I am also having problems provisioning a swarm cluster with the Cloudstor volume plugin installed. I have used the beta channel url https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fdownload.docker.com%2Fazure%2Fedge%2FDocker.tmpl but when ssh’ing into a manager node no plugins are shown with docker plugin ls.

Which URL did you use?
Which data center? (I am provisioning to westeurope).

Output from docker plugin ls and docker-diagnose:

swarm-manager000000:~$ docker plugin ls
ID                  NAME                DESCRIPTION         ENABLED
swarm-manager000000:~$ docker-diagnose
OK hostname=swarm-manager000000 session=1488451532-KPUjWKpxH7dBx7yJLcWrtHqrLqqzlzdN
OK hostname=swarm-manager000001 session=1488451532-KPUjWKpxH7dBx7yJLcWrtHqrLqqzlzdN
OK hostname=swarm-manager000002 session=1488451532-KPUjWKpxH7dBx7yJLcWrtHqrLqqzlzdN
OK hostname=swarm-worker000000 session=1488451532-KPUjWKpxH7dBx7yJLcWrtHqrLqqzlzdN
OK hostname=swarm-worker000001 session=1488451532-KPUjWKpxH7dBx7yJLcWrtHqrLqqzlzdN
OK hostname=swarm-worker000002 session=1488451532-KPUjWKpxH7dBx7yJLcWrtHqrLqqzlzdN
Done requesting diagnostics.
Your diagnostics session ID is 1488451532-KPUjWKpxH7dBx7yJLcWrtHqrLqqzlzdN
Please provide this session ID to the maintainer debugging your issue.
swarm-manager000000:~$

(Markvr) #9

@ddebroy
I totally agree with wanting to make things as simple as possible for people to get started - that’s exactly why I started investigating docker-for-azure in the first place. Having lots of stuff done automatically behind the scenes makes me slightly nervous when planning how to move into production though. If the automation fails for any reason, it’s hard to know where to start looking. For prod systems, the deployment is only the start, most of the work is in the ongoing maintenance and updating.

This especially applies to data storage, because we need to ensure it is backed up and replicated correctly. Having the data stored somewhere in one of the many storage accounts in the resource group (which all have pretty opaque names!) makes it harder to ensure that.

The code snippets you’ve supplied help a lot - if possible, for a future release I’d suggest creating storage accounts with more descriptive names (e.g. the logging account at least has the word “logs” at the end), and trying to add a sanitised version of the volume name to the share name. Azure naming restrictions are a real PITA though - why they are so restrictive is beyond me. At least allow hyphens…!

It’s a shame docker4x isn’t open source, so we can’t dig through the code as a last resort. I guess Docker have their reasons for that, but it makes it harder to work out what is going on automatically behind the scenes.


(Markvr) #10

I used uksouth and the edge template you linked to. I made a mistake the first time because I need to alter the templates to deploy into an existing VNET (we have a VPN back to our onsite DC) and hadn’t managed to save my diff between the templates correctly. Once I fixed that error it worked OK. If you check /var/log/docker.log there might be something useful in there.
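A minimal sketch of that kind of check (the log lines below are made up for illustration; on a real node you would point this at /var/log/docker.log instead of the sample file):

```shell
#!/bin/sh
# Sketch: scan the daemon log for Cloudstor initialization errors.
# A made-up sample log keeps the snippet self-contained; substitute
# /var/log/docker.log on an actual Docker for Azure node.
LOG=/tmp/sample-docker.log
cat > "$LOG" <<'EOF'
level=info msg="Docker daemon started"
level=error msg="Failed to initialize Cloudstor: unsupported cloud platform"
EOF

grep 'Failed to initialize Cloudstor' "$LOG"
```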


(Deep Debroy) #11

@markvr Thanks for the feedback. As you pointed out, Azure naming restrictions are quite stringent. In our next release, we will provide an ability to specify the share name as an optional parameter during volume creation so that one can easily identify the backing file storage (as long as the Azure naming restrictions are adhered to when issuing the volume/service creation commands).

Identifying the storage account hosting the file shares backing cloudstor volumes needs to happen just once per resource group creation, and it will always be the storage account that has the table “swarminfo”. We will try to see if we can report this as an output.


(Yshay) #12

Is it possible to install cloudstor for azure on existing “stable” version swarm?


(Deep Debroy) #13

@ajhewett Thanks for sending the logs. Looks like there was a bug in the way the plugin was tagged and therefore failed to install on the latest edge. If you redeploy, you should be able to see cloudstor installed.


(Deep Debroy) #14

@yshay While that is not quite supported at the moment, you can go ahead and install cloudstor with something like the below on each node of the swarm in the Azure resource group:
docker plugin install --alias cloudstor:azure --grant-all-permissions docker4x/cloudstor:azure-v17.03.0-ce CLOUD_PLATFORM=AZURE AZURE_STORAGE_ACCOUNT_KEY="$SA_KEY" AZURE_STORAGE_ACCOUNT="$SA_NAME"

where SA_NAME can be the name of one of the storage accounts already provisioned. You can set this to the output of docker inspect $(docker ps | grep guide-azure | cut -d' ' -f 1) | grep _INFO_STORAGE_

and SA_KEY is the Storage Account Key for the above storage account that you can obtain from the Azure Portal/CLI.
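Pulling the above together, here is a sketch of the manual install as a script. Only the docker plugin install command itself comes from this thread; the Azure CLI step and the placeholder names are my own assumptions:

```shell
#!/bin/sh
# Sketch: install Cloudstor manually on a swarm node, per the steps above.
# SA_NAME/SA_KEY are placeholders for your deployment; the commented
# `az storage account keys list` step assumes the Azure CLI is available.
PLUGIN_IMAGE="docker4x/cloudstor:azure-v17.03.0-ce"
PLUGIN_ALIAS="cloudstor:azure"

install_cloudstor() {
    # $1 = storage account name, $2 = storage account key
    docker plugin install --alias "$PLUGIN_ALIAS" --grant-all-permissions \
        "$PLUGIN_IMAGE" \
        CLOUD_PLATFORM=AZURE \
        AZURE_STORAGE_ACCOUNT="$1" \
        AZURE_STORAGE_ACCOUNT_KEY="$2"
}

# On a real node you would fetch the key and install, e.g.:
#   SA_KEY=$(az storage account keys list --resource-group "$RESOURCE_GROUP" \
#            --account-name "$SA_NAME" --query '[0].value' --output tsv)
#   install_cloudstor "$SA_NAME" "$SA_KEY"
echo "would install $PLUGIN_IMAGE as $PLUGIN_ALIAS"
```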


(Yshay) #15

@ddebroy, sounds simple enough, I’ll try it.
Currently, we have an autoscale policy assigned to the worker VMSS, so new instances can come and go.
Is it possible to execute the plugin installation via swarm-exec so new machines install the add-on automatically?
Thanks for the quick response.


(Deep Debroy) #16

@yshay You should be able to use swarm-exec to invoke the plugin installation command for cloudstor discussed above on all the swarm nodes.

I am curious what policies you have set for auto-scaling, and whether you are using Azure’s VMSS autoscaling capability or a custom solution directly invoking Azure APIs to scale the workers. Please note that the custom Linux distro, Moby, that we run on the nodes is not integrated with the Azure agent to report metrics like CPU/memory for Azure’s native VMSS autoscaling purposes.


(Yshay) #17

Actually, we did use VMSS native autoscaling (we used it with the docker4azure image 1.13.1-2) and it seems to work properly under load testing and a few days of low production traffic.
We’ve tested it with a policy of adding instances when CPU > 70% in a 5-minute window and removing instances when CPU < 30%.

We’ve also integrated the Microsoft OMS containers solution in the hope of using a more fine-grained auto-scaling solution in the future, as the Azure VMSS autoscaling solution is limited to specific metrics and time windows and obviously doesn’t provide a solution for container auto-scaling (we’re currently just setting the service mode to global).


(Maartenvanveen) #18

Is there a feature set that I can consult about the (im)possibilities of the cloudstor plugin?

I noticed that one of the downsides is that I can’t set permissions or change ownership on a cloudstor volume from within a container.

I suspect this is due to the CIFS backend used for Azure file storage, but this is merely an assumption and I might just be using the wrong approach altogether.

Can someone confirm this, or better yet, does someone have a solution or workaround?

Thanks


(Deep Debroy) #19

@maartenvanveen This is indeed something not supported at the moment due to CIFS. Let’s continue this discussion @ Changing ownership on cloudstor storage


(Assurehedgedoc) #20

On Docker for Azure 17.06, cloudstor is still not installed on creation. I used the command docker plugin install --alias cloudstor:azure --grant-all-permissions docker4x/cloudstor:17.06.0-ce-azure1 to install it manually, but I get the error “Error response from daemon: dial unix /run/docker/plugins/bb0901721330d296e19a5f33b7efb3d04e493f726f104c3e82c09890b0aba227/cloudstor.sock: connect: no such file or directory”

Any suggestions?