How to mount an EBS volume on the Docker Swarm Manager?

Expected behavior

I create an EBS volume and attach it to the manager node with the AWS Console. I expect to see it in /dev/xvdf on the manager.

Actual behavior

The volume does not appear in /dev/xvdf

Additional Information

xvdf is listed in /proc/partitions and /sys/block

Steps to reproduce the behavior

  1. Create swarm using Docker for AWS stable version CloudFormation template
  2. Create EBS volume in AWS Console
  3. Attach it to the manager using AWS Console
  4. Wait for volume to be successfully attached
  5. ssh into the manager
  6. ls /dev

I’d suggest using the provided “Cloudstor” functionality (volume plugin) instead of attaching your own EBS volumes. This should allow you to create per-task-instance EBS volumes and more.
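As a sketch of what per-task volumes look like with Cloudstor (the service name and image here are hypothetical, and the templatized `source` syntax assumes a Cloudstor/Swarm version that supports volume-name templates):

```shell
# Create one EBS-backed volume per task replica by templatizing the volume name.
# {{.Service.Name}} and {{.Task.Slot}} are expanded by Swarm for each task,
# so each replica gets its own cloudstor volume mounted at /data.
docker service create \
  --replicas 3 \
  --name my-service \
  --mount type=volume,volume-driver=cloudstor:aws,source={{.Service.Name}}-{{.Task.Slot}}-data,destination=/data \
  alpine ping docker.com
```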

Is it possible to encrypt volumes with Cloudstor?

I don’t think it’s currently supported unless you can toggle encryption settings while the volume is attached, but that’s an interesting suggestion for an option.

Quick update on this as well: I was mistaken, and currently we only support EFS volume creation via Cloudstor, not EBS quite yet.

Does Cloudstor support EBS yet?


Thanks @friism. I am trying to get this to work with our BYOS in Docker Cloud and AWS. I can get the plugin installed, but I am having trouble with the EBS parameters in the plugin settings. EFS is straightforward… Has anyone got EBS working?

Please post more details of what you’re trying to do and what’s not working.

@wiziah you can install cloudstor to use only EBS using the following commandline:

docker plugin install --alias cloudstor:aws --grant-all-permissions docker4x/cloudstor:17.06.0-ce-aws1 CLOUD_PLATFORM=AWS AWS_REGION=us-east-1 AWS_STACK_ID=some_unique_id_for_swarm EFS_SUPPORTED=0 DEBUG=1
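If the install succeeds, you can verify the plugin is enabled and that the environment settings took effect before creating any volumes (a quick sanity check, not required):

```shell
# List installed plugins; cloudstor:aws should appear with ENABLED = true
docker plugin ls

# Inspect the plugin's applied settings to confirm e.g. EFS_SUPPORTED=0
docker plugin inspect cloudstor:aws --format '{{.Settings.Env}}'
```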

Thanks @ddebroy, that worked. I currently have two issues, but they might be related to the same root cause. 1) I have created 2 cloudstor volumes manually/externally using the following:

docker volume create \
  -d "cloudstor:aws" \
  --opt ebstype=gp2 \
  --opt size=10 \
  --opt iops=1000 \
  --opt backing=relocatable \
  data-volume-1

docker volume create \
  -d "cloudstor:aws" \
  --opt ebstype=gp2 \
  --opt size=10 \
  --opt iops=1000 \
  --opt backing=relocatable \
  data-volume-2
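As a quick sanity check after creating them, you can confirm the driver registered the volumes and that the options were applied (assuming the volumes were named `data-volume-1` and `data-volume-2` to match the compose file):

```shell
# List only the volumes owned by the cloudstor driver
docker volume ls --filter driver=cloudstor:aws

# Inspect one volume to confirm the size/ebstype/backing options took effect
docker volume inspect data-volume-1
```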

In my docker-compose.yml I am connecting the volumes and applying them to the Redis services like this:

version: '3'

services:
  redis-master:
    container_name: redis-master
    image: "redis:4.0.0"
    command: "redis-server"
    volumes:
      - data-volume-1:/data
    ports:
      - "6379:6379"

  redis-slave:
    container_name: redis-slave
    image: "redis:4.0.0"
    command: "redis-server --slaveof redis-master 6379"
    volumes:
      - data-volume-2:/data

volumes:
  data-volume-1:
    external: true
  data-volume-2:
    external: true

I am deploying these services across the swarm on AWS within the same AZ. Most of the time it works, but sometimes I get a volume mounting error, and I cannot see enough of the error in docker container ps to narrow down the search. When I look into sudo cat /var/log/upstart/docker.log | grep "VolumeDriver.Mount: error mount" I see:

time="2017-07-30T13:26:16.675364503Z" level=error msg="fatal task error" error="VolumeDriver.Mount: error mounting volume: cannot stat /dev/mqueue: stat /dev/mqueue: no such file or directory" module="node/agent/taskmanager"

As I mentioned, I don’t get this error all of the time.

  2. When I move the redis-slave to another AWS AZ, I get the following error:

time="2017-07-30T13:26:16.675364503Z" level=error msg="fatal task error" error="VolumeDriver.Mount: error mounting volume: cannot stat /dev/mqueue: stat /dev/mqueue: no such file or directory" module="node/agent/taskmanager"

I thought that Cloudstor would work across AZs with EBS-backed volumes? As per the help docs:

If the swarm task gets rescheduled to a node in a different availability zone, Cloudstor transfers the contents of the backing EBS volume to the destination availability zone using a snapshot, and cleans up the EBS volume in the original availability zone. To minimize the time necessary to create the snapshot to transfer data across availability zones, Cloudstor periodically takes snapshots of EBS volumes to ensure there is never a large number of writes that need to be transferred as part of the final snapshot when transferring the EBS volume across availability zones.
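If you want to observe that snapshot-based transfer yourself, one way (assuming the AWS CLI is configured on a machine with access to the swarm's account; the `--query` fields are standard `describe-snapshots` output) is:

```shell
# List your own snapshots; Cloudstor's periodic snapshots of the backing EBS
# volume should show up here during and after a cross-AZ move.
aws ec2 describe-snapshots \
  --owner-ids self \
  --query 'Snapshots[].{Id:SnapshotId,Vol:VolumeId,Started:StartTime,State:State}' \
  --output table
```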

If anyone can assist that would be great.

@wiziah any chance you can run docker-diagnose and post the ID here? The /dev/mqueue error you reported seems unrelated to anything around cloudstor or EBS. However I am curious if some container you spun up needed a volume backed on /dev/mqueue and that’s what is throwing the mount errors around /dev/mqueue above.

Cloudstor volumes are indeed designed to work across AZs as documented.

I got everything working.

Pretty chuffed really. It works great; it takes about 2 mins for the snapshot in AWS to be created and mounted when I terminate the instance on which a service is running. In this case, Redis.

Originally I thought that the /dev/mqueue directory wasn’t created on the host, so I thought I had to make the folder. But then I realised the snapshot of the volume (when checking the volumes in AWS) hadn’t been created, hence the missing folder (I think so). I am going to test again tomorrow morning to validate this.

Happy to run docker-diagnose but how do you run this command? Apologies…

@wiziah glad to hear it’s working now. Never mind about docker-diagnose since it is mainly geared towards (and present in) the Docker4AWS CloudFormation based deployments.

I tested again this morning, deploying a service backed by cloudstor on a new instance.
From looking at the docker.log I saw the following:

INFO[6765] 2017/08/03 02:04:44 DEBUG: Response ec2/DescribeVolumes Details:  
INFO[6765] ---[ RESPONSE ]--------------------------------------  
INFO[6765] HTTP/1.1 200 OK                               
INFO[6765] Transfer-Encoding: chunked                    
INFO[6765] Content-Type: text/xml;charset=UTF-8          
INFO[6765] Date: Thu, 03 Aug 2017 02:04:44 GMT           
INFO[6765] Server: AmazonEC2                             
INFO[6765] Vary: Accept-Encoding                         
INFO[6765] -----------------------------------------------------  
INFO[6765] time="2017-08-03T02:04:44Z" level=error msg="could not evaluate mount points: cannot 
stat /dev/mqueue: stat /dev/mqueue: no such file or directory" name=dc-doggies-data-1 
INFO[6765] time="2017-08-03T02:04:44Z" level=error msg="error mounting volume: cannot stat 
/dev/mqueue: stat /dev/mqueue: no such file or directory" name=dc-doggies-data-1 operation=mount   
time="2017-08-03T02:04:44.878046312Z" level=error msg="fatal task error" 
error="VolumeDriver.Mount: error mounting volume: cannot stat /dev/mqueue: stat /dev/mqueue: no 
such file or directory" module="node/agent/taskmanager"
time="2017-08-03T02:04:54.216769218Z" level=error msg="Failed to put log events" 
errorCode=InvalidSequenceTokenException logGroupName=dubberconnect-doggies 
logStreamName=dubberconenct-doggies-nginx message="The given sequenceToken is invalid. The 
next expected sequenceToken is: 
49573897624615038312995445582113348617071747996378466290" origError=<nil>
time="2017-08-03T02:04:55.051175341Z" level=warning msg="failed to deactivate service binding for 
container dc-doggies_red-master.1.tmov52kjbts6wlh0eept6osv1" error="No such container: dc-

As you can see the /dev/mqueue folder is missing.

Once I create this folder on each host, I can re-deploy the stack successfully.

Can you propose any workarounds, besides updating our host image to include this directory structure?

I think I have located the problem in cloudstor. Thanks for sharing the detailed logs. We will have a fix for this shortly. Your temporary workaround is to make sure /dev/mqueue is present/mounted in the host OS/distro you are using.
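Until the fix lands, a minimal version of that workaround (run as root on each host; the `mount` step assumes the kernel has POSIX message queue support) would be something like:

```shell
# Cloudstor's mount-point scan stats /dev/mqueue, so the path must exist.
mkdir -p /dev/mqueue

# Mount the mqueue filesystem there if it isn't already mounted.
mountpoint -q /dev/mqueue || mount -t mqueue none /dev/mqueue
```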

Thanks @ddebroy for that. I have included creating that directory as part of our deployment. So I can track the fix, can you link me to the GitHub reference etc.?


I’m interested in the original question as well: ‘How do I mount an EBS volume on the manager instance?’ I am using Cloudstor for my containers, but I need to move some data that sits in an EBS snapshot.
When I ssh into a manager, I am in the manager container, where I can’t find the volume I just attached to the manager EC2 instance in the EC2 console. Is there any way to drop to the host OS, or otherwise mount the volume WITHOUT creating another EC2 instance as an NFS share?
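One common trick for getting a host shell on Docker for AWS nodes (since SSH lands you in a container) is to run a privileged container that enters the host's namespaces. The image below is a third-party tool and the device name is an assumption; adjust both for your setup:

```shell
# Enter PID 1's namespaces to get a shell on the host OS itself
docker run --rm -it --privileged --pid=host justincormack/nsenter1

# From the host shell, mount the attached EBS device. The device name may
# differ (e.g. /dev/xvdf, or /dev/nvme1n1 on nitro instance types).
mkdir -p /mnt/ebs
mount /dev/xvdf /mnt/ebs
```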

Hey @wiziah, can you help me out a bit with CloudFormation?

I have been working with Docker Swarm for the last 6 months, and now I have to deploy my docker-compose.yml on AWS using the “Docker for AWS” CloudFormation stack template.
I saw and followed this tutorial, but in my project I am mainly facing issues when I have to bind a volume.

I ssh in my manager instance by command:

ssh -i {your-pem-key} docker@{manager-instance-public-ip}

Now, after getting into the manager, I executed my docker-compose.yml in the same stack that was created earlier by the Docker for AWS template.


My docker-compose file is such that I need to bind an external Django project’s code into the services/containers which will be created by the stack,

but CloudFormation provisions EBS (Elastic Block Store) and EFS (Elastic File System) storage, and I don’t know how to use them to put my project in them and get it accessed by the stack.

@ddebroy Your help also would be great here.