I'd suggest using the provided "Cloudstor" functionality (volume plugin) outlined here: https://docs.docker.com/docker-for-aws/persistent-data-volumes/ instead of attaching your own EBS volumes. This should allow you to create per-task-instance EBS volumes and more.
I don't think it's currently supported unless you can toggle encryption settings while the volume is attached, but that's an interesting suggestion for an option.
Thanks @friism. I am trying to get this to work with our BYOS in Docker Cloud and AWS. I can get the plugin installed, but I am having trouble with the EBS parameters for the plugin settings. EFS is straightforward… Has anyone got EBS working?
Thanks @ddebroy, that worked. I currently have two issues… but they might be related to the same issue. 1) I have created 2 cloudstor volumes manually and externally using the following:
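(The exact commands were not included above; for reference, a relocatable, EBS-backed cloudstor volume is typically created along these lines. The volume names, sizes, and EBS type here are placeholders, not the poster's actual values:)

```shell
# Create two EBS-backed ("relocatable") cloudstor volumes.
# Names, sizes, and ebstype are placeholders -- adjust to your setup.
docker volume create -d "cloudstor:aws" \
  --opt backing=relocatable \
  --opt size=25 \
  --opt ebstype=gp2 \
  redis-master-data

docker volume create -d "cloudstor:aws" \
  --opt backing=relocatable \
  --opt size=25 \
  --opt ebstype=gp2 \
  redis-slave-data
```

These commands require a swarm node with the cloudstor plugin already installed and enabled.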
I am deploying these services across the swarm on AWS within the same AZ. Most of the time it works, but sometimes I get a volume mounting error, and I cannot see enough of the error in docker container ps to narrow down the search. When I look into sudo cat /var/log/upstart/docker.log | grep "VolumeDriver.Mount: error mount"… I see
time="2017-07-30T13:26:16.675364503Z" level=error msg="fatal task error" error="VolumeDriver.Mount: error mounting volume: cannot stat /dev/mqueue: stat /dev/mqueue: no such file or directory" module="node/agent/taskmanager" node.id=bqnynv4oegmqo4pwek3iczdew service.id=7b02i8vlirbcyvvildirwoo5p task.id=l6in83pc37tzqar1ohvaamte0
As I mentioned, I don't get this error all of the time…
When I move the redis-slave to another AWS AZ, I get the following error:
time="2017-07-30T13:26:16.675364503Z" level=error msg="fatal task error" error="VolumeDriver.Mount: error mounting volume: cannot stat /dev/mqueue: stat /dev/mqueue: no such file or directory" module="node/agent/taskmanager" node.id=bqnynv4oegmqo4pwek3iczdew service.id=7b02i8vlirbcyvvildirwoo5p task.id=l6in83pc37tzqar1ohvaamte0
I thought that Cloudstor would work across AZs with EBS-backed volumes? As per the help docs:
If the swarm task gets rescheduled to a node in a different availability zone, Cloudstor transfers the contents of the backing EBS volume to the destination availability zone using a snapshot, and cleans up the EBS volume in the original availability zone. To minimize the time necessary to create the snapshot to transfer data across availability zones, Cloudstor periodically takes snapshots of EBS volumes to ensure there is never a large number of writes that need to be transferred as part of the final snapshot when transferring the EBS volume across availability zones.
@wiziah any chance you can run docker-diagnose and post the ID here? The /dev/mqueue error you reported seems unrelated to anything around cloudstor or EBS. However, I am curious if some container you spun up needed a volume backed on /dev/mqueue and that's what is throwing the mount errors around /dev/mqueue above.
Cloudstor volumes are indeed designed to work across AZs as documented.
Pretty chuffed really. It works great… it takes about 2 minutes for the snapshot in AWS to be created and mounted when I terminate the instance on which a service is running. In this case, Redis.
Originally I thought that the directory /dev/mqueue wasn't created on the host… so I thought I had to make the folder… but then I realised the snapshot of the volume (when checking the volumes in AWS) hadn't been created… hence the missing folder (I think so). I am going to test again tomorrow morning to validate this…
Happy to run docker-diagnose, but how do you run this command? Apologies…
@wiziah glad to hear it's working now. Never mind about docker-diagnose since it is mainly geared towards (and present in) the Docker4AWS CloudFormation-based deployments.
I tested again this morning, deploying a service backed by cloudstor on a new instance.
From looking at the docker.log I saw the following:
INFO[6765] 2017/08/03 02:04:44 DEBUG: Response ec2/DescribeVolumes Details:
plugin=b144f15d61ca2bf8c359a9e71536345beaeb57356825d1b7291d91b4eb7ca7df
INFO[6765] ---[ RESPONSE ]--------------------------------------
plugin=b144f15d61ca2bf8c359a9e71536345beaeb57356825d1b7291d91b4eb7ca7df
INFO[6765] HTTP/1.1 200 OK
plugin=b144f15d61ca2bf8c359a9e71536345beaeb57356825d1b7291d91b4eb7ca7df
INFO[6765] Transfer-Encoding: chunked
plugin=b144f15d61ca2bf8c359a9e71536345beaeb57356825d1b7291d91b4eb7ca7df
INFO[6765] Content-Type: text/xml;charset=UTF-8
plugin=b144f15d61ca2bf8c359a9e71536345beaeb57356825d1b7291d91b4eb7ca7df
INFO[6765] Date: Thu, 03 Aug 2017 02:04:44 GMT
plugin=b144f15d61ca2bf8c359a9e71536345beaeb57356825d1b7291d91b4eb7ca7df
INFO[6765] Server: AmazonEC2
plugin=b144f15d61ca2bf8c359a9e71536345beaeb57356825d1b7291d91b4eb7ca7df
INFO[6765] Vary: Accept-Encoding
plugin=b144f15d61ca2bf8c359a9e71536345beaeb57356825d1b7291d91b4eb7ca7df
INFO[6765]
plugin=b144f15d61ca2bf8c359a9e71536345beaeb57356825d1b7291d91b4eb7ca7df
INFO[6765]
plugin=b144f15d61ca2bf8c359a9e71536345beaeb57356825d1b7291d91b4eb7ca7df
INFO[6765] -----------------------------------------------------
plugin=b144f15d61ca2bf8c359a9e71536345beaeb57356825d1b7291d91b4eb7ca7df
INFO[6765] time="2017-08-03T02:04:44Z" level=error msg="could not evaluate mount points: cannot
stat /dev/mqueue: stat /dev/mqueue: no such file or directory" name=dc-doggies-data-1
operation=mountEBS
plugin=b144f15d61ca2bf8c359a9e71536345beaeb57356825d1b7291d91b4eb7ca7df
INFO[6765] time="2017-08-03T02:04:44Z" level=error msg="error mounting volume: cannot stat
/dev/mqueue: stat /dev/mqueue: no such file or directory" name=dc-doggies-data-1 operation=mount
plugin=b144f15d61ca2bf8c359a9e71536345beaeb57356825d1b7291d91b4eb7ca7df
time="2017-08-03T02:04:44.878046312Z" level=error msg="fatal task error"
error="VolumeDriver.Mount: error mounting volume: cannot stat /dev/mqueue: stat /dev/mqueue: no
such file or directory" module="node/agent/taskmanager" node.id=xhjr6kirrga4qs6tuyvx94p9g
service.id=s4vl2twrmx6a2e62dvrt7sk0h task.id=jss6a4j5k40x2qa2tmhoh6do5
time="2017-08-03T02:04:54.216769218Z" level=error msg="Failed to put log events"
errorCode=InvalidSequenceTokenException logGroupName=dubberconnect-doggies
logStreamName=dubberconenct-doggies-nginx message="The given sequenceToken is invalid. The
next expected sequenceToken is:
49573897624615038312995445582113348617071747996378466290" origError=<nil>
time="2017-08-03T02:04:55.051175341Z" level=warning msg="failed to deactivate service binding for
container dc-doggies_red-master.1.tmov52kjbts6wlh0eept6osv1" error="No such container: dc-
doggies_red-master.1.tm
As you can see, the /dev/mqueue folder is missing.
Once I create this folder on each host… I can re-deploy the stack successfully…
Can you propose any workarounds, besides updating our host image to include this directory structure…?
I think I have located the problem in cloudstor. Thanks for sharing the detailed logs. We will have a fix for this shortly. Your temporary workaround is to make sure /dev/mqueue is present/mounted in the host OS/distro you are using.
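To sketch the workaround mentioned above: before deploying, make sure the POSIX message-queue filesystem is mounted at /dev/mqueue on each host. A minimal version (run as root; `mountpoint` comes with util-linux) might look like:

```shell
# Ensure /dev/mqueue exists and has the mqueue filesystem mounted on the host.
if ! mountpoint -q /dev/mqueue; then
  mkdir -p /dev/mqueue
  mount -t mqueue none /dev/mqueue
fi
```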
Thanks @ddebroy for that. I have included creating that directory as part of our deployment. So I can track the fix… can you link me to the GitHub reference etc.?
I'm interested in the original question as well: "How do I mount an EBS volume on the manager instance?" I am using cloudstor for my containers, but I need to move some data that sits in an EBS snapshot.
When I SSH into a manager, I am in the manager container, where I can't find the volume I just attached to the manager EC2 instance in the EC2 console. Is there any way to drop to the host OS, or otherwise mount the volume WITHOUT creating another EC2 instance as an NFS share?
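One commonly used way to reach the host from inside a container-based SSH session (a general Docker trick, not specific to Docker for AWS) is to start a privileged container that enters the host's namespaces with nsenter; from there the attached EBS device can be mounted manually. The device name and mount point below are assumptions, so check lsblk first:

```shell
# Enter the host's mount/UTS/network/IPC namespaces from a privileged container.
docker run -it --rm --privileged --pid=host debian \
  nsenter -t 1 -m -u -n -i sh

# Then, in the resulting host shell, mount the attached EBS device manually
# (/dev/xvdf and the mount point are assumptions -- verify with lsblk):
#   mkdir -p /mnt/ebs-data
#   mount /dev/xvdf /mnt/ebs-data
```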
Hey @wiziah, can you help me out a bit with CloudFormation?
I have been working with Docker Swarm for the last 6 months, and now I have to deploy my docker-compose.yml on AWS using the "Docker for AWS" CloudFormation stack template.
I saw this tutorial and followed it too, but in my project I am mainly facing issues when I have to bind a volume.
Now, after getting into the manager, what I did was execute my docker-compose.yml in the same stack which was created earlier by the Docker for AWS template.
Problem:
My docker-compose file is such that I need to bind external Django project code into the services/containers which will be created by the stack,
but the CloudFormation deployment works with EBS (Elastic Block Store) and EFS (Elastic File System), and I don't know how to use them to put my project in them and have it accessed by the stack.
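One possible direction, sketched under assumptions (the service name, image, and volume name below are made up, not from your project): instead of bind-mounting a host path, declare a cloudstor-backed named volume in the compose file and copy the project code into it after it is mounted:

```yaml
version: "3.3"
services:
  web:
    image: myorg/django-app:latest   # hypothetical image containing the project code
    volumes:
      - appdata:/usr/src/app         # cloudstor volume instead of a host bind mount
volumes:
  appdata:
    driver: "cloudstor:aws"
    driver_opts:
      backing: relocatable           # EBS-backed; backing=shared would use EFS
      size: 10
      ebstype: gp2
```

With EFS-style shared backing, every task can see the same files; with EBS-style relocatable backing, each volume follows a single task, so pick whichever matches how your Django code needs to be shared.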