Backup docker swarm for AWS? /var/lib/docker/swarm missing

Expected behavior

see: https://success.docker.com/article/backup-restore-swarm-manager

Actual behavior

tar: /var/lib/docker/swarm: No such file or directory

Additional Information

The /var/lib/docker/swarm directory does not exist in Docker swarm for AWS when using the latest CloudFormation templates.

Steps to reproduce the behavior

  1. create a new swarm using the latest cloudformation template: https://console.aws.amazon.com/cloudformation/home#/stacks/new?stackName=Docker&templateURL=https://editions-us-east-1.s3.amazonaws.com/aws/edge/Docker.tmpl
  2. ssh to a manager node
  3. run `ls -lha /var/lib`

I was thinking: rather than backing up the swarm, why not just rely on redeploying the stack?

Redeploying the stack is fine, but it wouldn't recreate any networks, secrets, or configs that are defined as external resources. My main question is: where are these things stored, if not under /var/lib/docker/swarm? Why is this different from a normal swarm setup? And what is considered best practice for backing up the cluster (Docker swarm for AWS)?
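For reference, on a standard (non-AWS) swarm the documented procedure boils down to archiving the Raft directory while the engine is stopped. A minimal sketch, assuming the default /var/lib/docker/swarm path from the docs (the very path that's missing on the AWS image); `SWARM_DIR` and `BACKUP` are just placeholder variables:

```shell
# Sketch of the documented manager backup; paths are the docs' defaults.
SWARM_DIR=${SWARM_DIR:-/var/lib/docker/swarm}
BACKUP=${BACKUP:-/tmp/swarm-backup.tar.gz}

# service docker stop    # docs: stop the engine first so the Raft logs are quiescent

if [ -d "$SWARM_DIR" ]; then
  tar czf "$BACKUP" -C "$(dirname "$SWARM_DIR")" "$(basename "$SWARM_DIR")"
  echo "backup written to $BACKUP"
else
  echo "tar: $SWARM_DIR: No such file or directory"   # the symptom on the AWS image
fi

# service docker start   # restart the engine afterwards
```

On the AWS image this fails at the `[ -d ... ]` check, which is exactly the error in the original post.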

This is specifically for “docker swarm on aws” ONLY.

I was thinking of using docker-compose to create the resources first, including the secrets, since they can reference local files, before deploying the stack.

Right now I have a shell script that initializes the base networks before I deploy the stacks. But after writing this, I think I can probably make it more portable with docker-compose.
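For example, instead of a pre-deploy shell script, the same resources can be declared in the stack file itself so `docker stack deploy` creates them. A sketch; the names (appnet, db_password, the nginx service) are placeholders, not from this thread:

```yaml
version: "3.3"

networks:
  appnet:
    driver: overlay
    attachable: true

secrets:
  db_password:
    file: ./db_password.txt   # secrets can reference local files at deploy time

services:
  app:
    image: nginx:alpine       # placeholder service
    networks: [appnet]
    secrets: [db_password]
```

The trade-off is that these resources are then owned by the stack rather than external, so removing the stack removes them too.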

However, that still doesn't address your question of what you need to back up.

Right… The main point is that I should be able to back up the swarm as the documentation states. I'm not sure where else to look or whom else to ask. Very frustrating.

What you'd be looking for is where the Raft data is stored on the managers: https://docs.docker.com/engine/swarm/raft/ I still don't know where it is, though.

According to all the documentation it's stored at /var/lib/docker/swarm, which is missing on the AWS image (my original question).

Did you check whether the mount was overwritten? In my case I had a bind mount in a container, but I set the propagation incorrectly and it "layered" a new mount, hiding some of my other mounts.

It has nothing to do with mounts. The directory is not in the AMI.

Pat

I might be wrong, but when you SSH into an instance you're actually running inside a container called shell. You're not running a shell on the EC2 instance itself. Take a look at this article.
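If that's the case, it should be easy to check from the SSH session itself. A sketch using generic container-detection heuristics (nothing Docker-for-AWS specific, and neither check is a guarantee):

```shell
# Heuristics for "am I inside a container?"
in_container() {
  [ -f /.dockerenv ] && return 0                         # file created by the engine
  grep -q docker /proc/1/cgroup 2>/dev/null || return 1  # PID 1's cgroup path
}

if in_container; then
  echo "this shell appears to be inside a container"
else
  echo "this shell appears to be on the host"
fi

# If the engine is reachable, you could also ask it directly, e.g.:
#   docker info --format '{{ .DockerRootDir }}'             # usually /var/lib/docker
#   docker run --rm -v /:/host alpine ls /host/var/lib/docker
```

If the SSH session really is a container, the second approach (bind-mounting `/` from the host into a throwaway container) would let you see whether the swarm directory exists on the actual EC2 host.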