Docker Community Forums

Share and learn in the Docker community.

Make Docker Swarm use same volumes from Docker-Compose

I have a docker-compose.yml file with some services:

version: '3'

services:
    # PostgreSQL
    db:
        image: postgres:12
        volumes:
            - postgres_data:/var/lib/postgresql/data/

    # MongoDB Server
    mongo:
        image: mongo:4.2
        volumes:
            - mongo_data:/data/db
            - mongo_config:/data/configdb

volumes:
    mongo_data:
    mongo_config:
    postgres_data:

I’ve been working in production starting the services with docker-compose up -d. But now I want to use docker stack deploy --compose-file docker-compose.yml mystack for future scalability.

The problem is that the services are now using new volumes. So I can’t use all the DB data. How could I make Swarm to use the existing docker-compose volumes?

Any kind of help would be really appreciated. Thanks in advance

Why shouldn’t it use new volumes? A volume declared in a docker-compose.yml is registered as {project name}_{volume name} when deployed with docker-compose, and as {stack name}_{volume name} when deployed via stack deploy.
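As a sketch of that naming scheme (the project and stack names below are hypothetical placeholders; substitute your own, and check the real names with docker volume ls):

```shell
# Docker prefixes a compose volume's name with the project name
# (docker-compose) or the stack name (docker stack deploy).
# "myproject" and "mystack" are made-up names for illustration.
compose_volume_name() { echo "${1}_${2}"; }   # {project or stack}_{volume}

compose_volume_name myproject mongo_data   # -> myproject_mongo_data
compose_volume_name mystack mongo_data     # -> mystack_mongo_data
```

This is why the same compose file produces different volumes under the two deployment methods: the prefixes differ, so Docker treats them as distinct volumes.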

You could make the volume declaration in the stack deployment point to the “old” volume by declaring it as external. Here is an example of how it would look for mongo_data:

volumes:
  mongo_data:
    external:
      name: {project name}_mongo_data

Though, bear in mind that Swarm is not responsible for handling any sort of replication of volume data. If you use a multi-node Swarm setup, there is no way around using volumes backed by remote shares accessible by all nodes - use NFS v4 if possible, avoid NFS v3, and prefer NFS v4 over CIFS.
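For reference, a volume backed by an NFS v4 share can be declared with the default local driver. A minimal sketch - the server address and export path are placeholders you would replace with your own:

```yaml
# Sketch only: nfs-server.example.com and /export/mongo_data are placeholders.
volumes:
  mongo_data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=nfs-server.example.com,nfsvers=4,rw"
      device: ":/export/mongo_data"
```

With such a declaration every node mounts the same export, so a container can be rescheduled to another node without losing access to its data.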

With the local volume driver, which is the default, a volume (declaration) is local to the node - the information is neither shared nor synchronised between Swarm nodes. Even if you use a volume backed by a remote share, the node-local volume declaration will be created the first time a container consuming it is started. If you apply changes to the docker-compose declaration of the volume, you can potentially end up with deviating volume declarations amongst nodes - you need to delete the declaration on each node to get a clean start…

A volume declaration is immutable, thus changing values for the volume declaration in the compose.yml will not be applied unless the existing volume declaration is removed first.

If you are not using NFS v4 for remote shares today: start using it!
Stop your current docker-compose deployments, move the data from the volumes or bind-mounts to the NFS exports, modify the volume declarations in the docker-compose.yml to point to the remote shares, and make sure ownership and permissions are correct on the remote share. Start the stack under a different name than your docker-compose project was named to create distinct volume and network names…
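The data move itself can be done with a tar pipe, which preserves ownership, permissions, and timestamps. A minimal sketch - the temporary directories below stand in for the real paths, which in practice would be something like /var/lib/docker/volumes/{project}_{volume}/_data and a locally mounted NFS export:

```shell
# Stand-ins for the real locations (hypothetical for this demo).
old_volume=$(mktemp -d)
nfs_export=$(mktemp -d)
echo "demo" > "$old_volume/collection.wt"

# Copy the volume contents; -p on extraction preserves permissions.
(cd "$old_volume" && tar cf - .) | (cd "$nfs_export" && tar xpf -)

ls "$nfs_export"
```

Run the copy while the containers are stopped so you don't capture files mid-write, and verify the contents on the export before starting the stack.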

1 Like

Thank you so much for your answer and clarifications!
I posted an answer on Stack Overflow before reading yours. It worked, but I didn’t move to new volumes, as there was a lot of data in the old ones and we don’t need data replication between nodes for the moment.

As Kiryl pointed out, the answer was external volumes, but there were several steps to follow, which I list below:

1. Inspect the containers to get the volumes they’re using and get their names. This answer indicates how to do that.
2. Edit the docker-compose.yml file to add the external key with the names of the existing volumes. The result was as follows:

volumes:
    mongo_data:
        external:
            name: existing name…
    mongo_config:
        external:
            name: existing name…
    postgres_data:
        external:
            name: existing name…

3. Start with docker-compose up -d to check that all the volumes are OK. Then shut down with docker-compose down.
4. Deploy the stack with docker stack deploy --compose-file docker-compose.yml mystack.