
Why does docker-compose down delete my volume? How do I define a volume as external?

I use Docker on Ubuntu 16.04 LTS: Docker version 18.06.1-ce, build e68fc7a.

In https://docs.docker.com/compose/reference/down/ it is stated that
docker-compose down by default doesn’t remove volumes.
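
For reference, the behaviour described on that page, as I read it:

docker-compose down            # removes containers and networks created by up;
                               # named volumes declared in the compose file should stay
docker-compose down --volumes  # (or -v) additionally removes named and anonymous volumes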

mongo:
  image: mongo
  restart: always
  networks:
    - mynetwork
  environment:
    MONGO_INITDB_ROOT_USERNAME: user
    MONGO_INITDB_ROOT_PASSWORD: pass
  volumes:
    - ./mongodata:/db/data

When I run docker-compose down without -v or --volumes, why is my data lost? Does Docker delete my volume?

This was already asked in the thread “Docker Volume missing after docker-compose down without -v flag” too, but it hasn’t received any answer yet.

And how do I define the volume as external?

1 Like

The documentation is absolutely unclear, because the “by default” block contradicts the sentences above and below it when it comes to volumes. The truth is that everything that was created with up gets removed/deleted on down.

Services, networks and named volumes declared in a compose.yml are bound to the lifecycle of docker-compose.

You need to create your volume from the CLI (docker volume create ....) and declare it in your compose.yml as external.
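
A minimal sketch of both steps, using mongo_data as an example volume name and a placeholder container path:

# 1. create the volume once, outside of compose
docker volume create mongo_data

# 2. docker-compose.yml (relevant parts only)
services:
  mongo:
    volumes:
      - mongo_data:/path/in/container

volumes:
  mongo_data:
    external: true    # compose re-uses the existing volume and won't remove it on down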

2 Likes

The documentation is absolutely unclear, because the “by default” block contradicts the sentences above and below it when it comes to volumes. The truth is that everything that was created with up gets removed/deleted on down.

This confuses me too.

I was just about to update the thread. I now understand that I can make the volume external by adding external: true.

  ...
  volumes:
    - mongo_data:/db/data
volumes: 
  mongo_data:
    external: true

Then I created the volume using docker volume create --name=mongo_data.

But when I run docker-compose down again, the data doesn’t seem to persist. The volume is still there when I list it using docker volume ls, but when I re-run docker-compose up the data is gone.

But when I just stop the container with docker-compose stop and re-run the service, the data is still there.

Could it be a Mongo problem rather than a Docker one?
Should I add mongo to the title?

2 Likes

We use external named volumes in development, QA and prod a lot. Data inside an external volume is not deleted by docker-compose or docker stack rm {stackname}.

We use the bind type as the source for our named volumes in development and NFS as the source in our QA and prod environments. Both can be done with the included “local” volume driver.
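
A rough sketch of both variants with the local driver (the host path, NFS address and export are placeholders):

volumes:
  appdata_bind:
    driver: local
    driver_opts:
      type: none          # bind-mount an existing host directory
      o: bind
      device: /srv/appdata
  appdata_nfs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.10,rw
      device: ":/exports/appdata"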

1 Like

Okay so apparently I misread the mongo documentation: https://hub.docker.com/_/mongo

The -v /my/own/datadir:/data/db part of the command mounts the /my/own/datadir directory from the underlying host system as /data/db inside the container, where MongoDB by default will write its data files.

It’s /data/db, not /db/data like I wrote before. Therefore, the external volume does indeed work. Thank you!
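
For anyone finding this later, the corrected part of my compose file looks roughly like this:

  ...
  volumes:
    - mongo_data:/data/db   # MongoDB's default data directory inside the container
volumes:
  mongo_data:
    external: true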

1 Like

I’m also facing the same issue, can anyone help please?
My docker-compose.yaml file looks like this:

version: '3.7'
services:
  app:
    container_name: bsf-quiz-app
    image: sapkotasuren/bsf-quiz
    ports:
      - "443:8443"
    depends_on:
      - postgresqldb
  postgresqldb:
    image: postgres:latest
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=postgres
      - POSTGRES_DB=bsfQuiz
      
volumes:
  pgdata:

Has anyone found the solution, or does anyone have/had the same issue?
I’ve already tried creating the named volume separately and adding the external value etc. in the compose file; it doesn’t work for me.
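
For comparison, what I tried was roughly this: the volume created beforehand with docker volume create pgdata, then declared at the bottom of the compose file as

volumes:
  pgdata:
    external: true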

I had the same problem. A colleague helped me, but I don’t know how it was solved.

@zenaku Could you please ask your colleague to have a look at this?

3 Likes