Docker Community Forums

Share and learn in the Docker community.

/var/lib/docker does not exist on host

Expected behavior

Both docker info and docker volume inspect tell me that my volumes are stored under /var/lib/docker/…
I would expect that directory to exist on the host, with the volumes visible under it.

Actual behavior

Yet, /var/lib/docker doesn’t even exist on the host.


OS X 10.11.5

Steps to reproduce the behavior

  1. Run docker info,
  2. or run docker volume create --name test and then docker volume inspect test:
        "Name": "test",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/test/_data",
        "Labels": {},
        "Scope": "local"

It’s hidden inside the xhyve virtual machine that Docker for Mac runs. But you don’t really need to look inside it. If you’re really curious you can use the magic screen command to get a shell in the VM, but mostly it’s all internal Docker details.
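If you do want to poke around, something like the following has worked on some Docker for Mac releases (the tty path has moved between versions, so treat it as an assumption to adapt):

```shell
# Attach to the Docker for Mac VM's serial console. On older releases the
# tty lives at this path; on newer ones it is under .../Data/vms/0/tty.
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty

# Once inside the VM, the path reported by 'docker volume inspect' is real:
ls /var/lib/docker/volumes/test/_data

# Detach with Ctrl-A then Ctrl-\ (kills the screen session) when done.
```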

If you’re trying to access the volume data, I think the usual way is to launch another container: docker run -v test:/test -it ubuntu:16.04 bash will get a shell with the volume data visible in /test.
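If you need the data as ordinary files on the host rather than just a shell, one common pattern (sketched here with the test volume from above; the archive name is arbitrary) is to mount both the volume and a host directory into a throwaway container and copy between them:

```shell
# Copy everything in the 'test' volume into a tarball in the current
# host directory. ubuntu:16.04 matches the example above; any image
# with tar would do.
docker run --rm \
  -v test:/test \
  -v "$PWD":/backup \
  ubuntu:16.04 \
  tar czf /backup/test-data.tar.gz -C /test .
```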


OK, great, I see.

Another, related question. If I have VOLUME in the Dockerfile, or I simply specify the target path on the command line as -v /data, and I run the container with --rm, then the volume gets destroyed when the container exits.

But if I specify a name with -v data:/data then it survives --rm. Is that correct?

If it is, then what is the point of VOLUME?

I’d double check this, but that all sounds correct.

VOLUME does an automatic -v /data (an anonymous volume) even if you don’t specify one explicitly.
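The difference is easy to see from the CLI. A small sketch (alpine is just a convenient tiny image, not anything from this thread):

```shell
# Anonymous volume: -v with only a container path (same effect as a
# Dockerfile VOLUME). Because of --rm, the container AND its anonymous
# volume are removed when it exits.
docker run --rm -v /data alpine true
docker volume ls            # no leftover anonymous volume

# Named volume: survives --rm and can be reused by later containers.
docker run --rm -v data:/data alpine true
docker volume ls            # 'data' is still listed

docker volume rm data       # clean up
```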


Hello, I am experiencing the same head-scratching as @hyperknot: what good is a VOLUME instruction, or its equivalent -v /data, if the data is lost when the container ends? It looks like a totally useless instruction, because persistence only works if you explicitly name the volume yourself, and in that case it works even without VOLUME. So what difference does it make?

Answering my own question: I see the --volumes-from parameter can be used when launching a second container, and it will automatically share all volumes from the first one. For example, if mysqlsvr is a container from the mysql image, whose Dockerfile contains a VOLUME /var/lib/mysql instruction, then I can launch:

docker run --rm --volumes-from mysqlsvr ubuntu ls /var/lib/mysql

And it will show the contents of /var/lib/mysql from the mysqlsvr container.


So, to summarize.
All these statements are true for macOS:

– when you create Docker containers, volumes are not mounted directly on your OS’s filesystem

– instead, they’re mounted inside an “interim” Linux-based VM

– if you need to access your mounted volume, you create a container that mounts the volume you need (see the awesome answer above from crespom)


Same problem

$ docker inspect v1
        "CreatedAt": "2019-03-16T14:09:17Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/v1/_data",
        "Name": "v1",
        "Options": {},
        "Scope": "local"
$  ls /var/lib/docker/volumes/v1/_data
ls: /var/lib/docker/volumes/v1/_data: No such file or directory

I can access those files from a container, but not from the host OS.


This is ridiiiiic… the whole reason I have a volume is because I want to be able to mount it on the host OS… it should be part of the host filesystem!

If you want to mount a volume, let’s say for a MySQL database, and you want the volume data to be accessible in your Host OS file browser, and you want that data to persist regardless of what the container is doing, then this is what you want.

A note on -v (--volume): it accepts more than one form. With a single path such as -v /data it creates an anonymous, Docker-managed volume at that path inside the container; the data exists, but not at any host path you can browse. To share files between the container and your host OS you need the two-part form, host-path:container-path, where the first part is a directory (or a named volume) on your host and the second is the location inside the container. So -v /data isn’t a no-op, it just doesn’t give you a host-visible directory.
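A minimal sketch of the two-part (bind mount) form, assuming /tmp is among Docker for Mac’s shared paths (it is by default on recent versions):

```shell
# Host directory on the left, container path on the right.
mkdir -p /tmp/shared
docker run --rm -v /tmp/shared:/data alpine \
  sh -c 'echo hello > /data/greeting'

# The file written by the container is a normal host file:
cat /tmp/shared/greeting    # -> hello
```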

I use this command for my project. I restart the container all the time and even have to run a separate MySQL container for another project once in a while. Whenever I run this command again to start up my container, after it’s booted my data shows up in the database just as I expect.

docker run --name=mysql-project \
  --rm -it --detach \
  --publish 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=thepass \
  -e ALLOW_EMPTY_PASSWORD=no \
  --volume=/Users/john/Desktop/mysql-data:/bitnami/mysql \
  bitnami/mysql:5.7

--name=mysql-project \ - Sets the name of the container
--rm -it --detach \ - removes the container when I stop it, interactive mode but also it runs in the background
--publish 3306:3306 \ - The first 3306 is the port on my OS, the second 3306 is the port inside the container. Makes it so I can connect to the DB from my NodeJS project running on my Host OS via localhost:3306
-e MYSQL_ROOT_PASSWORD=thepass \ - The password for the MySQL root user
-e ALLOW_EMPTY_PASSWORD=no \ - Sometimes I need empty password == yes, so this is a placeholder
--volume=/Users/john/Desktop/mysql-data:/bitnami/mysql \ - Mounts the /bitnami/mysql folder from inside the container and connects it to the mysql-data folder on my Desktop. This folder persists as expected and I can access these files as expected
bitnami/mysql:5.7 - The image I chose to run; it has JSON support and doesn’t require any other configuration to run.

I also run PHPMyAdmin to interact with the MySQL Database. Use this with the MySQL container to experiment with your data, prove to yourself that it is persisting, and see the files populate the volume on your Host OS when you run it.

docker run --name phpmyadmin \
  --rm -it --detach \
  --link mysql-project:db \
  --publish 8080:80 \
  phpmyadmin/phpmyadmin

After running both of these containers, you should have MySQL running on port 3306, and PHPMyAdmin running HTTP on port 8080. Login to PHPMyAdmin with the MySQL root user and the password that is specified in the mysql docker container command.
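A quick way to prove the persistence to yourself, using the paths and names from the commands above:

```shell
# While mysql-project is running, the host directory fills with MySQL files:
ls /Users/john/Desktop/mysql-data

# Stop the container (it is removed automatically because of --rm)...
docker stop mysql-project

# ...and the data files are still there for the next 'docker run':
ls /Users/john/Desktop/mysql-data
```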

I hope this clears things up for some of you. I’m no expert, but this has been working for me with no problems.


You saved my LIFE!!!
Thank you so much!!
I signed up here because of you!