Docker Community Forums


CentOS 7, devicemapper with direct-lvm, but containers are filling up root filesystem

I am running Docker 1.12.5 on CentOS 7.2. I have converted Docker to use devicemapper with direct-lvm, since devicemapper with the default loop devices is not recommended for production:

[root@docker01 ~]# docker info | less
Containers: 35
 Running: 22
 Paused: 0
 Stopped: 13
Images: 48
Server Version: 1.12.5
Storage Driver: devicemapper
 Pool Name: docker-thinpool
 Pool Blocksize: 524.3 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file: 
 Metadata file: 
 Data Space Used: 16.51 GB
 Data Space Total: 255 GB
 Data Space Available: 238.5 GB
 Metadata Space Used: 6.316 MB
 Metadata Space Total: 2.68 GB
 Metadata Space Available: 2.674 GB
 Thin Pool Minimum Free Space: 25.5 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.135-RHEL7 (2016-09-28)

On one of my Docker hosts, the / volume filled up. When I looked deeper, I noticed that the files for some containers actually reside on /, while others are on an shm mount and seem to use the devicemapper direct-lvm device.

For example, this container uses shm:

[root@docker01 ~]# df -h /var/lib/docker/containers/fdad640aabb19b7a6a8712edecb5a4123d8c29736f939e213cf8d09ca7151b7d/shm/
Filesystem      Size  Used Avail Use% Mounted on
shm              64M     0   64M   0% /var/lib/docker/containers/fdad640aabb19b7a6a8712edecb5a4123d8c29736f939e213cf8d09ca7151b7d/shm
[root@docker01 ~]#

However, this container uses the root filesystem:

[root@docker01 ~]# df -h /var/lib/docker/containers/ffce9d013b613534984f191ccddca83ed6ce949812a1584556b519cd9626f52c/shm/
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vg_01-lv_root  9.8G  9.8G   20K 100% /
[root@docker01 ~]#

Shouldn’t all of these containers use an actual shm filesystem?

I just experienced this myself. Could it be the log files?

Yes, indeed. It turns out I was mistaken.

The disk space was being consumed by the container log files under /var/lib/docker/containers/*/ (the *-json.log files). One of them was 10G on my 11G volume.
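To spot which containers are responsible, something like the following works. This is a sketch, not an official Docker command; DOCKER_DIR is assumed to be the standard /var/lib/docker data root:

```shell
# Sketch: list the largest container JSON log files, biggest first.
# DOCKER_DIR is assumed to be /var/lib/docker on a default install.
DOCKER_DIR="${DOCKER_DIR:-/var/lib/docker}"
du -ah "$DOCKER_DIR"/containers/*/*-json.log 2>/dev/null | sort -rh | head -n 5
```

The directory name under containers/ is the full container ID, so you can match a large log file back to a container with docker ps --no-trunc.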

By default, Docker uses the json-file logging driver and lets logs grow endlessly without any rotation. I changed this to --log-opt max-size=25m --log-opt max-file=2 on my containers, and now everything looks good.
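For reference, the same rotation settings can be made the daemon-wide default in /etc/docker/daemon.json instead of per-container flags (a sketch of the config; note that log options only apply to containers created after the change):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "25m",
    "max-file": "2"
  }
}
```

After editing the file, restart the daemon (systemctl restart docker on CentOS 7) and recreate existing containers so they pick up the new options.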