Volume getting full with Docker for AWS

Expected behavior

The daily cleanup task should prevent this problem from happening.

Actual behavior

Filesystem Size Used Available Use% Mounted on
overlay 39.4G 38.9G 0 100% /
tmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/xvdb1 39.4G 38.9G 0 100% /var/log
/dev/xvdb1 39.4G 38.9G 0 100% /etc/ssh
tmpfs 3.9G 312.3M 3.6G 8% /etc/passwd
tmpfs 3.9G 312.3M 3.6G 8% /etc/shadow
tmpfs 3.9G 312.3M 3.6G 8% /etc/group
tmpfs 3.9G 312.3M 3.6G 8% /home/docker
/dev/xvdb1 39.4G 38.9G 0 100% /etc/resolv.conf
/dev/xvdb1 39.4G 38.9G 0 100% /etc/hostname
/dev/xvdb1 39.4G 38.9G 0 100% /etc/hosts
shm 64.0M 0 64.0M 0% /dev/shm
tmpfs 797.2M 1.2M 796.0M 0% /var/run/docker.sock
tmpfs 3.9G 312.3M 3.6G 8% /usr/bin/docker
tmpfs 3.9G 0 3.9G 0% /proc/kcore
tmpfs 3.9G 0 3.9G 0% /proc/timer_list
tmpfs 3.9G 0 3.9G 0% /proc/sched_debug
tmpfs 3.9G 0 3.9G 0% /sys/firmware

Additional Information

I tried deleting old images, images of stopped containers, logs… but the volumes associated with /dev/xvdb1 always show 100% usage. I only assigned 40 GB to this cluster, and the microservices I am running are actually pretty small. What is even more frustrating is that, with the default way of accessing the nodes, I can't reach /var/lib/docker.
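The cleanup I attempted was along the lines of the standard prune commands (shown here as a rough sketch, the exact invocations may have differed slightly):

# remove stopped containers, unused images and unused volumes
docker container prune -f
docker image prune -a -f
docker volume prune -f
# or everything at once on a recent enough engine
docker system prune -a -f

docker system df shows what the Docker engine itself accounts for, but none of this freed any space on /dev/xvdb1.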

What am I missing here? How can I actually see what is using these 40 GB of storage, and how can I clean it up?

Thank you very much for your help!

In my experience, it could be your logging driver having no rotation configured, so any logs written to the console (what you see with docker service logs) are also stored on the boot drive. To verify this, you can SSH into a node and then run an image:
docker run -it -v /:/host/ ubuntu sh
then list the contents of /host. You should see the node's filesystem, including directories like /var/lib/docker/containers/<CONTAINER ID>/, each of which contains a log file of variable size.
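For example, something like this should point at the offenders (a sketch, assuming the default json-file logging driver, which keeps one <CONTAINER ID>-json.log per container):

docker run -it --rm -v /:/host ubuntu bash
# inside the container: list the largest container log files on the host
du -ah /host/var/lib/docker/containers/ | sort -rh | head -n 20

If large *-json.log files turn out to be the culprit, enabling rotation for the json-file driver stops the growth. One option is a daemon.json along these lines (values are only an illustration) followed by a daemon restart:

{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}

Alternatively, pass --log-opt max-size=10m --log-opt max-file=3 when creating the service. On Docker for AWS the daemon configuration may be managed by the template, so the per-service flags can be the easier route.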