Docker Community Forums

Share and learn in the Docker community.

Disk space full after pruning

docker
#1

Hi. I have a problem when pruning Docker. After building images, I run “docker system prune --volumes -a -f”, but it does not release the space from “/var/lib/docker/overlay2”. See below, please.

Before building the image, disk space & /var/lib/docker/overlay2 size:

ubuntu@xxx:~/tmp/app$ df -hv
Filesystem      Size  Used Avail Use% Mounted on
udev            1.9G     0  1.9G   0% /dev
tmpfs           390M  5.4M  384M   2% /run
/dev/nvme0n1p1   68G   20G   49G  29% /
tmpfs           2.0G  8.0K  2.0G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
tmpfs           390M     0  390M   0% /run/user/1000
ubuntu@xxx:~/tmp/app$ sudo du -hs /var/lib/docker/overlay2
8.0K	/var/lib/docker/overlay2

Building the image

ubuntu@xxx:~/tmp/app$ docker build -f ./Dockerfile .
Sending build context to Docker daemon  1.027MB
Step 1/12 : FROM mhart/alpine-node:9 as base
9: Pulling from mhart/alpine-node
ff3a5c916c92: Pull complete 
c77918da3c72: Pull complete 
Digest: sha256:3c3f7e30beb78b26a602f12da483d4fa0132e6d2b625c3c1b752c8a8f0fbd359
Status: Downloaded newer image for mhart/alpine-node:9
 ---> bd69a82c390b
.....
....
Successfully built d56be87e90a4

Sizes after image built:

ubuntu@xxx:~/tmp/app$ df -hv
Filesystem      Size  Used Avail Use% Mounted on
udev            1.9G     0  1.9G   0% /dev
tmpfs           390M  5.4M  384M   2% /run
/dev/nvme0n1p1   68G   21G   48G  30% /
tmpfs           2.0G  8.0K  2.0G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
tmpfs           390M     0  390M   0% /run/user/1000
ubuntu@xxx:~/tmp/app$ sudo du -hs /var/lib/docker/overlay2
3.9G	/var/lib/docker/overlay2
ubuntu@xxx:~/tmp/app$ docker system prune -af --volumes
Deleted Images:
deleted: sha256:ef4973a39ce03d2cc3de36d8394ee221b2c23ed457ffd35f90ebb28093b40881
deleted: sha256:c3a0682422b4f388c501e29b446ed7a0448ac6d9d28a1b20e336d572ef4ec9a8
deleted: sha256:6988f1bf347999f73b7e505df6b0d40267dc58bbdccc820cdfcecdaa1cb2c274
deleted: sha256:50aaadb4b332c8c1fafbe30c20c8d6f44148cae7094e50a75f6113f27041a880
untagged: alpine:3.6
untagged: alpine@sha256:ee0c0e7b6b20b175f5ffb1bbd48b41d94891b0b1074f2721acb008aafdf25417
deleted: sha256:d56be87e90a44c42d8f1c9deb188172056727eb79521a3702e7791dfd5bfa7b6
deleted: sha256:067da84a69e4a9f8aa825c617c06e8132996eef1573b090baa52cff7546b266d
deleted: sha256:72d4f65fefdf8c9f979bfb7bce56b9ba14bb9e1f7ca676e1186066686bb49291
deleted: sha256:037b7c3cb5390cbed80dfa511ed000c7cf3e48c30fb00adadbc64f724cf5523a
deleted: sha256:796fd2c67a7bc4e64ebaf321b2184daa97d7a24c4976b64db6a245aa5b1a3056
deleted: sha256:7ac06e12664b627d75cd9e43ef590c54523f53b2d116135da9227225f0e2e6a8
deleted: sha256:40993237c00a6d392ca366e5eaa27fcf6f17b652a2a65f3afe33c399fff1fb44
deleted: sha256:bafcf3176fe572fb88f86752e174927f46616a7cf97f2e011f6527a5c1dd68a4
deleted: sha256:bbcc764a2c14c13ddbe14aeb98815cd4f40626e19fb2b6d18d7d85cc86b65048
deleted: sha256:c69cad93cc00af6cc39480846d9dfc3300c580253957324872014bbc6c80e263
deleted: sha256:97a19d85898cf5cba6d2e733e2128c0c3b8ae548d89336b9eea065af19eb7159
deleted: sha256:43773d1dba76c4d537b494a8454558a41729b92aa2ad0feb23521c3e58cd0440
deleted: sha256:721384ec99e56bc06202a738722bcb4b8254b9bbd71c43ab7ad0d9e773ced7ac
untagged: mhart/alpine-node:9
untagged: mhart/alpine-node@sha256:3c3f7e30beb78b26a602f12da483d4fa0132e6d2b625c3c1b752c8a8f0fbd359
deleted: sha256:bd69a82c390b85bfa0c4e646b1a932d4a92c75a7f9fae147fdc92a63962130ff

Total reclaimed space: 122.2MB

It reclaims only 122.2 MB. Sizes after the prune:

ubuntu@xxx:~/tmp/app$ df -hv
Filesystem      Size  Used Avail Use% Mounted on
udev            1.9G     0  1.9G   0% /dev
tmpfs           390M  5.4M  384M   2% /run
/dev/nvme0n1p1   68G   20G   48G  30% /
tmpfs           2.0G  8.0K  2.0G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
tmpfs           390M     0  390M   0% /run/user/1000
ubuntu@xxx:~/tmp/app$ sudo du -hs /var/lib/docker/overlay2
3.7G	/var/lib/docker/overlay2

As you can see, there are 0 containers/images:

ubuntu@xxx:~/tmp/app$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
ubuntu@xxx:~/tmp/app$ docker images -a
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

But the size of “/var/lib/docker/overlay2” has only decreased from 3.9G to 3.7G, and it increases every time I build another image. This is the Dockerfile I’m building:

FROM mhart/alpine-node:9 as base
RUN apk add --no-cache make gcc g++ python
WORKDIR /app
COPY package.json /app
RUN npm install --silent

# Only copy over the node pieces we need from the above image
FROM alpine:3.6
COPY --from=base /usr/bin/node /usr/bin/
COPY --from=base /usr/lib/libgcc* /usr/lib/libstdc* /usr/lib/
WORKDIR /app
COPY --from=base /app .
COPY . .
CMD ["node", "server.js"]

Why isn’t it cleaning the overlay2 folder? How can I handle this? Is there a solution? Is it a known bug?

#2

I am having the same problem… Thanks for posting

#3

Did you find any solution to this?

(Lbi3) #4

Hi, try this:

1 - Stop Docker
2 - Change your Docker root dir
3 - Start Docker
4 - Re-run the prune
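In case it helps, the steps above could look roughly like this on a systemd host. The path `/mnt/docker-data` is only an example; the daemon’s root directory is set with the `data-root` key in `/etc/docker/daemon.json`:

```shell
# 1 - Stop Docker (the socket too, so it is not auto-restarted)
sudo systemctl stop docker docker.socket

# 2 - Change the Docker root dir by setting "data-root" in
#     /etc/docker/daemon.json, e.g. (example path):
#     {
#       "data-root": "/mnt/docker-data"
#     }

# 3 - Start Docker
sudo systemctl start docker

# 4 - Re-run the prune
docker system prune --volumes -a -f
```

Note this makes Docker start from an empty root directory; previously pulled images stay under the old path until you remove it yourself.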

(Tgeliot) #5

Could it be that the Docker process is still holding file handles open, and thus keeping disk space tied up? I would try simply stopping the Docker server process and seeing if the disk space is freed up then.
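One way to check that theory before restarting anything (a sketch, assuming a systemd host): `lsof +L1` lists files that have been deleted but are still held open by a process, so their space remains charged to the filesystem until the process exits.

```shell
# Look for deleted-but-still-open files under the Docker root;
# their space is not freed until the holding process exits:
sudo lsof +L1 2>/dev/null | grep /var/lib/docker

# Stop the daemon (and its socket, on systemd) and re-check usage:
sudo systemctl stop docker docker.socket
sudo du -hs /var/lib/docker/overlay2

# Bring the daemon back up afterwards:
sudo systemctl start docker
```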

(Bhaskardocker) #6

I tried this and got a little space back with this command.

Initial usage:
root@Development:/var/lib/docker/overlay2# du -sch /var/lib/docker/overlay2/*
52K /var/lib/docker/overlay2/00f692f94c35a737e1a7412c4eb2abc34a23f83a8cd1a594f74e1006f11a4e54
40K /var/lib/docker/overlay2/00f692f94c35a737e1a7412c4eb2abc34a23f83a8cd1a594f74e1006f11a4e54-init
36K /var/lib/docker/overlay2/01e456f6dd3479b4f2092a9632a68457c65269dd88ce9af45d9f925
438M /var/lib/docker/overlay2/0362336ec5eeee7798e242347344e7ca73ca2457fea9235f49e6a07
13M /var/lib/docker/overlay2/fe0d6f3b2595a0d815b38fffa9da83d3fba17c73cffb02f9a8a75f19
25M /var/lib/docker/overlay2/fff1ee5fbb541f97e95f25e8a7eefb6d44d827fafc6da91c4504d42c7
588K /var/lib/docker/overlay2/l
16G total
Then I ran:
root@Development:/var/lib/docker/overlay2# docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache
Are you sure you want to continue? [y/N] y
untagged: git.cognidesk.ai:5050/root/dev-cognidesk/slack@sha256:a4451603e629f1fee6ddb27258d9d49ac9abb52e22585d73e113a3d1
deleted: sha256:c1cd22b17f4ba3bfd4a9357f92460fec83712849ebbbfc1e650a7a303e2
deleted: sha256:bc3e60bf2b20cfd694d0982879d7eea8920697255f2b56cac7928f975
deleted: sha256:7aba41a10056b75786f51b94642b97cc15824f5605d4918b192a6a05ce9a

Total reclaimed space: 1.087GB

Usage after the prune:
root@Development:/var/lib/docker/overlay2# du -sch /var/lib/docker/overlay2/*
52K /var/lib/docker/overlay2/00f692f94c412c4eb2abc34a23f83a8cd1a594f74e1006f11a4e54
40K /var/lib/docker/overlay2/00f692f94e1a7412c4eb2abc34a23f83a8cd1a594f74e1006f11a4e54-init
36K /var/lib/docker/overlay2/01e456f6dd3ddf4efd94f2092a9632a68457c65269dd88ce9af45d9f925
438M /var/lib/docker/overlay2/0362336e2ce6ee7798e242347344e7ca73ca2457fea9235f49e6a07
80K /var/lib/docker/overlay2/fdb89b39f58dfd99461a46e481d579d7ffaf1e29bad877a6c70fe4900a
13M /var/lib/docker/overlay2/fe0d6f3b25a0d815b38fffa9da83d3fba17c73cffb02f9a8a75f19
25M /var/lib/docker/overlay2/fff1ee5fbb5ef97e95f25e8a7eefb6d44d827fafc6da91c4504d42c7
488K /var/lib/docker/overlay2
14G total