How can I find out which container is causing my overlay2 folder to grow and crash my VM?

I have a 60 GB Ubuntu VM set up for running Docker and all my containers. But I keep running into the problem that the overlay2 folder keeps growing until it fills the disk and crashes my VM.

This is the ncdu -x output of my Docker folder, only 16 GB filled.

I have tried clearing all stale volumes/containers/etc. with sudo docker system prune -a --volumes --force, but it doesn't free up any space.

How can I monitor the running containers to see which one is causing my VM to crash?

None of the folders in your screenshot belong to the overlay2 folder under the Docker data root. I mention it only because you referred to it as the “docker folder”, but you probably meant a folder where you store data for Docker containers.

If pruning doesn’t free up space, then it is probably not Docker that generated the data, or there is a bug. I would run prune again and check the contents of the overlay2 folder before starting anything.
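
In case it is useful, here is a minimal sketch of how to check that, assuming the default data root /var/lib/docker (the <layer-id> placeholder stands for whichever directory du reports as largest):

# total size of the overlay2 folder
sudo du -sh /var/lib/docker/overlay2

# largest layer directories first
sudo du -s /var/lib/docker/overlay2/* | sort -rn | head -20

# map a layer directory back to the container that owns it
docker ps -q | xargs docker inspect --format '{{.Name}} {{.GraphDriver.Data.UpperDir}}' | grep <layer-id>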

Please also share the output of docker info and docker version, and remove anything from the output you would not want to share, like IP addresses or usernames.

Small addition to @rimelek’s post: please also share the output of docker system df --verbose

Thank you both for your replies. Here is the requested information.

docker system df --verbose:

docker info:

docker version:

Looking forward to your reply! (sorry - could only paste 1 image per response)

Are you working with the VM through a console from which you cannot copy and paste the terminal output?

If you aren’t, please use the </> button to share code, terminal output, and error messages instead of screenshots. That helps others search for similar issues, and lets us read your post more easily and quote parts of it, so we can help you more quickly. You can find a complete guide in the following post: How to format your forum posts


I am also confused, as you wrote you ran docker system prune -a --volumes --force, but you have a lot of images and other objects. Am I correct to assume you ran system df after you ran system prune as quoted in your first post?

My understanding of docker system prune -a --volumes --force is that it only prunes stopped containers and stale images, right?

But to answer your question: yes, I ran system df after I ran docker system prune -a --volumes --force.


Here is the output of the requested data:

root@docker:~# docker system df --verbose
Images space usage:

REPOSITORY                                   TAG          IMAGE ID       CREATED        SIZE      SHARED SIZE   UNIQUE SIZE   CONTAINERS
lscr.io/linuxserver/qbittorrent              latest       e298771e8097   40 hours ago   198MB     0B            197.7MB       1
ghcr.io/jmbannon/ytdl-sub-gui                latest       20bf6f62ddd1   2 days ago     1.61GB    0B            1.613GB       1
qmcgaw/gluetun                               latest       4c9c35425868   3 days ago     41.2MB    8.322MB       32.85MB       1
ghcr.io/immich-app/immich-server             v2           0a5b78aba22a   11 days ago    1.76GB    0B            1.758GB       1
ghcr.io/immich-app/immich-machine-learning   v2           18669a478fea   11 days ago    1.31GB    74.81MB       1.234GB       1
ghcr.io/paperless-ngx/paperless-ngx          latest       6da50fcfcb9a   12 days ago    1.42GB    74.81MB       1.345GB       1
jellyfin/jellyfin                            latest       bbbddcc69edb   2 weeks ago    1.55GB    78.62MB       1.467GB       1
lscr.io/linuxserver/jackett                  latest       e4f0191b49e3   2 weeks ago    187MB     27.71MB       159.2MB       1
lscr.io/linuxserver/speedtest-tracker        latest       4fd4729e8aec   2 weeks ago    177MB     27.71MB       148.9MB       1
nextcloud                                    production   059a826a193f   2 weeks ago    1.43GB    78.62MB       1.348GB       1
lscr.io/linuxserver/bazarr                   latest       53bb33287e15   2 weeks ago    425MB     27.71MB       397.1MB       1
ghcr.io/analogj/scrutiny                     master-web   db03187948ac   3 weeks ago    142MB     74.81MB       66.94MB       1
lscr.io/linuxserver/calibre-web              latest       b1c608413c25   3 weeks ago    784MB     0B            784.2MB       1
lscr.io/linuxserver/radarr                   latest       acbe000ca62b   3 weeks ago    207MB     27.71MB       179MB         1
postgres                                     18           a38f9f77ff88   3 weeks ago    456MB     78.62MB       377.2MB       2
postgres                                     16-alpine    aab18c983342   3 weeks ago    275MB     8.322MB       267.1MB       1
ghcr.io/linkwarden/linkwarden                latest       bd3d3da57534   3 weeks ago    2.45GB    0B            2.446GB       1
oleduc/docker-obsidian-livesync-couchdb      master       7fc346e821c2   4 weeks ago    387MB     74.81MB       312MB         1
ghcr.io/hotio/sonarr                         latest       c0aead38325f   4 weeks ago    259MB     8.322MB       250.5MB       1
ghcr.io/advplyr/audiobookshelf               latest       d285dca24fd6   4 weeks ago    315MB     8.311MB       306.7MB       1
redis                                        8            466e5b1da2ef   5 weeks ago    137MB     0B            137.2MB       1
redis                                        alpine       5d79a9ce29f8   5 weeks ago    70.6MB    8.322MB       62.26MB       1
reaper99/recipya                             nightly      4f5b5ba9f47b   5 weeks ago    180MB     8.311MB       171.8MB       1
freshrss/freshrss                            latest       b4e76b7cb548   6 weeks ago    233MB     74.81MB       158.3MB       1
portainer/portainer-ce                       latest       e6b0d4bc3234   6 weeks ago    186MB     0B            186.3MB       1
ghcr.io/gethomepage/homepage                 latest       bbe936e53ecd   7 weeks ago    267MB     8.311MB       258.9MB       1
<none>                                       <none>       2bb282b81e3d   7 weeks ago    754MB     74.81MB       679.1MB       1
<none>                                       <none>       8c04e462c120   2 months ago   113MB     74.81MB       37.72MB       1
mariadb                                      10.6         3b28c49cfcd1   3 months ago   305MB     0B            305.4MB       1
deluan/navidrome                             latest       302ec693cb08   3 months ago   262MB     0B            262.5MB       1
ghcr.io/wg-easy/wg-easy                      latest       32ec7e2b1355   5 months ago   175MB     0B            174.7MB       1
owncast/owncast                              latest       722a6650c96c   6 months ago   175MB     0B            174.5MB       1
mariadb                                      10.5         7171297ddfbc   6 months ago   395MB     0B            395MB         1
influxdb                                     2.1-alpine   7dc6fb3996b3   3 years ago    226MB     0B            226MB         1
terrestris/projectsend                       latest       16da4220b349   3 years ago    562MB     0B            562.1MB       1

Containers space usage:

CONTAINER ID   IMAGE                                                            COMMAND                  LOCAL VOLUMES   SIZE      CREATED        STATUS                       NAMES
ec357014a3ae   jellyfin/jellyfin:latest                                         "/jellyfin/jellyfin"     0               406kB     29 hours ago   Up About an hour (healthy)   jellyfin
ad2c2e3e3d47   deluan/navidrome:latest                                          "/app/navidrome"         0               0B        29 hours ago   Up About an hour             navidrome-navidrome-1
8c23ec0d6953   lscr.io/linuxserver/qbittorrent:latest                           "/init"                  0               23.9kB    29 hours ago   Up About an hour             qbittorrent
16572b5a8941   qmcgaw/gluetun:latest                                            "/gluetun-entrypoint"    0               3.65kB    29 hours ago   Up About an hour (healthy)   gluetun
c1af6979e9cd   mariadb:10.5                                                     "docker-entrypoint.s"   0               2B        2 days ago     Up About an hour             projectsend-mysql-1
4f4d124d8183   terrestris/projectsend:latest                                    "docker-php-entrypoi"   0               26.8kB    2 days ago     Up About an hour             projectsend-web-1
dd11c2852962   ghcr.io/jmbannon/ytdl-sub-gui:latest                             "/init"                  0               22.6MB    2 days ago     Up About an hour             ytdl-sub
ca58e8f91ded   lscr.io/linuxserver/radarr:latest                                "/init"                  0               23.4kB    5 days ago     Up About an hour             radarr
258e5f03ab92   ghcr.io/hotio/sonarr:latest                                      "/init"                  0               76MB      6 days ago     Up About an hour             sonarr
7f8129c7b6a0   lscr.io/linuxserver/jackett:latest                               "/init"                  0               120MB     7 days ago     Up About an hour             jackett
bafc365c2b69   ghcr.io/paperless-ngx/paperless-ngx:latest                       "/init"                  0               304MB     9 days ago     Up About an hour (healthy)   paperless-webserver-1
791b55502d82   ghcr.io/linkwarden/linkwarden:latest                             "docker-entrypoint.s"   0               4.93MB    10 days ago    Up About an hour (healthy)   linkwarden-linkwarden-1
159f917aa2a6   postgres:16-alpine                                               "docker-entrypoint.s"   0               63B       10 days ago    Up About an hour             linkwarden-postgres-1
d660061d7f2f   ghcr.io/immich-app/immich-server:v2                              "tini -- /bin/bash -"   0               0B        10 days ago    Up About an hour (healthy)   immich_server
3b59cefbbce5   ghcr.io/immich-app/immich-machine-learning:v2                    "tini -- python -m i"   0               1.52GB    10 days ago    Up About an hour (healthy)   immich_machine_learning
68744a19630b   nextcloud:production                                             "/entrypoint.sh apac"   0               962MB     10 days ago    Up About an hour             nextcloud-app-1
730c3d767e5e   mariadb:10.6                                                     "docker-entrypoint.s"   0               97B       10 days ago    Up About an hour             nextcloud-db-1
d06fe212df2c   owncast/owncast:latest                                           "/app/owncast"           0               0B        12 days ago    Up About an hour             owncast-owncast-1
5a0412bb9055   redis:8                                                          "docker-entrypoint.s"   0               0B        2 weeks ago    Up About an hour             paperless-broker-1
6a31f7f851ba   postgres:18                                                      "docker-entrypoint.s"   0               68B       2 weeks ago    Up About an hour             paperless-db-1
b8e29ab4462e   redis:alpine                                                     "docker-entrypoint.s"   1               0B        2 weeks ago    Up About an hour             nextcloud-redis-1
e733f060e0c4   postgres:18                                                      "docker-entrypoint.s"   0               68B       2 weeks ago    Up About an hour             freshrss-db
46e96c2bbbf5   freshrss/freshrss:latest                                         "./Docker/entrypoint"   0               338kB     2 weeks ago    Up About an hour             freshrss
85442490a4dc   ghcr.io/immich-app/postgres:14-vectorchord0.4.3-pgvectors0.2.0   "/usr/local/bin/immi"   0               533B      2 weeks ago    Up About an hour (healthy)   immich_postgres
d7735e6bc63e   valkey/valkey:8-bookworm                                         "docker-entrypoint.s"   1               0B        2 weeks ago    Up About an hour (healthy)   immich_redis
6ce858b98a4a   lscr.io/linuxserver/calibre-web:latest                           "/init"                  0               1.11GB    2 weeks ago    Up About an hour             calibre-web
777db7b1dfb4   ghcr.io/advplyr/audiobookshelf:latest                            "tini -- node index."   0               0B        2 weeks ago    Up About an hour             audiobookshelf-audiobookshelf-1
c6bbb77f836a   lscr.io/linuxserver/speedtest-tracker:latest                     "/init"                  0               2.39MB    2 weeks ago    Up About an hour             speedtest-tracker
6c3a66ad7321   ghcr.io/gethomepage/homepage:latest                              "docker-entrypoint.s"   0               113kB     2 weeks ago    Up About an hour (healthy)   homepage
ba5f50a39a77   lscr.io/linuxserver/bazarr:latest                                "/init"                  0               28.9MB    2 weeks ago    Up About an hour             bazarr
77776d9c0c7c   oleduc/docker-obsidian-livesync-couchdb:master                   "tini -- /docker-ent"   0               190B      2 weeks ago    Up About an hour             obsidian-livesync
e364474b1b1c   reaper99/recipya:nightly                                         "/app/recipya serve"     0               0B        2 weeks ago    Up About an hour             recipya
a7dd325fe1a7   ghcr.io/wg-easy/wg-easy:latest                                   "docker-entrypoint.s"   0               0B        2 weeks ago    Up About an hour (healthy)   wg-easy
8d5405b4912d   ghcr.io/analogj/scrutiny:master-web                              "/opt/scrutiny/bin/s"   0               0B        2 weeks ago    Up About an hour             scrutiny
2af730072be4   influxdb:2.1-alpine                                              "/entrypoint.sh infl"   0               0B        2 weeks ago    Up About an hour             influxdb
e1e6a4ddfb10   portainer/portainer-ce:latest                                    "/portainer"             0               0B        5 weeks ago    Up About an hour             portainer

Local Volumes space usage:

VOLUME NAME                                                        LINKS     SIZE
194a5d16eb314ffd4ca41ef190cfb18af81826ddc869c781800fe04c85808b1d   1         1.444MB
d49ade410e62905090f229652e2118ead11dd940c4ed9554e1b6db4f1ff66d2c   1         88B

Build cache usage: 0B

CACHE ID   CACHE TYPE   SIZE      CREATED   LAST USED   USAGE     SHARED
root@docker:~# 
root@docker:~# docker version
Client: Docker Engine - Community
 Version:           28.5.1
 API version:       1.51
 Go version:        go1.24.8
 Git commit:        e180ab8
 Built:             Wed Oct  8 12:17:24 2025
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          28.5.1
  API version:      1.51 (minimum version 1.24)
  Go version:       go1.24.8
  Git commit:       f8215cc
  Built:            Wed Oct  8 12:17:24 2025
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.7.28
  GitCommit:        b98a3aace656320842a23f4a392a33f46af97866
 runc:
  Version:          1.3.0
  GitCommit:        v1.3.0-0-g4ca628d1
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
root@docker:~# docker info
Client: Docker Engine - Community
 Version:    28.5.1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.29.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.40.2
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 36
  Running: 36
  Paused: 0
  Stopped: 0
 Images: 35
 Server Version: 28.5.1
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 CDI spec directories:
  /etc/cdi
  /var/run/cdi
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: b98a3aace656320842a23f4a392a33f46af97866
 runc version: v1.3.0-0-g4ca628d1
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.1.0-40-amd64
 Operating System: Debian GNU/Linux 12 (bookworm)
 OSType: linux
 Architecture: x86_64
 CPUs: 6
 Total Memory: 7.563GiB
 Name: docker
 ID: 1704418f-d410-4b5a-b17e-57278be22eb4
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: xxx
 Experimental: false
 Insecure Registries:
  ::1/128
  127.0.0.0/8
 Live Restore Enabled: false

Observations:

  • images: there are 7 large images with sizes between 1.31 GB and 2.45 GB
    • this could be a problem if you regularly update an image to create a new container based on it, without removing old images from the local cache. This can happen easily if watchtower is running unattended.
  • containers: it looks like 3 containers write a lot of data into the container filesystem: immich_machine_learning, nextcloud-app-1, and calibre-web. You might want to check with docker diff <container name> where you forgot to add a volume mapping to a container folder, so that data ends up stored outside the container (see the sketch after this list).
    • this could be a problem if it overwrites big files that already exist in the image. Modified and added files end up in the container's copy-on-write layer, and the data is lost if the container is replaced based on a new image.
  • volumes: only 2 anonymous volumes, which indicates you are using bind mounts instead. Is it safe to assume your first screenshot showed the disk usage of the host folders you bind mounted as volumes into the containers?
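
A minimal sketch of how to spot the heavy writers, using the container names from the df output above:

# writable-layer size per running container
docker container ls --size --format 'table {{.Names}}\t{{.Size}}'

# list what a suspicious container wrote into its own filesystem
docker diff immich_machine_learning | head -40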

Generally, with such a small base disk, you might want to consider using a dedicated block device for Docker's data-root folder, so that filling up the Docker data will not render your whole system useless.
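
As an illustration only, assuming the dedicated device is mounted at /mnt/docker-data (the mount point is a placeholder), moving the data root would look roughly like this:

# /etc/docker/daemon.json
{
  "data-root": "/mnt/docker-data"
}

sudo systemctl stop docker
sudo rsync -a /var/lib/docker/ /mnt/docker-data/
sudo systemctl start docker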

I am pretty sure I didn’t answer what you are looking for, but then again the original post didn’t really ask anything specific.

volumes: only 2 anonymous volumes, which indicates you are using bind mounts instead. Is it safe to assume your first screenshot showed the disk usage of the host folders you bind mounted as volumes into the containers? → Yes, correct. Is that bad practice? I thought it was better to do it this way to allow for easy backups of the important folders.

This is the output of docker diff for the immich folder; I'm not sure what to make of it:

root@docker:~# docker diff 3b59cefbbce5
C /root
A /root/.cache
A /root/.cache/huggingface
A /root/.cache/huggingface/xet
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/RY
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/RY/RYLE2RP6pDDqMoQLjFg9beb9V7OrIyieZfJyjl9JCxBkZWZhdWx0
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/RY/RYLE2RP6pDDqMoQLjFg9beb9V7OrIyieZfJyjl9JCxBkZWZhdWx0/owMAAKQDAACWkQAAAAAAAF9wNhI=
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/od
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/od/od9OsntjHtKFyxdK9o0wHatZJr2q3CHFqEhdBihlpplkZWZhdWx0
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/od/od9OsntjHtKFyxdK9o0wHatZJr2q3CHFqEhdBihlpplkZWZhdWx0/AAAAAF8EAADNz_4DAAAAAP64LNg=
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/qy
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/qy/qyuceiKU3J1d7TrwXDs3ASbqyCCM3yVElRgiMdEt66lkZWZhdWx0
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/qy/qyuceiKU3J1d7TrwXDs3ASbqyCCM3yVElRgiMdEt66lkZWZhdWx0/AAAAAHkEAADeAAAEAAAAACYC7xQ=
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/Be
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/Be/Be5U__6xLjTwY2dGj28ZipKQ3bIn4tOzAEycfsxxBytkZWZhdWx0
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/Be/Be5U__6xLjTwY2dGj28ZipKQ3bIn4tOzAEycfsxxBytkZWZhdWx0/AAAAAEkEAAC9kv8DAAAAAE0fZ5o=
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/dm
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/dm/dm5Zqs_3OtGmu3lc0184QAkxbe1jxA-WqTaRALGXPklkZWZhdWx0
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/dm/dm5Zqs_3OtGmu3lc0184QAkxbe1jxA-WqTaRALGXPklkZWZhdWx0/AAAAAF8EAABp1P4DAAAAAGVS6OI=
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/j5
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/j5/j5QREl8B6Lkv43-XmVwULpGelCSYenXizGedBwxs_CVkZWZhdWx0
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/j5/j5QREl8B6Lkv43-XmVwULpGelCSYenXizGedBwxs_CVkZWZhdWx0/AAAAAM8BAACq3bwBAAAAAN10mQI=
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/mU
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/mU/mU9ff3OwN00lUe7BVN0tjMKx0xyfS4dkgudFuN5Qcf5kZWZhdWx0
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/mU/mU9ff3OwN00lUe7BVN0tjMKx0xyfS4dkgudFuN5Qcf5kZWZhdWx0/pgIAAEAEAAACG2wBAAAAANFgWSQ=
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/KK
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/KK/KKVfaRbt_BdXGhHcODFnBNuKrQB-Ju9D-_0_aXoCV-tkZWZhdWx0
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/KK/KKVfaRbt_BdXGhHcODFnBNuKrQB-Ju9D-_0_aXoCV-tkZWZhdWx0/AAAAAD8EAADRwP8DAAAAAEFeCEc=
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/Pb
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/Pb/Pb_m4TWbGwjQPSyCFTRpWzvD6yC1FfLzVAQowsZt3whkZWZhdWx0
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/Pb/Pb_m4TWbGwjQPSyCFTRpWzvD6yC1FfLzVAQowsZt3whkZWZhdWx0/AAAAAFcEAAB7av8DAAAAAEycsBg=
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/Pb/Pbyy26nh3DpFW7pEp39YCUP-w7g-QxNjQvD-QruGW8VkZWZhdWx0
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/Pb/Pbyy26nh3DpFW7pEp39YCUP-w7g-QxNjQvD-QruGW8VkZWZhdWx0/AAAAAEcEAAATJv4DAAAAANaNUPk=
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/RT
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/RT/RTpkx9NeoIk39u35xxDRAvsM9XOpbAJLYMvW0ILnJBNkZWZhdWx0
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/RT/RTpkx9NeoIk39u35xxDRAvsM9XOpbAJLYMvW0ILnJBNkZWZhdWx0/AAAAAEMEAAD6Zv8DAAAAAG6LnAI=
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/4q
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/4q/4qBZkr6E2NaItLKOBjZFLVmKFUKEWv5v0trker1H4UNkZWZhdWx0
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/4q/4qBZkr6E2NaItLKOBjZFLVmKFUKEWv5v0trker1H4UNkZWZhdWx0/AAAAAGEEAABwL_4DAAAAAJvoBXQ=
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/O8
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/O8/O8FCq7p60Q4xI5Qugru_7CK9Zf68KSrcCtTVSvbZyTNkZWZhdWx0
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/O8/O8FCq7p60Q4xI5Qugru_7CK9Zf68KSrcCtTVSvbZyTNkZWZhdWx0/AAAAAIECAABxsGACAAAAACIZrCA=
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/er
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/er/ereter7Ra9NbwHga72QobDjKC2UTL_iveNjYuviwlbBkZWZhdWx0
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/er/ereter7Ra9NbwHga72QobDjKC2UTL_iveNjYuviwlbBkZWZhdWx0/AAAAAB4BAAAzQQIBAAAAAOjs7KI=
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/r5
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/r5/r5zL_Fi8yjDIiZnCnS_YyWHaFinkBRg7KsgtI5Bml8xkZWZhdWx0
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/r5/r5zL_Fi8yjDIiZnCnS_YyWHaFinkBRg7KsgtI5Bml8xkZWZhdWx0/AAAAAEsEAABQr_8DAAAAAMdldTU=
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/y5
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/y5/y55DKIwiRUxLpdxRPvqO9aBjMhwiBzNv9tkxyQsCoQ1kZWZhdWx0
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/y5/y55DKIwiRUxLpdxRPvqO9aBjMhwiBzNv9tkxyQsCoQ1kZWZhdWx0/JwEAACwBAAAYYwQAAAAAAOlqXFM=
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/KH
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/KH/KH3qmEVs8pqUsWMAstlYtuVuecffdaUoDtDfSLdyCIxkZWZhdWx0
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/chunk-cache/KH/KH3qmEVs8pqUsWMAstlYtuVuecffdaUoDtDfSLdyCIxkZWZhdWx0/AAAAAAgBAADfIfsAAAAAALl0eFY=
A /root/.cache/huggingface/xet/https___cas_serv-tGqkUaZf_CBPHQ6h/staging
A /root/.cache/matplotlib
A /root/.cache/matplotlib/fontlist-v390.json
A /root/.config
A /root/.config/matplotlib
C /usr
C /usr/src
A /usr/src/core
A /cache

Even though you write “folder”, you must mean container.
The character at the beginning of each line indicates the type of change: A = added, C = changed.

Thus, the data in /root/.cache and /root/.config is written as new files directly into the container filesystem. This is data you lose when you remove the container (which happens when you recreate it from a new image tag).

I don’t know the image, and can’t tell you whether it’s safe to bind a host folder into the container’s /root folder, or if you need to bind two separate folders for /root/.cache and /root/.config. It depends on whether /root contains data or files the container requires to work. If /root is otherwise empty, a single bind mount should be enough to store the cache outside the container filesystem.
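
As an illustration only (the host paths are placeholders, and whether two separate binds are needed for this image is exactly the open question above), the mapping could look like this in a compose file:

services:
  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:v2
    volumes:
      - ./ml-cache:/root/.cache     # hypothetical host path
      - ./ml-config:/root/.config   # hypothetical host path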

True, I missed that. For some reason I remembered it wrong and thought it would remove everything. But I see the issue was solved, so that’s great :slight_smile:

It's still happening! I found one culprit container and removed it, which helped tremendously, but it still happens every once in a while. How can I find out which container is causing overlay2 to grow just before the VM crashes from running out of space?

I am not sure what more we can say, but when you say “which container is causing overlay2 to grow”, it means you have a container that writes to the container filesystem, so you can monitor the size of that. @meyay recommended docker system df --verbose, which shows that too, but you can also use docker container ls --size. Both commands support --format json if you want to parse JSON output to find the responsible container more easily.
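
A minimal sketch of such monitoring, run on the host (the interval and log file name are arbitrary examples):

# append a size snapshot of all containers to a log once a minute
while true; do
  date
  docker container ls --size --format 'table {{.Names}}\t{{.Size}}'
  sleep 60
done >> container-sizes.log

When the VM next runs out of space, the tail of the log should show which container's SIZE column kept growing.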

How did you find the last container that you could remove to solve the problem?
