I used Portainer to set up an Immich container. After processing 100K+ photos, the total storage space exceeded 300GB. I then deleted the container, but the space has not been freed on the system: df -h still shows 344G used, whereas ncdu shows 46.8GiB.
Please share how exactly you created the container. If you used a volume or a bind mount to a host folder, the files could still exist.
Furthermore, please let us know which OS you use. Since Docker Desktop always runs in a utility VM, the virtual hard disk of that VM probably didn't shrink after files were deleted in it.
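In the meantime you can already check whether anything was left behind. This is only a sketch; the actual volume names and the bind-mount path depend on how the stack was created, so treat the path below as a placeholder:
docker volume ls                      # any named or anonymous volumes still present?
docker system df -v                   # per-image, per-container and per-volume disk usage
du -sh /path/of/your/bind/mount       # if a host folder was bind-mounted, the files would still be there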
I used this YAML file from the official Immich site:
/mnt is the mount point of a Synology server via NFS.
#
# WARNING: To install Immich, follow our guide: https://docs.immich.app/install/docker-compose
#
# Make sure to use the docker-compose.yml of the current release:
#
# https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
#
# The compose file on main may not be compatible with the latest release.
name: immich

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    # extends:
    #   file: hwaccel.transcoding.yml
    #   service: cpu # set to one of [nvenc, quicksync, rkmpp, vaapi, vaapi-wsl] for accelerated transcoding
    volumes:
      # Do not edit the next line. If you want to change the media storage location on your system, edit the value of UPLOAD_LOCATION in the .env file
      - ${UPLOAD_LOCATION}:/data
      - /etc/localtime:/etc/localtime:ro
      - /mnt:/mnt
    env_file:
      - stack.env
    ports:
      - '2283:2283'
    depends_on:
      - redis
      - database
    restart: always
    healthcheck:
      disable: false

  immich-machine-learning:
    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, rocm, openvino, rknn] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    # extends: # uncomment this section for hardware acceleration - see https://docs.immich.app/features/ml-hardware-acceleration
    #   file: hwaccel.ml.yml
    #   service: cpu # set to one of [armnn, cuda, rocm, openvino, openvino-wsl, rknn] for accelerated inference - use the `-wsl` version for WSL2 where applicable
    volumes:
      - model-cache:/cache
    env_file:
      - stack.env
    restart: always
    healthcheck:
      disable: false

  redis:
    container_name: immich_redis
    image: docker.io/valkey/valkey:8-bookworm@sha256:fea8b3e67b15729d4bb70589eb03367bab9ad1ee89c876f54327fc7c6e618571
    healthcheck:
      test: redis-cli ping || exit 1
    restart: always

  database:
    container_name: immich_postgres
    image: ghcr.io/immich-app/postgres:14-vectorchord0.4.3-pgvectors0.2.0@sha256:41eacbe83eca995561fe43814fd4891e16e39632806253848efaf04d3c8a8b84
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_INITDB_ARGS: '--data-checksums'
      # Uncomment the DB_STORAGE_TYPE: 'HDD' var if your database isn't stored on SSDs
      # DB_STORAGE_TYPE: 'HDD'
    volumes:
      # Do not edit the next line. If you want to change the database storage location on your system, edit the value of DB_DATA_LOCATION in the .env file
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    shm_size: 128mb
    restart: always

volumes:
  model-cache:
> uname -a
Linux archlinux 6.17.1-arch1-1 #1 SMP PREEMPT_DYNAMIC Mon, 06 Oct 2025 18:48:29 +0000 x86_64 GNU/Linux
I can't speak about Portainer, but if you remove a compose deployment with docker compose down -v, it will remove the volumes as well. My experience with Portainer is from years ago; if I remember right, volumes not bound to a container are marked as "unused".
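For example, to see which volumes would count as unused, something like this should work:
docker volume ls -f dangling=true     # volumes not referenced by any container
docker volume prune                   # removes unused anonymous volumes after a confirmation prompt (add --all to include named ones)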
Note: the only actual volume is model-cache; everything else is a bind mount, where a host folder is mounted into a container folder. Neither writes data into the container filesystem.
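If the containers were still around, you could verify the mounts directly. This is just a generic example, using a container name from the compose file above:
docker inspect --format '{{ json .Mounts }}' immich_server   # lists every bind mount and volume of the container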
Files created or modified in paths that are not covered by a volume or bind mount end up in the container filesystem. That container filesystem is gone once the container is deleted, though.
You can use docker diff <container name or id> to find out whether a container writes data into its container filesystem.
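As a concrete example with the container names from your compose file (this only works while the containers still exist):
docker diff immich_server      # A = added, C = changed, D = deleted paths in the container filesystem
docker diff immich_postgres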
That said, where exactly is the data located that takes up so much space? Furthermore, can you share the output of docker info, so we can see which storage driver is used? I have never worked with Docker on btrfs as the filesystem/storage driver; from what I remember, it works with snapshots.
> sudo docker info
[sudo] password for chris:
Client:
 Version:    28.5.1
 Context:    default
 Debug Mode: false

Server:
 Containers: 6
  Running: 6
  Paused: 0
  Stopped: 0
 Images: 6
 Server Version: 28.5.1
 Storage Driver: overlay2
  Backing Filesystem: btrfs
  Supports d_type: true
  Using metacopy: true
  Native Overlay Diff: false
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 CDI spec directories:
  /etc/cdi
  /var/run/cdi
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 75cb2b7193e4e490e9fbdc236c0e811ccaba3376.m
 runc version:
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.17.1-arch1-1
 Operating System: Arch Linux
 OSType: linux
 Architecture: x86_64
 CPUs: 24
 Total Memory: 30.46GiB
 Name: archlinux
 ID: 287bb810-05dd-4827-b142-1e9ce6d85c59
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  ::1/128
  127.0.0.0/8
 Live Restore Enabled: false
And did you confirm that the used space was used by the container?
The title contains “Docker volume space”. Do you think the space is used by a Docker volume?
You can try running
docker system df
and
docker system df -v
to find out what is using the space.
I now used su - and then df -h. The disk usage I see is in /var/lib/docker/overlay2/
How can I safely clean it up?
I tried sudo rm -rf /var/lib/docker/ and df -h still shows 520GB of allocated disk space.
If you don't want to lose data, do not touch the Docker data root. I shared the commands you can use to see what is using the space. The built-in Linux command df can only tell which folder contains the data, but not exactly what that data is. Docker has its own built-in commands as well, and you should only use those, otherwise you will likely break your Docker installation. The overlay2 folder contains the whole overlay filesystem, including image and container layers, but not metadata like image tags and container metadata, so Docker could still report images even after everything under it was deleted.
Of course, if you just wanted to clean up the filesystem to find out what that huge amount of data was and you didn’t need to keep anything, that’s okay and you can experiment.
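For reference, the built-in cleanup commands look roughly like this; treat them as destructive and check what they would remove before confirming:
docker system prune                 # removes stopped containers, unused networks, dangling images and build cache
docker system prune -a --volumes    # additionally removes all unused images and unused anonymous volumes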
Regarding the allocated disk space: if you deleted everything and the space is still allocated, I don't know how that could be. Maybe the same disk is mounted to multiple targets and it is still filled with data, just not in that folder. Are there other mount points?
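Something like this should show whether the same disk is mounted in several places and how btrfs itself accounts for the space (assuming / is the btrfs filesystem in question):
findmnt -t btrfs                # every btrfs mount point and the device behind it
sudo btrfs filesystem usage /   # allocated vs. actually used space as btrfs sees it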
If it turns out to be a btrfs issue, I will not be a big help I’m afraid, but then it can be reported on GitHub.
What I originally did was mount an NFS share from my TrueNAS server and run Immich on my daily driver to organize my photo collection. Machine learning etc. used a couple of hundred gigabytes. df -h after su - showed that the space was used by overlay2. I deleted it as I was experimenting with Immich.
df -h and ncdu show different results. I deleted all my system snapshots to free the space and then tried a btrfs balance, but with no success.
ncdu shows 50GB of disk used, but df -h shows 350GB.
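For reference, a filtered balance is usually run like this (assuming / is the affected btrfs mount; the 50% threshold is just an example):
sudo btrfs balance start -dusage=50 /   # rewrites data block groups that are at most 50% full, releasing the unused allocation
sudo btrfs balance status /             # check progress while it runs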
ncdu works with folders and df works with whole filesystems, so the difference also indicates that there are still files on the disk that you just can't see.
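One btrfs-specific place where such invisible data can hide is in snapshots and subvolumes outside the directory tree ncdu scans. A quick way to check (assuming / is the btrfs mount):
sudo btrfs subvolume list /    # all subvolumes and snapshots on the filesystem
sudo btrfs filesystem df /     # data vs. metadata allocation from btrfs' point of view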
Just to make sure I understand it: you originally used NFS to store the Docker data root, and when you saw it used a lot of space, you switched entirely to btrfs without using NFS?
I just mounted the NFS share and used it as an external library for Immich, which was installed with Docker/Portainer on my system. The Immich data were on my PC, which runs Arch Linux with btrfs.
Problem resolved. It had to do with btrfs and not Docker.