Docker container(s) cannot access more than 100GB

Using Docker version 25.0.4, build 1a576c5.

Output of lsblk:

sda                         8:0    0 12.7T  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    2G  0 part /boot
└─sda3                      8:3    0 12.7T  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0 12.7T  0 lvm  /

I’m not sure why or how this is occurring. I recently installed an application (Immich) via Docker Compose, and when using it I can only access a total of 97GB of free space. As this does not seem to be an application issue, I was wondering if anyone here has experienced this before and/or knows why it is happening.

I have tried the following:

  • verified I have space available with lsblk
  • verified that the Docker container(s) have space by running lsblk inside them (commands sketched after this list)
  • verified that it is mounted correctly
  • purged and reinstalled the applications (clean slate)
  • confirmed the upload location is correct and pointing to a directory
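
For reference, the checks inside the container looked roughly like this (a sketch, with the container name taken from the compose file below):

# confirm the upload bind mount is wired up as expected
docker inspect -f '{{ json .Mounts }}' immich_server

# run lsblk from inside the container (shows the same block devices as the host)
docker exec immich_server lsblk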

My docker-compose.yml is as follows:

version: "3.8"
name: immich

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    command: [ "start.sh", "immich" ]
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    ports:
      - 2283:3001
    depends_on:
      - redis
      - database
    restart: always
    networks:
      - apps

  immich-microservices:
    container_name: immich_microservices
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    command: [ "start.sh", "microservices" ]
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    depends_on:
      - redis
      - database
    restart: always
    networks:
      - apps

  immich-machine-learning:
    container_name: immich_machine_learning
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: always
    networks:
      - apps

  redis:
    container_name: immich_redis
    image: registry.hub.docker.com/library/redis:6.2-alpine@sha256:51d6c56749a4243096327e3fb964a48ed92254357108449cb6e23999c37773c5
    restart: always
    networks:
      - apps

  database:
    container_name: immich_postgres
    image: registry.hub.docker.com/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: always
    networks:
      - apps

volumes:
  pgdata:
  model-cache:
networks:
  apps:
    external: true
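
For context, the .env file referenced above holds values along these lines (placeholders here, not my real values):

UPLOAD_LOCATION=/path/to/upload
IMMICH_VERSION=release
DB_PASSWORD=example_password
DB_USERNAME=postgres
DB_DATABASE_NAME=immich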

The compose file will not help in this case; Compose is just another client. Please share how you installed Docker. Is it Docker Engine on Linux or Docker Desktop? Docker Desktop creates a utility VM, which of course has a maximum size.
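
If you are not sure which one you have, a quick check is (a sketch; on Docker Desktop the server's operating system is reported as "Docker Desktop"):

docker context ls
docker info --format '{{.OperatingSystem}}'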


I am running Ubuntu Server 22.04.4 LTS, with Docker Engine.

I installed via the following method:

  • sudo apt install apt-transport-https ca-certificates curl software-properties-common
  • curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  • sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
  • apt-cache policy docker-ce
  • sudo apt install docker-ce

For Docker Compose, I used wget to download the latest release from GitHub and placed it in my /usr/local/bin folder with permissions rwxr-xr-x, roughly as sketched below.
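
Something like this (the version number below is just an example of whatever was latest at the time; chmod 755 yields the rwxr-xr-x permissions mentioned above):

sudo wget -O /usr/local/bin/docker-compose \
  https://github.com/docker/compose/releases/download/v2.24.6/docker-compose-linux-x86_64
sudo chmod 755 /usr/local/bin/docker-compose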

How do you know that? Did you check disk sizes inside the container? …Ah, yes, you did, sorry.

So the first question still stands. If the container sees that you have space, I don’t know how an application could fail to use it.


I have another server; I will try installing the same image on it and see if the problem persists, just in case it is in fact an application issue. But given that I have not seen any issues about this on their repository, and how popular this application is, I deemed it highly unlikely. So this might be a new issue.
Edit: Storage is recognized properly on my other server. The server having the issue is a fresh install of Ubuntu Server, so I do not know why this problem persists.

The main reason this really confused me is that I am in fact mounting into the file system; I have never seen this before and could not find much about it online.

Output of lsblk in /var/lib/docker:

root@lenovo:/var/lib/docker# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0                       7:0    0 63.9M  1 loop /snap/core20/2105
loop1                       7:1    0   87M  1 loop /snap/lxd/27037
loop2                       7:2    0   87M  1 loop /snap/lxd/27428
loop3                       7:3    0 40.4M  1 loop /snap/snapd/20671
loop4                       7:4    0 39.1M  1 loop /snap/snapd/21184
loop5                       7:5    0 63.9M  1 loop /snap/core20/2182
sda                         8:0    0 12.7T  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    2G  0 part /boot
└─sda3                      8:3    0 12.7T  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0 12.7T  0 lvm  /
root@lenovo:/var/lib/docker#

By the way, let me know if you would like me to run any more commands to provide any more information.

Instead of lsblk, try to run the df command in the container:

df -h
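
For example, against the running container (container name and mount path taken from the compose file above):

docker exec immich_server df -h /usr/src/app/upload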

Also, please share the output of:

docker info

I found the issue. I’m not sure why I did not run ‘df -h’ sooner, but it revealed that my file system had only been allocated 98GB.

Running ‘df -h’ on my main machine yields: [output screenshot omitted]

Thank you, Akos. It’s always the simplest mistakes, smh 🤦.
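
Edit, for anyone else who hits this: in my case the logical volume was already 12.7T (see the lsblk output above), so the filesystem had apparently just never been grown to fill it. The usual fix is roughly as follows (device names taken from the lsblk output; the lvextend step is only needed if the logical volume itself is still small):

# grow the LV to use all free space in the volume group (skip if already full-size)
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv

# grow the ext4 filesystem (Ubuntu Server's default) to fill the logical volume
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

# verify
df -h /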