Nextcloud in Docker with mounted disk

Good afternoon,

I have a Nextcloud app running inside Docker and a problem with storage. I have a Virtual Private Server with a 40GB SSD running Ubuntu. Additionally, I've mounted a 120GB disk. The problem is that the 40GB SSD is now completely full and the Nextcloud app is no longer accessible. Is there a solution to this? I would like the Nextcloud app to use the 120GB disk instead of, or in addition to, the 40GB SSD. Is that possible, and how do I do that? Any help is appreciated :slight_smile:

Docker version 20.10.17
Ubuntu 20.04.4

Please let me know if I can provide further information and thank you for your help.

Given that you're asking on the Docker forum: I'm quite sure that telling us how you start Nextcloud (Dockerfile, Compose file, what else?) and maybe how you mounted the larger disk will help people dive into this.

Hi, that’s the compose file I used:

# to start the service use:
# sudo docker network create proxy-tier
# sudo docker-compose up -d

# source: https://github.com/nextcloud/docker/tree/master/.examples/docker-compose/with-nginx-proxy/mariadb/apache


version: '3'

services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=
    env_file:
      - db.env

  app:
    image: nextcloud:apache
    restart: always
    volumes:
      - nextcloud:/var/www/html
    environment:
      - VIRTUAL_HOST=
      - LETSENCRYPT_HOST=
      - LETSENCRYPT_EMAIL=
      - MYSQL_HOST=db
    env_file:
      - db.env
    depends_on:
      - db
    networks:
      - proxy-tier
      - default

  proxy:
    build: ./proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    volumes:
      - certs:/etc/nginx/certs:ro
      - vhost.d:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - proxy-tier

  letsencrypt-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    volumes:
      - certs:/etc/nginx/certs
      - vhost.d:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - proxy-tier
    depends_on:
      - proxy

volumes:
  db:
  nextcloud:
  certs:
  vhost.d:
  html:

networks:
  proxy-tier:

At first everything worked fine. Some months later I mounted the disk for a backup attempt. The backup did not work, but the disk remained mounted. I think I created a mount point and an entry in /etc/fstab and then mounted the disk using the mount command.
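From memory, it looked roughly like this (the device name /dev/sdb1 and the mount point /mnt/data are just placeholders, I don't remember the exact values):

# create the mount point and mount the 120GB disk
sudo mkdir -p /mnt/data
sudo mount /dev/sdb1 /mnt/data

# entry added to /etc/fstab so the mount survives a reboot (one line):
# /dev/sdb1  /mnt/data  ext4  defaults  0  2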

Thanks for your reply.

Your volumes are most likely stored underneath the Docker data root folder, which is typically /var/lib/docker.
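You can verify this yourself. For example (assuming the volume is named nextcloud_nextcloud; the actual name depends on your compose project/folder name):

# show where Docker stores the volume on the host
sudo docker volume inspect nextcloud_nextcloud --format '{{ .Mountpoint }}'

# or check the overall data root
sudo docker info --format '{{ .DockerRootDir }}'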

It is possible to use named volumes that use a bind mount to point their data directory to an arbitrary directory on the host:

...
volumes:
  nextcloud:
    driver_opts:
      type: none
      o: bind
      device: /mnt/data/docker-volumes/nextcloud-data

device: needs to define the path where the data should be stored, which should be inside the mount point where you mounted the 2nd block device. Make sure the path exists!
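For example, if the 120GB disk is mounted at /mnt/data (adjust to your actual mount point), create the target directory first:

# create the target directory on the mounted 120GB disk
sudo mkdir -p /mnt/data/docker-volumes/nextcloud-data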

Now here is the thing: the configuration of a volume is immutable. If the configuration is changed in the compose file, it will not be applied to the existing volume unless the volume is manually deleted and re-created by docker-compose.
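Roughly like this (again assuming the volume is named nextcloud_nextcloud; adjust to your project name):

# stop the stack, remove the old volume, then let compose re-create it with the new driver_opts
sudo docker-compose down
sudo docker volume rm nextcloud_nextcloud
sudo docker-compose up -d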

But since you probably already have data in the named volumes, you need to create a backup of them, then delete the volumes, create the new volumes and restore the backup. A forum search should yield results on this topic, if you don't already have a backup/restore strategy.
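One common pattern is to use a throwaway container to tar the volume content to the host. A sketch, assuming the volume name nextcloud_nextcloud and the current directory as backup location:

# back up the old volume to a tar archive on the host
sudo docker run --rm -v nextcloud_nextcloud:/source:ro -v $(pwd):/backup alpine \
  tar czf /backup/nextcloud-data.tar.gz -C /source .

# later, restore the archive into the newly created bind-backed volume
sudo docker run --rm -v nextcloud_nextcloud:/target -v $(pwd):/backup alpine \
  tar xzf /backup/nextcloud-data.tar.gz -C /target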

You could also just declare a new volume name for your bind-backed volume and then use a temporary one-shot container that mounts both the old and the new volume to copy the data from the old volume to the new one (make sure no containers that use either of the two volumes are running). This approach might be easier, but will result in a changed volume name.
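A sketch of that one-shot copy (the volume names nextcloud_nextcloud and nextcloud_nextcloud-data are assumptions; use whatever your old and new volumes are actually called):

# copy everything from the old volume into the new bind-backed volume
sudo docker run --rm \
  -v nextcloud_nextcloud:/from:ro \
  -v nextcloud_nextcloud-data:/to \
  alpine cp -a /from/. /to/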