Docker Community Forums

Share and learn in the Docker community.

Change disk assigned to container

Hello all,

I'm fairly new to Docker containers and so far I'm loving it.

I decided to move my Nextcloud instance into a container, but there is one small issue that has been preventing me from fully making the switch.

This is my yml file:

nextcloud:
  image: nextcloud
  container_name: Nextcloud
  networks:
    - network
  ports:
    - 8290:80
  environment:
    - NEXTCLOUD_ADMIN_USER=user
    - NEXTCLOUD_ADMIN_PASSWORD=user_password
    - NEXTCLOUD_DATA_DIR=/var/www/html/data
    - NEXTCLOUD_TRUSTED_DOMAINS='a_domain.com'
    - MYSQL_USER=ncddbuser
    - MYSQL_PASSWORD=a_password
    - MYSQL_DATABASE=nextcloud
    - MYSQL_HOST=mariadb
  volumes:
    - /media/nextcloud/data:/data
    - /media/nextcloud/config:/config
  restart: always

It deploys fine alongside MariaDB.

But when I go into the system settings in Nextcloud, I get the below:

Is there a way I can move this disk to my external HDD so I have 1 TB of space allocated instead of the 55 GB?

The container is writing to the external HDD, but I only get 55 GB.

I've been looking all over but can't get past this.

Can anyone please help me get this sorted out?

Thanks in advance!

Why would you configure it to use the wrong drive in the first place? You can always stop the container, move the data from /media/nextcloud/data to wherever your external drive is, change the "volume" mapping, restart the container, and be good.

Word of warning: storing volume data on external drives is usually a call for trouble.
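The steps above can be sketched roughly as follows (the container name and source path are from the compose file earlier in the thread; the destination path is a placeholder for wherever your external drive is mounted):

```shell
# Stop the container so no writes happen during the move.
docker-compose stop nextcloud

# Copy the existing data to the external drive (destination is a placeholder).
sudo rsync -a /media/nextcloud/data/ /mnt/external/nextcloud-data/

# Edit the compose file so the volume line points at the new host path:
#   - /mnt/external/nextcloud-data:/data

# Recreate the container with the new mapping.
docker-compose up -d nextcloud
```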

/media/extcloud is my external HDD, but the container still shows only 55 GB instead of the actual 1 TB.

If /media/nextcloud/data is on your external drive and mapped to /data inside the container, it should be correct. There is no magic involved: it literally relies on mount --bind /media/nextcloud/data /data (where /data is the target folder inside the container).
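A quick way to check what such a bind mount will actually see is to ask df which filesystem backs the host directory (a sketch; /tmp here stands in for /media/nextcloud/data):

```shell
# df reports the filesystem backing a directory and its free space.
# If "Mounted on" shows / rather than the external drive's mount point,
# the container will only ever see the root filesystem's free space.
df -h /tmp

# Just the backing mount point, handy for scripting:
df --output=target /tmp | tail -n 1
```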

It writes some data to the external drive, but I can't use more than 55 GB, and the external HDD (/media/Nextcloud) is 1 TB.

One more thing: the USB drive must be mounted before you start the container, and it should not be unmounted, hibernated, put to sleep, or anything else that temporarily unmounts it. Think of it as the container receiving a pointer that it never updates again: regardless of whether you mount or unmount something in the host path, the pointer stays what it was when the container was started, even if it has become stale.

I haven't restarted the server in at least 5 days, so the HDD is still active.

I made sure that I could read/write to it before deploying the container.

But the Nextcloud container is still getting 55 GB allocated instead of 1 TB.

Like I wrote: volumes on external drives are a call for trouble.

So, what would be the solution/workaround for having the container detect the whole disk instead of just 55 GB?

This is how it looks for my drives:

Screen Shot 2020-09-10 at 8.30.50 PM

As you can see, / is mounted on /dev/sda3 and the external HDD is /dev/sdb1.

/dev/sda is the main SSD on the computer hosting the containers.

/dev/sdb1 would be the external HDD for the Nextcloud data.

Something is not adding up.

Even though both are true, the screenshot does not confirm the claim you made earlier:

Please post the output of findmnt --target /media/nextcloud/data. If the command is not available, install the OS package util-linux.
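For reference, such a check might look like this (/tmp stands in for the path in question; findmnt ships with util-linux):

```shell
# Resolve a path to the mount that actually backs it.
findmnt --target /tmp

# Narrow the output to just the backing device and mount point:
findmnt -n -o SOURCE,TARGET --target /tmp
```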

Apologies for the confusion on the HDD; I thought this server had the 1 TB HDD connected.
The actual HDD on the server for the containers is 256 GB, and the external HDD for hosting the Nextcloud data is 512 GB.

This is the result of findmnt --target /media/nextcloud/data:
Screen Shot 2020-09-11 at 8.08.27 AM

No surprise there :slight_smile: The external disk is simply not mounted at /media/nextcloud/data. Otherwise the source would be /dev/sdb1 instead of /.

This is not a Docker problem. This is a "how do I permanently mount an external USB drive on my Linux host" problem.
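For the record, a permanent mount is usually an /etc/fstab entry; a sketch, assuming the device name from the screenshot (/dev/sdb1) and a placeholder UUID and filesystem type:

```shell
# Find the external drive's UUID:
sudo blkid /dev/sdb1

# Then add a line like this to /etc/fstab (UUID and fs type are placeholders;
# "nofail" keeps the host booting even if the USB drive is unplugged):
#   UUID=xxxx-xxxx  /media/nextcloud  ext4  defaults,nofail  0  2

# Mount everything listed in fstab and verify:
sudo mount -a
findmnt --target /media/nextcloud
```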

Are you sure? Your first screenshot looks more like 128 GB, of which / has only ~59 GB.

OMG, a total noob error with the mapping and with the math.

You are correct: the host's SSD is 128 GB, the external SSD is 512 GB.

I remapped the drive and now the container is seeing it completely.

Thanks for pointing out the error.

I assume you are talking about disk space to run your containers.

Make sure that you have enough space on whatever disk drive you are using for /var/lib/docker, which is the default location used by Docker. You can change it with the -g daemon option.
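As a sketch of that change (note the -g flag has since been deprecated in favor of the data-root key in /etc/docker/daemon.json; the path below is a placeholder):

```shell
sudo systemctl stop docker

# /etc/docker/daemon.json (create it if it does not exist):
#   { "data-root": "/mnt/bigdisk/docker" }

# Optionally carry the existing images and containers over:
sudo rsync -a /var/lib/docker/ /mnt/bigdisk/docker/

sudo systemctl start docker
docker info --format '{{ .DockerRootDir }}'   # confirm the new location
```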

If you don't have enough space, you may have to repartition your OS drives so that you have over 15 GB. If you are using boot2docker or docker-machine, you will have to grow the volume on your virtual machine. It will vary depending on what you are using for virtualization (e.g. VirtualBox, VMware, etc.).

For example, if you are using VirtualBox and docker-machine, you can start with something like this for a 40 GB VM:

docker-machine create --driver virtualbox --virtualbox-disk-size "40000" default

Yup, I've got an external HDD that the container is using, but my main fault was that the drive was mounted, just not on the correct folder.

Once mounted at the correct mountpoint, everything started working successfully. :slight_smile: