I hope someone on this forum can help me with my problem.
I tried to change the filesystem to XFS by following these instructions on GitHub; then I could add the following to my docker-compose.yml file and change the compose file version to 2.2.
storage_opt:
  size: 100M
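For reference, here is a minimal sketch of what I mean as a full compose file (the service and image names are just placeholders; as far as I understand, storage_opt requires the 2.x compose file format and a storage driver that supports it, e.g. overlay2 on XFS with project quotas):

version: '2.2'
services:
  app:
    image: nginx              # placeholder image
    storage_opt:
      size: 100M              # limits the container's writable layer only
    volumes:
      - data:/data            # data written here is NOT counted against the limit
volumes:
  data: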
The only problem with this is that it only affects the container, not the volume. So if I add a file to a volume, that data does not count against the container's specified size limit.
Not directly an answer to the question you’ve asked, but have you considered storing your Docker containers in a virtual machine? If you have the platform for it, you can simply specify how much storage space the VM gets. I use VMware’s Photon OS as a lightweight Docker container environment, and I’m able to set limits on my disk space usage at the virtual layer.
I run Docker on a VPS with Debian 10, and it is possible to have each volume on an external disk, but I think there are better ways to solve this issue.
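For anyone who does want to go the external-disk route: a plain local volume can be pointed at a directory on a separately mounted disk, so the volume can never grow past that disk's size. A sketch, assuming the disk is already mounted at /mnt/disk1 (the path and volume name are just examples):

# create a named volume backed by a directory on the external disk
docker volume create --driver local \
  --opt type=none \
  --opt o=bind \
  --opt device=/mnt/disk1/mydata \
  mydata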
It seems I need to limit the volume size, not the container's, so I misunderstood it a little bit.
I found a volume plugin called Trident that has the ability to limit the volume size. I have not tested it yet, but I will let you know if it works for me.
You are aware that some volume plugins are dedicated to specific hardware storage, aren’t you? I am afraid a NetApp volume plugin will not work with anything other than a NetApp storage system.
Word of advice: test the volume plugin in a sandboxed VM first… a misbehaving volume plugin can be very frustrating. Make sure it does what you need before you install it on your production system.
You need a volume plugin that allows limiting the size, and there are plenty of them. I personally am a huge fan of StorageOS, which is easy to set up in a single-node scenario. Though its Swarm use is not supported or documented… it still works like a charm.
Getting StorageOS to work reliably takes more than just executing a single docker run command.
You will want to install it as a systemd service, so that all preconditions for starting the StorageOS plugin are met before the plugin container starts after a reboot. At the bottom of the page there is a link to a GitHub repo which holds an Ansible playbook that sets up everything. You will also need to download the StorageOS CLI tool.
Single-node operation is possible and actually not that hard when you use the Ansible playbook.
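A rough sketch of what such a unit could look like, assuming the plugin runs as a container named storageos that was already created with the docker run command from the StorageOS docs (the unit file itself is my own example, not taken from their documentation):

[Unit]
Description=StorageOS volume plugin container
Requires=docker.service
After=docker.service

[Service]
# start the pre-created container and stay attached, so systemd can supervise it
ExecStart=/usr/bin/docker start -a storageos
ExecStop=/usr/bin/docker stop storageos
Restart=on-failure

[Install]
WantedBy=multi-user.target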
The docker-compose.yml volume declaration looks like this:
data:
  driver: storageos
  driver_opts:
    # the description helps to distinguish the volume in the StorageOS UI
    description: 'whatever describes the volume usage'
    # if you want to use anything other than the default pool, you need to create it in the UI
    pool: 'default'
    # valid filesystems are ext4 and xfs
    fstype: 'ext4'
    # integer value for the volume size in GB
    size: 1
    # example labels, see https://docs.storageos.com/docs/reference/labels#storageos-volume-labels for the full documentation
    storageos.com/replicas: 1
    storageos.com/failure.mode: alwayson
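For completeness, the volume is then consumed like any other named volume; a minimal sketch (the service and image names are placeholders, and the volumes block is the declaration shown above):

services:
  app:
    image: nginx          # placeholder image
    volumes:
      # mounts the StorageOS-backed volume declared above
      - data:/data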
Thanks for sharing what you discovered. I’m not sure I follow why external storage is a less desirable option, though. My customers generally host their VMs locally on a vhost rather than a VPS, so maybe I’m unaware that the VPS provider charges more for extra virtual disks?

Unless there’s an additional charge for extra disks, or you’re doing a lot of moving data from one disk to another, attaching VMDKs for segmenting storage allows you to lock/expand the capacity of each separately and prevents one from filling up another (i.e. a user uploads too many files and fills up the database volume). It also gives you some options on restoring from backups, in that you can restore a single VMDK (maybe the database crashed and needs to be reverted to the last backup, but the file storage is still viable and there is no need to force users to re-upload files).

And of course, not an issue for a VPS, but on locally hosted solutions I like to put the root onto SSD media for faster performance, while bulky data storage can go on cheaper HDD-based media. And if you can manage storage at the virtual level, there’s less need to deploy and support complicated services.
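In case it helps, this is roughly what that segmenting looks like inside the guest once an extra virtual disk is attached; a sketch, assuming the new disk shows up as /dev/sdb and the mount point and image are just examples:

# format the new disk and mount it at a dedicated path
mkfs.xfs /dev/sdb
mkdir -p /srv/dbdata
mount /dev/sdb /srv/dbdata

# make the mount persistent across reboots
echo '/dev/sdb /srv/dbdata xfs defaults 0 2' >> /etc/fstab

# bind the directory into the container, so the database
# can never grow past the size of this one disk
docker run -d --name db -v /srv/dbdata:/var/lib/mysql mysql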