Limit size of docker container

Hi

Is there any way to limit a Docker container’s size, including its volumes?

I have tried to create a volume like this:

docker volume create --driver local \
--opt type=tmpfs \
--opt device=tmpfs \
--opt o=size=200m \
wordpress_wp

docker volume create --driver local \
--opt type=tmpfs \
--opt device=tmpfs \
--opt o=size=200m \
wordpress_db
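
For reference, the 200m limit can be verified by mounting one of the volumes in a throwaway container and checking df (the alpine image here is just an example):

docker run --rm -v wordpress_wp:/data alpine df -h /data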

My docker-compose file looks like this:

version: '3.1'
services:
  wordpress:
    image: wordpress
    restart: always
    ports:
      - 8000:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
    volumes:
      - "wordpress_wp:/var/www/html"

  db:
    image: mariadb:10.3
    restart: always
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - "wordpress_db:/var/lib/mysql"      

volumes:
  wordpress_wp:
    external: true
  wordpress_db:
    external: true  

The problem with tmpfs-backed volumes is that the data in the volume is lost when the container is deleted or restarted.

Are there any other ways to limit the size of a Docker volume without losing the data when the container restarts?

Hi

I hope there is someone in this forum who can help me with my problem.

I tried to change the file system to XFS by following these instructions on GitHub; then I could add the following to my docker-compose.yml file after changing the compose file version to 2.2.

storage_opt:
  size: 100M

The only problem with this is that it only affects the container, not the volume. So if I add a file to a volume, it does not count against the container’s specified size limit.
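
For context, here is a minimal sketch of how that option fits into a version ‘2.2’ compose file (the service and size are illustrative; as far as I understand, storage_opt only limits the container’s writable layer and requires a storage driver that supports it, such as overlay2 on XFS with pquota):

version: '2.2'
services:
  wordpress:
    image: wordpress
    # limits the container's writable layer only, not any mounted volumes
    storage_opt:
      size: 100M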

Not directly an answer to the question you’ve asked, but have you considered running your Docker containers in a virtual machine? If you have the platform for it, you can simply specify how much storage space the VM gets. I use VMware’s Photon OS as a lightweight Docker container environment, and I’m able to set limits on my disk space usage at the virtual layer.

Thank you for your answers.

I run Docker on a VPS with Debian 10, and it is possible to put each volume on an external disk, but I think there are better ways to solve this issue.

It seems like I need to limit the volume size, not the container size, so I misunderstood it a little bit.

I found a volume plugin called Trident that has the ability to limit the volume size. I have not tested it yet, but I will let you know if it works for me.

You are aware that some volume plugins are dedicated to hardware storage, aren’t you? I am afraid a NetApp volume plugin will work with nothing other than a NetApp storage system.

Word of advice: test the volume plugin in a sandboxed VM first… a misbehaving volume plugin can be very frustrating. Make sure it does what you need before you install it on your production system.

You need a volume plugin that allows limiting the size. I personally am a huge fan of StorageOS, which is easy to set up in a single-node scenario. Though its Swarm use is not supported or documented… it still works like a charm.

The plugin I referred to does not seem to work for local storage, as @meyay said.

I tried to install StorageOS using the official documentation, but I can’t get it working.
Still, StorageOS seems to be exactly what I need.

I can’t figure out what I am doing wrong, but if it is possible, I want to run it without a cluster setup.

Getting StorageOS to work reliably is more than just executing a single docker run command.

You will want to install it as a systemd service, so that all preconditions for starting the StorageOS plugin are met before the plugin container is started after a reboot. At the bottom of the page, there is a link to a GitHub repo that holds an Ansible playbook which sets up everything. You will also need to download the StorageOS CLI tool.

Single-node operation is possible and actually not that hard when you use the Ansible playbook.

The docker-compose.yml volume declaration looks like this:

  data:
    driver: storageos
    driver_opts:
      # the description helps to distinguish the volume in the StorageOS UI
      description: 'whatever describes the volume usage'
      # if you want to use anything other than the default pool, you need to create it in the UI
      pool: 'default'
      # valid filesystems are ext4 and xfs
      fstype: 'ext4'
      # integer value for the volume size in GB
      size: 1
      # example labels; see https://docs.storageos.com/docs/reference/labels#storageos-volume-labels for the full documentation
      storageos.com/replicas: 1
      storageos.com/failure.mode: alwayson
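
A service then mounts the volume by name just as with the local driver; for example, reusing the db service from earlier in the thread (the data: block above goes under the top-level volumes: key):

services:
  db:
    image: mariadb:10.3
    volumes:
      - "data:/var/lib/mysql"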

Thank you for your answer.

Unfortunately I couldn’t get the installation working, and there were no error messages that could help me.

I decided to use separate partitions on an external hard drive. Maybe not the best solution, but it works.
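
In case it helps someone else: a similar effect is possible without a dedicated disk by backing a volume with a fixed-size image file. A rough sketch of the idea (paths and sizes are illustrative, run as root):

# create and format a 200 MB image file
dd if=/dev/zero of=/srv/wordpress_db.img bs=1M count=200
mkfs.ext4 -F /srv/wordpress_db.img

# mount it via a loop device (add an /etc/fstab entry to survive reboots)
mkdir -p /mnt/wordpress_db
mount -o loop /srv/wordpress_db.img /mnt/wordpress_db

# expose the mount point as a named volume through the local driver
docker volume create --driver local \
  --opt type=none \
  --opt device=/mnt/wordpress_db \
  --opt o=bind \
  wordpress_db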

Thanks for sharing what you discovered. I’m not sure I follow why external storage is a less desirable option, though. My customers generally host their VMs locally on a vhost rather than a VPS, so maybe I’m unaware that the VPS provider charges more for extra virtual disks?

Unless there’s an additional charge for extra disks, or you’re doing a lot of moving data from one disk to another, attaching VMDKs for segmenting storage allows you to lock or expand the capacity of each separately and prevents one from filling up another (i.e. a user uploads too many files and fills up the database volume). It also gives you some options when restoring from backups, in that you can restore a single VMDK (maybe the database crashed and needs to be reverted to the last backup, but the file storage is still viable and there is no need to force users to re-upload files).

And of course, not an issue for a VPS, but on locally hosted solutions I like to put the root onto SSD media for faster performance, while bulky data storage can go on cheaper HDD-based media. If you can manage storage at the virtual level, there’s less need to deploy and support complicated services.