
Increase performance: loading containers into RAM/SSD

Hi

I have a rather general issue, I think. To help you understand it, here is a quick overview of my system:

Hardware:

  • ZBOX CI642 nano (former version with i5-8250U)
  • 120 GB Samsung SSD
  • 32 GB RAM (+/- 5 GB used under normal load, never exceeds 7 GB)
  • ICY BOX external storage with 4x3TB HDD in RAID 5, connected via USB 3.0

Software:

  • Ubuntu 20.04 including latest updates
  • Docker version: probably latest (I am on vacation and cannot access this info right now)
  • Deployment via docker-compose

Setup:
All data, including the docker-compose files, is stored on the external storage. All commands are run from within the folder structure on the external storage.

Issue:
As you can imagine, external storage with HDDs in RAID 5 is not the most performant setup. (But it gives me size and access flexibility with a good data-safety vs. usable-storage trade-off.) On the other hand, I have RAM and SSD space to spare!

Question(s):

  • Is there a way to add container options in docker-compose to load a container (or volume) directly into RAM or onto the SSD (e.g. mariadb, influxdb) and write the data back to the external storage, e.g. once every hour? I am fully aware that I would lose at most 59 minutes of data if my server goes bye-bye in the meantime.

  • If not with docker-compose, is it possible with any other tool (Swarm, Kubernetes)?

  • Any other ideas to increase performance, besides changing hardware?

I have probably asked questions that I could figure out myself with enough research; I’m sorry about that. My hobby does not have much time allotted for the next few years - I love spending time with my wife and little kids too. If you can point me in the right direction or answer the questions directly, I’d very much appreciate it.

Thank you.

That’s not going to work - and I highly doubt this endeavor would bring the expected benefit. If it were useful, it is quite likely that it would have been implemented already.

What will work, though, and is supported out of the box, is to use volumes for your containers that point to your SSD. That way you could at least speed things up for your containers’ stateless data.
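
For example (just a minimal sketch; the SSD mount point /mnt/ssd, the mariadb service and the password are assumptions, adjust to your setup), a compose file can bind-mount a directory on the SSD as the database’s data directory:

    version: "3.8"
    services:
      mariadb:
        image: mariadb:10.6
        environment:
          MARIADB_ROOT_PASSWORD: change-me
        volumes:
          # bind mount: the host path on the SSD (assumed) holds MariaDB's data directory
          - /mnt/ssd/mariadb-data:/var/lib/mysql

The bind-mounted path is what the database actually reads and writes, so that is the part that ends up on the fast disk instead of the USB RAID.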


I think this could help; it explains how you can change the folder where containers are created: https://stackoverflow.com/questions/29149018/move-default-docker-container-to-another-place-on-the-disk

My approach to this would be to create a script that moves your source + docker-compose files to the SSD or to RAM (a tmpfs folder?), so that the context of your running containers lives there.
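
A rough sketch of that idea (the paths /mnt/raid/stack and /mnt/ssd/stack are made up; on Ubuntu a tmpfs is typically available under /dev/shm):

    #!/bin/sh
    # copy the compose project from the slow RAID to the SSD (assumed paths)
    rsync -a /mnt/raid/stack/ /mnt/ssd/stack/
    # start the stack from the fast copy so build contexts and relative
    # bind mounts resolve to the SSD instead of the USB RAID
    cd /mnt/ssd/stack && docker-compose up -d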

To create a copy to save in your RAID storage, you might need to stop the container, commit the changes and export it using “docker export”. As an alternative, you don’t have to do any of this if you mount all your mutable files/folders to paths on your fast storage; those mounted files will always hold the latest copy. Mounting, however, makes me think you could run into some sort of race condition if you try to back those files up to your RAID while they are being written. Maybe try rsync to sync the mounted files to your RAID? You will need to investigate those two options and tell us.
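
For the rsync route, something like the following could run from cron once an hour; the container name and the paths are assumptions, and pausing the container keeps the files from changing mid-copy:

    #!/bin/sh
    # freeze the container's processes so the data files do not change during the copy
    docker pause mariadb
    # sync the bind-mounted data directory from the SSD back to the RAID (assumed paths)
    rsync -a --delete /mnt/ssd/mariadb-data/ /mnt/raid/backup/mariadb-data/
    docker unpause mariadb

A crontab line such as "0 * * * * /usr/local/bin/sync-to-raid.sh" would then give roughly the hourly write-back described in the original question.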

I also think you might want to explore volume drivers and, in addition, this section of the Docker docs on volume backups, which is more or less what you are hinting at in your challenge above: https://docs.docker.com/storage/volumes/#backup-restore-or-migrate-data-volumes
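
The pattern from that docs page boils down to running a throwaway container that mounts both the volume and a backup directory and tars one into the other; a sketch with assumed volume and path names:

    # back up the named volume "dbdata" into a tarball on the RAID (names/paths assumed)
    docker run --rm \
      -v dbdata:/data:ro \
      -v /mnt/raid/backup:/backup \
      busybox tar czf /backup/dbdata.tar.gz -C /data .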

Cheers,

Additionally, you could move Docker’s data-root folder to the SSD drive. As a result, all images, containers, and Docker’s internal configuration will be stored on it.

If the filesystems of the old and the new location are compatible (going from ext4 to xfs or vice versa is also fine), you can simply stop the Docker service, copy /var/lib/docker to the new location on the SSD, configure the new location as the data-root, restart the Docker service and enjoy.
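
A sketch of those steps (the target path /mnt/ssd/docker is an assumption):

    # stop the daemon before touching its data directory
    sudo systemctl stop docker
    # copy the existing data to the SSD, preserving permissions, hard links and xattrs
    sudo rsync -aHAX /var/lib/docker/ /mnt/ssd/docker/
    # point the daemon at the new location via /etc/docker/daemon.json:
    #   { "data-root": "/mnt/ssd/docker" }
    sudo systemctl start docker

Once the daemon is confirmed to be running happily from the new location, the old /var/lib/docker can be removed to free up space.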