70,000 Docker containers with Docker Swarm

Hi Docker experts,

I would like to know if you've got some ideas for the following use case: I am currently evaluating routing algorithms in a testbed of Quagga containers that talk to each other. I would like to spawn around 74,000 containers, each running a Quagga instance with its own configuration file.

I can easily scale this to around 500 containers using Docker Compose on a local VM. However, when I try to spawn more instances, the VM/server freezes. I do not think this is due to resource constraints; it seems rather to be an OS limit, since every container creates its own network interface.
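For reference, these are the kinds of kernel limits I suspect are hit first; the values below are only illustrative guesses, not tested recommendations:

```
# Raise the ARP/neighbour table limits - every veth pair adds neighbour
# cache entries, and the defaults are sized for a handful of interfaces
sysctl -w net.ipv4.neigh.default.gc_thresh1=8192
sysctl -w net.ipv4.neigh.default.gc_thresh2=16384
sysctl -w net.ipv4.neigh.default.gc_thresh3=32768

# More inotify instances, which container log handling tends to consume
sysctl -w fs.inotify.max_user_instances=8192

# System-wide open file handle limit
sysctl -w fs.file-max=2097152
```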

That's why I was wondering about Docker Swarm or Kubernetes. Since it is probably easier to manage and deploy, I'd like to try Docker Swarm first. My idea would be to spawn 148 VMs, each joining the Docker Swarm manager. I would then deploy the same Quagga container image to all nodes, 500 containers per node. The server I am using has 112 cores and 780 GB of RAM available. If that is not sufficient, I can also obtain more resources.
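In other words, something like the following, where the image name is a placeholder and `--replicas-max-per-node` requires Docker 19.03 or newer:

```
# On the manager
docker swarm init

# On each of the 148 worker VMs (token comes from the `docker swarm init` output)
docker swarm join --token <worker-token> <manager-ip>:2377

# One service, capped at 500 tasks per node
docker service create \
  --name quagga \
  --replicas 74000 \
  --replicas-max-per-node 500 \
  my-registry/quagga:latest
```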

One problem I am facing is that each Quagga instance needs to be started with a different config file. This has not been a problem so far, since I could simply change the config file entry for each container in the Compose file. However, as I understand Docker Swarm, I have to provide one image which gets cloned to the different nodes, and I am unsure how to handle the config file. How can I deploy the same image but with a different config file for each container?
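To make it concrete: with Compose, each service entry simply gets its own bind mount. The per-task Go templating that `docker service create` supports for `--env` and `--mount` looks like it might be the Swarm-side equivalent, but I have not tried it. Paths and image name below are placeholders:

```
# Compose today: one bind-mounted config per container
#   volumes:
#     - ./configs/quagga-1.conf:/etc/quagga/quagga.conf

# Possible Swarm equivalent: {{.Task.Slot}} is expanded per task,
# so task N would mount /configs/quagga-N.conf
docker service create \
  --name quagga \
  --replicas 74000 \
  --mount 'type=bind,source=/configs/quagga-{{.Task.Slot}}.conf,target=/etc/quagga/quagga.conf' \
  my-registry/quagga:latest
```

With a bind mount, all 74,000 config files would have to be reachable at that path on every node, e.g. via a shared NFS mount; fetching the config at startup instead would avoid that.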

There might be other scaling problems. Are there any you anticipate?
I have already read that the Docker Swarm virtual switch only supports 512 entries, so I would have to increase that limit. Are there more such obstacles?
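Related to that, I understand the default subnet of an overlay network is a /24, which only holds around 250 tasks, so I would presumably also have to create the network with an explicitly larger subnet, something like:

```
# 74,000 tasks need at least a /15; whether the overlay data path
# actually scales to that many peers is a separate question
docker network create \
  --driver overlay \
  --subnet 10.0.0.0/14 \
  --attachable \
  quagga-net
```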

Thanks in advance!

Cheers,
Nils

I have never had to deal with that many containers, so I hope someone more experienced will chime in later.

Even though I have not needed to run that many containers yet, I have had to optimize performance. Among other things, I made sure that the limit on the number of simultaneously open files was set properly. I don't remember everything I did, but I found something that describes a similar situation and probably even more:
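For example, the limits I am referring to look like this; the numbers are only examples:

```
# Check the current limits
ulimit -n                    # per-process open files in the current shell
cat /proc/sys/fs/file-max    # system-wide limit

# Raise the default open-file limit for all containers
cat <<'EOF' > /etc/docker/daemon.json
{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 1048576,
      "Soft": 1048576
    }
  }
}
EOF
systemctl restart docker
```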

Of course, Docker Swarm could be an additional help too, since you could use multiple machines. I don't know Quagga either. Please confirm whether I found the right Quagga, so other users can join the conversation faster:

Since I don't know it, and I don't know how it uses CPU, memory, and HDD/SSD, I can't say what you would need to run that many instances.

Hi Akos,

Absolutely, that's the Quagga daemon I am talking about. But it is just a service; which service runs doesn't really matter. I would just need to get a different config file into each and every container, be it by mounting a filesystem or by fetching it from a local provisioning server via curl when the container starts.
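For the curl variant, the entrypoint could be as simple as the sketch below; the provisioning URL, the CONFIG_ID variable, and the start command are all made up for illustration:

```
#!/bin/sh
# entrypoint.sh - fetch this instance's config before starting the daemon.
# CONFIG_ID would be injected per container, e.g. via --env CONFIG_ID={{.Task.Slot}}
set -e
curl -fsS "http://provisioner.local/configs/quagga-${CONFIG_ID}.conf" \
  -o /etc/quagga/quagga.conf
exec /usr/sbin/quagga-start   # placeholder for the actual Quagga start command
```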

Cheers,
Nils