I would like to know if you've got some ideas for the following use case: I am currently evaluating routing algorithms in a testbed of Quagga containers that talk to each other. I would like to spawn around 74,000 containers, each running a Quagga instance with its own configuration file.
I can easily scale this to around 500 containers using Docker Compose on a local VM. However, when I try to spawn more instances, the VM/server freezes. I do not think this is due to resource constraints; it seems rather to be an OS limit, since every container creates its own network interface.
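For reference, this is the kind of kernel tuning I have been experimenting with. The specific knobs and values are my guesses: I am assuming the freeze is related to the neighbour (ARP) table, file handles, and conntrack filling up as the interface count grows, but I have not confirmed which limit actually bites.

```shell
# Hypothetical host tuning for thousands of veth interfaces on one node.
# Values are guesses, not verified against the actual failure mode.

# Neighbour (ARP) table thresholds; defaults are far below 74,000 entries.
sysctl -w net.ipv4.neigh.default.gc_thresh1=4096
sysctl -w net.ipv4.neigh.default.gc_thresh2=32768
sysctl -w net.ipv4.neigh.default.gc_thresh3=65536

# System-wide file handles and inotify limits (each container consumes both).
sysctl -w fs.file-max=2097152
sysctl -w fs.inotify.max_user_instances=65536
sysctl -w fs.inotify.max_user_watches=1048576

# Connection tracking, in case the routing traffic fills the conntrack table.
sysctl -w net.netfilter.nf_conntrack_max=1048576
```

Even with tuning like this, a single host still froze for me beyond ~500 containers, which is why I am looking at spreading the load across VMs.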
That's why I was wondering about Docker Swarm or Kubernetes. Since it is probably easier to manage and deploy, I'd like to try Docker Swarm first. My idea would be to spawn 148 VMs, each joining the Docker Swarm manager as a worker. I would then deploy the same Quagga container image to all nodes, 500 containers per node. The server I am using has 112 cores and 780 GB of RAM available; if that is not sufficient, I can obtain more resources.
One problem I am facing is that each Quagga instance needs to be started with a different config file. This has not been an issue so far, since I could simply change the config-file entry for each container in the Docker Compose file. However, as I understand Docker Swarm, I would provide one image that gets cloned across the nodes, and I am unsure how to handle the config files. How can I deploy the same image but with a different config file for each container?
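One direction I have considered but not validated: put all the config files in a directory that is baked into the image (or bind-mounted on every node), and have a small entrypoint pick the right file from the task's slot number, which Swarm can inject through its Go-template support, e.g. `docker service create --env TASK_SLOT='{{.Task.Slot}}' ...`. A sketch (all paths and names are mine):

```shell
#!/bin/sh
# Hypothetical entrypoint for the Quagga image.
# I assume Swarm injects the slot number via:
#   docker service create --env TASK_SLOT='{{.Task.Slot}}' ...

# Map a task slot (1..N) to its per-task config file path.
config_path() {
    slot="${1:?missing slot}"
    printf '/etc/quagga/configs/quagga-%s.conf' "$slot"
}

# Resolve this task's config and hand over to the daemon.
start_quagga() {
    cfg="$(config_path "${TASK_SLOT:-1}")"
    [ -f "$cfg" ] || { echo "config $cfg not found" >&2; exit 1; }
    exec zebra -f "$cfg"   # or bgpd/ospfd, depending on the daemon under test
}

# The image's ENTRYPOINT would invoke start_quagga.
```

If I understand it correctly, a single service with `--replicas 74000` would give every task a unique slot, which this mapping relies on; whether Swarm behaves well at that replica count is exactly part of my question.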
There might be other scaling problems. Do you anticipate any?
I have already read that the Docker Swarm virtual switch only supports 512 entries, so I would have to raise that limit. Are there more obstacles like that?
Thanks in advance!