I’m planning to run some scalability tests by launching a lot of Docker containers (>100) on one machine.
I see that Docker Swarm is supposed to be used to manage containers across a cluster. Can it be used on a single machine only? And what exactly does it do? Is there any advantage to using it instead of just using a shell script to operate on multiple containers?
The point is I don’t have “services” I need to run. I’m pretty much using each container as a router (with specific configurations) to simulate networks and measure scalability of routing protocols.
Is there anything Swarm can help with in this scenario?
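To make the comparison concrete, this is what I mean by the shell-script approach versus what I understand Swarm would give me (just a sketch; the `frr-router:latest` image and the network names are placeholders for my actual router image and configuration):

```
# Shell-script approach: launch 100 router containers directly
docker network create testnet
for i in $(seq 1 100); do
  docker run -d --name "router$i" --network testnet \
    --cap-add NET_ADMIN frr-router:latest
done

# Rough Swarm equivalent: one service scaled to 100 identical replicas
# (requires `docker swarm init` first and an attachable overlay network;
# every replica runs the same image and spec, so per-container router
# configurations don't map onto a single service naturally)
docker swarm init
docker network create --driver overlay --attachable swarmnet
docker service create --name routers --replicas 100 --network swarmnet frr-router:latest
```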
@xxradar Is there something specific about the swarm manager running on the same machine as the worker nodes that makes you say you wouldn’t recommend it for production?
We currently have Docker Engine v1.13 and roughly eighteen or more containers running in production on a single (fairly substantial bare-metal Ubuntu 14.04 LTS) machine, without swarm mode. If we were to enable swarm mode, what changes under the hood such that the setup is no longer recommended for production?
The reason we would consider enabling swarm mode is that we want to take advantage of the “secrets” capability for containers, which is only available in swarm mode, but I’m interested to learn what would make the setup less reliable once swarm mode is enabled.
Is there some estimate of the additional resources (RAM, etc.) the swarm manager needs on the same machine that we should account for before considering this?
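To be concrete about what we would actually be doing (a sketch; the secret, service, and image names below are just examples, not our real ones):

```
# Turning a standalone engine into a single-node swarm is one command;
# existing non-swarm containers keep running untouched
docker swarm init

# Secrets then become available, but only to swarm *services*,
# not to containers started with plain `docker run`
echo "s3cr3t" | docker secret create db_password -
docker service create --name app --secret db_password myapp:latest

# Inside the service's containers the secret shows up as a file:
#   /run/secrets/db_password
```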
I’m thinking of enabling swarm mode on a single node as well, just like @patakij, because I need “secrets” (otherwise I’m totally happy with docker-compose). If there were a good, easy way to get a few passwords into docker-compose I’d be all over that, but everyone says to use swarm mode.
But it’s not clear to me how much work is involved, whether it would cause resource issues, etc.
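For context, my understanding of what the swarm-mode route would look like on a single node (a sketch; the secret name, stack name, and compose file name are just examples):

```
docker swarm init                                     # one-time, single-node swarm
echo "hunter2" | docker secret create db_password -   # load the password once

# An existing compose file (format version 3.1+) can declare the secret as
# external and reference it under the service's "secrets:" key; deploying it
# as a stack keeps most of the docker-compose workflow:
docker stack deploy -c docker-compose.yml mystack

# The service's containers then read the password from /run/secrets/db_password
```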