I am trying to scale my current application, which sits on a single host, to a multi-host architecture with as little effort as possible, i.e. I don't really want to use Swarm or even Kubernetes, as they seem like overkill.
I do not really need the replica management provided by Swarm or Kubernetes, as I think one container is enough. The use case is only to spin up several more containers on a new server and have them talk to some or all of the containers (ideally by referring to them by container name) on the original server.
My idea is to keep things simple by leaving everything on the current server A, which runs my app, untouched, and to add a new server B in the same network to accommodate a few more containers (those containers can be thought of as stateless worker containers that consume tasks from a queue in the Redis container on server A).
Currently, server A simply uses a custom bridge network to connect all containers on the same host, which works fine. I also know that I can use an overlay network to achieve cross-node communication between containers.
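For reference, the current single-host setup on server A is roughly the following (network, container and image names simplified for illustration):

```
# Current single-host setup on server A (names are illustrative)
docker network create --driver bridge app-net

docker run -d --name redis --network app-net redis:7
docker run -d --name app   --network app-net my-app-image
# Containers on the same user-defined bridge reach each other by name,
# e.g. the app connects to "redis:6379".
```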
So my questions are:
First, can I achieve this without Swarm, or is it still better to use Swarm for some reason?
Second, for the containers currently on server A, do all of them need to replace the current bridge network with a new overlay network? Or only the ones that need to talk to containers on server B, leaving the bridge network unchanged for the others? Or can one container join two networks (i.e. bridge and overlay)?
Swarm already encapsulates all the logic to create an overlay network across nodes. I have never checked how it actually does this under the hood. Without Swarm, you would need to perform all of those tasks yourself and figure out how to make the Docker network logic work with your setup.
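To illustrate how little is needed when you let Swarm handle just the overlay part, a rough sketch could look like the following (host names, IPs, the token and the network name are placeholders); the containers themselves can still be started with plain docker run:

```
# On server A: initialise a single-manager swarm just to get overlay networking
docker swarm init --advertise-addr <serverA-ip>

# On server B: join the swarm with the worker token printed by the command above
docker swarm join --token <worker-token> <serverA-ip>:2377

# On server A: create an overlay network that standalone containers may attach to
docker network create --driver overlay --attachable cross-host-net

# On either host: run plain containers attached to the overlay network
docker run -d --name worker1 --network cross-host-net my-worker-image
```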
Compared to Kubernetes, Swarm really is a simple orchestrator, in terms of ease of use but also in terms of features. While learning Swarm is like learning how to ride a bike, learning Kubernetes is more like learning how to fly different types of planes ^^ Though, if you require privileged containers, --device, setting ulimits, adding additional capabilities, or want to assign static IPs to your containers: up to Docker 19.03.x, Swarm is not able to do any of this. Oh, and Swarm takes disposable containers seriously: on every service start, the containers get re-created. Thus, you need to make sure the persistent data is stored in volumes and not accidentally in the copy-on-write layer of the container.
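As an illustration of the volume point, a swarm service would mount a named volume rather than rely on data written into the container filesystem (service, volume and image names here are made up):

```
# Data written to /data survives the container being re-created on service
# restart, because it lives in the named volume, not in the container's own layer.
docker service create \
  --name worker \
  --mount type=volume,source=worker-data,target=/data \
  my-worker-image
```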
Though, persistent storage can be challenging, as a volume declaration is local to a node. A swarm service will create that declaration on each node where a container requiring the volume is started for the first time. You will either need to use volumes backed by a remote share (NFS/CIFS) or a Docker volume plugin that serves your needs. But there is a way around this: to prevent a container from being started on a random node, you can add node labels to your nodes and use placement constraints in your service declarations to “stick” them to specific nodes. If a container always comes up on the same node, there is no need for volumes backed by a remote share.
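A sketch of that label/constraint approach, with a made-up label and node name:

```
# Label the node that holds the local volume data (node name is illustrative)
docker node update --label-add role=worker-storage serverB

# Pin the service to nodes carrying that label, so its local volume is always
# found on the same host
docker service create \
  --name worker \
  --constraint 'node.labels.role == worker-storage' \
  --mount type=volume,source=worker-data,target=/data \
  my-worker-image
```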
It pretty much depends on what the containers do. A container can participate in one or more networks, so only the containers that need to talk to server B have to join the overlay network, while the others can stay on the existing bridge; see the sketch below.
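For instance, an existing container on the bridge network can additionally be attached to an overlay network without being recreated (network and container names below are illustrative):

```
# The container keeps its existing bridge network and additionally joins the
# overlay network, so it can reach peers on both.
docker network connect cross-host-net redis
```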