How to build and manage a Swarm locally

Hi guys,

I’m using Docker Compose and Docker Desktop to test some services deployed on a single replica.

For some services that use port binding, I can’t spin up replicas because it obviously causes a port conflict… Docker Desktop is a single instance.

For this reason, I tried Multipass.

I wonder if I can build my image and execute my Compose file from my local machine, but still spawn the nodes with Multipass, and keep Docker Desktop for other tests that can run on a single instance?

Currently, I’m able to run an instance using a cloud-init file that installs Docker Engine and Docker Compose.

multipass launch -c 2 -m 2G -d 10G -n multipass-docker 22.04 --cloud-init multipass-docker.yml


# multipass-docker.yml
users:
  - name: boulard # system user
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3N... boulard
package_update: true
packages:
  - docker
  - avahi-daemon
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg
  - lsb-release
runcmd:
  - sudo curl -fsSL | sudo bash
  - sudo systemctl enable docker # start Docker at boot
  - sudo systemctl kill -s HUP ssh
  - sudo groupadd docker
  - sudo usermod -aG docker ubuntu

I can connect to it through SSH. I also created an additional docker context to be able to switch between “localhost” building and running vs a “multipass swarm” execution context.

docker context create multipass --description "Multipass Docker Desktop" --docker "host=ssh://ubuntu@multipass-docker.local"

Now I don’t really know how to proceed. I know that docker build stores images in the local image cache (not in a registry), so maybe I should set up a registry in the first Multipass instance…? And I don’t know how Docker Compose would work with the Multipass context…

Thanks for your help

Contrary to what you may think, the problem is not that Docker Desktop is a single instance. What is true for Docker Compose deployments with replicas does not apply to Swarm deployments (docker stack deploy). By default, Swarm uses the ingress routing mesh to publish ports: it binds a single host port and forwards incoming traffic to the virtual IP of the service, which then load-balances the traffic across the service tasks (= replicas).

Of course, you can create a single node swarm cluster in Docker Desktop, and can run as many replicas as you like.
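To illustrate the idea, a minimal single-node sketch could look like this (the service name `web`, the nginx image and the ports are just placeholders, not from your setup):

```shell
# Enable swarm mode on the single Docker Desktop node
docker swarm init

# A minimal stack file: one host port published via the ingress
# routing mesh, load-balanced over all replicas
cat > stack.yml <<'EOF'
services:
  web:
    image: nginx:alpine   # placeholder image
    ports:
      - "8080:80"         # ingress mode: one host port for all replicas
    deploy:
      replicas: 3
EOF

docker stack deploy -c stack.yml demo
docker service ls   # demo_web should converge to 3/3 replicas
```

Despite three replicas, only one host port (8080) is bound, so there is no port conflict.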

Though, if you bypass ingress mode and use host mode to publish the port, then yes: there can only be one task per node. That’s why people usually combine host mode with a global service (= one container on each node), and not with replicated services.
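For comparison, a host-mode port published by a global service would look roughly like this (the `metrics` service and its image are placeholders; node-exporter is just a typical example of a per-node service):

```yaml
services:
  metrics:
    image: prom/node-exporter   # placeholder: a typical per-node service
    ports:
      - target: 9100
        published: 9100
        mode: host    # binds the port directly on each node, bypassing the routing mesh
    deploy:
      mode: global    # exactly one task per node, so no port conflict
```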

When it comes to image handling: you cannot build container images on one host and use them on another host without effort. Either you run a container image registry, push the images to repositories in the registry, and pull them from the registry on the other host; or you save the images (export to a tar archive) on one host, copy the archive to the target, and load the images (import from the tar archive into the local image cache) before you create a container based on them.
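Both options sketched as commands (registry address, image name and target host are placeholders):

```shell
# Option 1: via a registry
docker build -t registry.local:5000/myapp:1.0 .
docker push registry.local:5000/myapp:1.0
# ...and on the other host:
docker pull registry.local:5000/myapp:1.0

# Option 2: via a tar archive
docker save -o myapp.tar myapp:1.0
scp myapp.tar user@otherhost:
# ...and on the other host:
docker load -i myapp.tar
```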

Thanks for your reply, that’s interesting. I don’t use host mode. I wasn’t aware that I could create a Swarm and deploy stacks with Docker Desktop. So first, I’ll try to run several tasks of my service in Swarm mode, then I’ll try the same using 2 Multipass VMs, just to understand how it works.

For the multipass option, I think I can:

  • create a manager node and a worker node
  • install docker-engine on the manager and worker nodes
  • install compose only on the manager node
  • create a registry on the manager node
  • mount a local volume containing the compose.yml file to the manager node and spawn the instances from there

Just to be sure: Compose does not perform swarm deployments; docker stack deploy does.

Shouldn’t there be a step to actually create the image and push it to the registry? Docker swarm deployments do not build images, they expect them to be available in the local build cache or to be pulled from a registry.

I don’t know why, but I thought the docker-compose-plugin package included the docker stack deploy command. Actually, I guess it’s just part of the CLI/API shipped with the docker package.

I guess I can still use Compose from Docker Desktop only for testing.

Yes, it’s definitely missing the image creation and the push to the registry.

  • Create a manager node and a worker node
  • Install docker-engine on the manager and worker nodes
  • Create a registry on the manager node
  • Mount a local volume containing the compose.yml file to the manager node
  • Create the images from the Dockerfiles and push them to the Registry (on the manager node)
  • Deploy the app to the multipass swarm (docker stack deploy)
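The steps above could be sketched roughly like this (VM IPs, the join token, and image names are placeholders you would replace with your own values):

```shell
# On the manager VM: create the swarm; this prints the worker join command
docker swarm init --advertise-addr <manager-ip>

# On the worker VM: join using the token printed above
docker swarm join --token <worker-token> <manager-ip>:2377

# On the manager: run a throwaway registry as a swarm service
docker service create --name registry --publish published=5000,target=5000 registry:2

# Build, tag and push the app image so every node can pull it
docker build -t 127.0.0.1:5000/myapp:1.0 .
docker push 127.0.0.1:5000/myapp:1.0

# Deploy the stack; the compose file must reference 127.0.0.1:5000/myapp:1.0
docker stack deploy -c compose.yml myapp
```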

I think I just need to make the multipass context point to the manager node, and I think it could work.

docker context update --docker "host=tcp://managerip:2376,ca=~/ca-file,cert=~/cert-file,key=~/key-file"  multipass-context

I can already feel it’s gonna be fun :sweat_smile: but now, thanks to your help, it seems like I have a plan :+1:

Sure. And you can still continue to use it on a node with swarm mode enabled, the same way you can still start containers using docker run. But it will always perform Compose deployments, not Swarm deployments.

The approach looks good to me.

Though, if you want to control the swarm from your host, you should create an additional context. You don’t need to use certificate-based authentication, unless you plan to expose the swarm’s Docker API port over the internet.
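Under that assumption, an SSH-based context is enough (the context name, user and hostname below are placeholders):

```shell
# No TLS setup needed: the Docker API is tunneled over SSH
docker context create multipass-swarm \
  --description "Multipass swarm manager" \
  --docker "host=ssh://ubuntu@swarm-manager.local"

docker context use multipass-swarm
docker node ls   # now talks to the manager's Docker engine
```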

Maybe just a note on keeping things simple.

We run multiple services with multiple instances on Docker Swarm. We don’t expose ports of the regular services; that’s not needed. They are all attached to Docker networks, so (for other Docker services) they are reachable via their internal container IP and port. No conflicts there.

If we need to expose a service externally, we use Traefik as reverse proxy, which will auto-detect services and forward requests to the targets internal IP and port (simple Traefik example, Swarm example).
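As an illustration of that pattern (router name, domain, network name and image are placeholders), such a service carries Traefik labels instead of a ports: section:

```yaml
services:
  chat:
    image: 127.0.0.1:5000/myapp:1.0   # placeholder image
    networks:
      - traefik-net
    deploy:
      labels:   # in swarm mode, Traefik reads labels from the service, not the container
        - traefik.enable=true
        - traefik.http.routers.chat.rule=Host(`chat.example.local`)
        - traefik.http.services.chat.loadbalancer.server.port=8090

networks:
  traefik-net:
    external: true   # the overlay network Traefik is attached to
```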


The context of my initial question is that I’m trying to build a chat application. Today it’s a Go web socket server that listens to port 8090. I’m trying to figure out how I could scale it out.

I understand those app nodes can still listen on the same 8090 port, but on different VMs using Multipass. But I also learned that I can create a swarm locally, so I need to test both options.

I need to ensure the load balancer forwards the connection to the same node based on the requester’s IP address. I heard about L3 load balancers, but Traefik, which is an L7 one (correct me if I’m wrong), can do the same job as well. I just need to dig into these two options.

Lastly, I will need to find a way to broadcast a message across the nodes in case the recipients are managed by different nodes in the swarm. Here, I’m thinking about storing the client ID/node pairs in memory, either in the load balancer or in another dedicated node. This node would expose a service that the other nodes would call when they receive a message like “Hey, send this message to the members of this chatroom ID”. It would then fetch the client IDs from a DB and look up the list of nodes to contact in its memory… we’ll see how it goes :sweat_smile:

Maybe you should read some of the tech stories about WhatsApp, how they scaled their software and infra with few resources.


Interesting @bluepuma77. I never actually replicated a service, I only replicated clients. All of my services listen on a host and a port. In a side project, the services are called by a custom Go gateway.

For now, I only have a service registry consisting of a simple config file, but I wonder how I will manage this when scaling out some services. I definitely can’t update the configuration manually each time a service scales out. I understand that Traefik is a reverse proxy that is able to forward requests to the relevant service. I’ve just read that it can make calls to a service registry like Consul, but I don’t know much more. To me, a service registry looks like a service to which other services register their host/port, and that exposes an endpoint returning a list of host/port pairs for a given service name.

Question is: how does a scaled-out service get its own hostname/IP from Docker in the context of a swarm? It’s probably too long to explain, but if you have some good links to read I’ll take them :D. Many thanks

User-defined networks already provide built-in DNS-based service discovery. The service can either use endpoint_mode: vip (the default) to make the DNS entry resolve to the service VIP, or endpoint_mode: dnsrr to resolve it to a multi-valued response with the IPs of each service task (= container), see: Compose file version 3 reference | Docker Docs. Even with endpoint_mode: vip, the multi-valued response of the container IPs can be queried using tasks.{servicename}.
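For instance, from any container attached to the same overlay network, you could check both entries (the service name `chat` is a placeholder):

```shell
# Resolves to the single virtual IP of the service (endpoint_mode: vip)
nslookup chat

# Resolves to one A record per running task/replica
nslookup tasks.chat
```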

If this doesn’t help in your situation, then you could use template placeholders with hostname to make the task slot (~=the replica id) part of the hostname:
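A sketch of what that could look like in a stack file (service name and image are placeholders):

```yaml
services:
  chat:
    image: 127.0.0.1:5000/myapp:1.0    # placeholder image
    hostname: "chat-{{.Task.Slot}}"    # e.g. chat-1, chat-2, … one per replica
    deploy:
      replicas: 3
```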

Template placeholders can also be used in environments or volumes.
