Hello, I'm new to Docker and after many weeks of reading and trying I am still confused and struggling to understand the best way to deploy my application. I hope to get some clarity here.
I would like to use Swarm across two physical hosts, with a setup where the database (manager) is shared by multiple web hosts (workers), so I can have hosts around the world and simplify my database backup and maintenance.
HOST 1 is my manager
It runs my database (MariaDB) with a simple mariadb:latest image
I use docker compose to launch it
HOST 2 is a worker
It runs two containers: my web application and nginx.
Both containers are built from custom Dockerfiles and the images are named mr/app:prod and mr/nginx:prod
I use docker compose to launch them
From here I have two main questions on the best way to do things
I use docker compose for configuration, as it is ideal with a .env file, but it seems I have to convert it for docker stack? Can Swarm use docker-compose files?
I understand my manager is supposed to create services on my worker
so I do docker service create --name mr mr/app:prod
But mr/app:prod is not found… So from my research I understand I have to create a registry, but all the tutorials I found use a third host… I tried that for days but could not make it work, and in the end I am thinking this is overcomplicated…
I am building my images locally, so I must be missing something. Is there a simple way to tell my worker to get its image locally? And if there is no way to do this, can I run the registry on my manager? But that would mean I now have to upload my worker Dockerfiles to my manager and build the images on my manager?
It can! But you would still want to migrate your compose file to use at least the deploy: element, to configure your deployment accordingly, before you deploy it with docker stack deploy -c docker-compose.yml {stackname} (of course you need to replace {stackname} with the correct stack name).
In recent versions, docker-compose and docker compose will pick up the new settings as well. Old docker-compose versions warned that these elements would be ignored and still deployed the compose file, just without them.
Note: Swarm mode deployments do not support the build: element.
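As a rough sketch (service and image names are just placeholders, and it assumes your image is already pushed to a registry), a swarm-ready compose file could look like this:

```yaml
version: "3.8"

services:
  app:
    # swarm ignores build:, so the image must be pullable from a registry
    image: registry.example.com/mr/app:prod
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == worker
```

The placement constraint is one way to keep the web containers on the worker only, matching the split you described.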
With multi-node deployments you will want to have a registry, as it is needed to distribute the images amongst the nodes. There is no need to use a third host, you can run it as a container. If you run Gitlab, Nexus or Artifactory in your environment, then you can use their container registry implementation - or check out Artifactory Container Registry or Harbor. You could also use docker save to save the image, copy it to the other node and use docker load to import it again - but you will quickly realize that this approach is everything but comfortable, and it is not recommended for this use case.
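For illustration (container name, port, host name and file name are arbitrary), the self-hosted registry and the manual save/load alternative look roughly like this:

```bash
# Run the official registry image as a plain container, e.g. on the manager
docker run -d --name registry --restart=always \
  -p 5000:5000 \
  -v registry-data:/var/lib/registry \
  registry:2

# The manual alternative: export, copy and import the image by hand
docker save mr/app:prod | gzip > mr-app-prod.tar.gz
scp mr-app-prod.tar.gz worker-host:/tmp/
ssh worker-host "gunzip -c /tmp/mr-app-prod.tar.gz | docker load"
```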
With a registry you can still use docker build or docker-compose build, but you have to prefix the tag with the fqdn part. So if your image would normally be created with a tag like my_group/my_awesome_repo:my_awesome_tag, it would change to fqdn/my_group/my_awesome_repo:my_awesome_tag or fqdn:port/my_group/my_awesome_repo:my_awesome_tag. The fqdn could be the hostname, a fully qualified domain name or an IP. The port is required if your registry does not listen on port 443.
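Applied to your images (registry.example.com:5000 stands in for your registry's address), the workflow would look something like:

```bash
# Build and push with the registry prefix in the tag
docker build -t registry.example.com:5000/mr/app:prod .
docker push registry.example.com:5000/mr/app:prod

# Reference the same name when creating the service, so every node can pull it
docker service create --name mr registry.example.com:5000/mr/app:prod
```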
Another thing to consider is that volumes are node-local, so if you have a service that can start a container on either one of your nodes, you need to use a named volume that points to shared storage (like an NFSv4 remote share) to be able to access the data from either one of the nodes.
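A minimal sketch of such a named volume in a compose file, assuming a hypothetical NFS server nfs.example.com exporting /exports/app-data:

```yaml
volumes:
  app-data:
    # the local driver can mount an NFS export, so every node sees the same data
    driver: local
    driver_opts:
      type: nfs
      o: "addr=nfs.example.com,nfsvers=4,rw"
      device: ":/exports/app-data"
```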
Thank you for the clarifications. There is a lot here for me to digest, but I'll work on what you suggested. I'm still convinced the registry is an unnecessary complication for most cases. I'll drop a line in the feature request section.
Sorry, but I still have a question about the private registry workflow.
Are devs usually generating their own certs and using registries as “insecure registries” (this is what I tried), or buying a domain name just for this and using a real cert like Let's Encrypt? The first solution does not seem production-ready to me and the second one seems kind of over the top if you are not a big business.
I don't understand where in this workflow the registry is supposed to be hosted and how the certs are managed.
In my example, am I supposed to host it on my manager or my worker? Neither option feels right to me, or am I missing something?
In larger companies devs leave it to their ops team to deploy and operate a registry with valid certificates. For smaller teams I would definitely recommend subscribing to Docker Hub and using private repositories instead of operating your own.
I feel you are overcomplicating things in your head. It really doesn't matter where the registry is operated, as long as it's reachable over the network and has certificates that are either issued by a known CA (like Let's Encrypt) or self-signed (which forces you to create your own CA and make it known on all hosts that need to trust it… this can become cumbersome). So take your pick: as a container in your swarm cluster on either the manager or the worker, or as a native installation on either one of those nodes or any other host that is reachable over the network. There is no coupling of any sort. The engine only interacts with the registry when it pulls/pushes an image.
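If you do go the self-signed route, the usual way to make the engine trust the registry (registry.example.com:5000 is a placeholder for your registry's address) is to drop the CA certificate into the engine's certs.d directory on every node:

```bash
# Make the self-signed CA known to the Docker engine on each node
sudo mkdir -p /etc/docker/certs.d/registry.example.com:5000
sudo cp ca.crt /etc/docker/certs.d/registry.example.com:5000/ca.crt

# Alternative (testing only): declare it as an insecure registry in
# /etc/docker/daemon.json and restart the daemon:
#   { "insecure-registries": ["registry.example.com:5000"] }
```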
If you want to run it in your own cluster: get yourself a domain (can be bought for less than $10 annually), register DNS names that point to the public IP where at least one of your nodes is reachable, and run a reverse proxy like Traefik that takes care of creating the Let's Encrypt certificates and forwarding the traffic based on domain name to the target container. And of course you need the target container.
Note: you need to make sure the DNS provider is supported by the Let's Encrypt DNS challenge if you want to issue wildcard certificates.
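A very rough sketch of how those pieces could fit together in one stack (Traefik v2 syntax; the domain, e-mail address and DNS provider are placeholders, and the DNS challenge additionally needs the provider's API credentials passed as environment variables):

```yaml
version: "3.8"

services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker.swarmmode=true
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.dnschallenge=true
      - --certificatesresolvers.le.acme.dnschallenge.provider=cloudflare
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - letsencrypt:/letsencrypt
    deploy:
      placement:
        constraints:
          # Traefik talks to the swarm API, so pin it to a manager
          - node.role == manager

  registry:
    image: registry:2
    deploy:
      labels:
        # in swarm mode Traefik reads labels from deploy.labels
        - traefik.enable=true
        - traefik.http.routers.registry.rule=Host(`registry.example.com`)
        - traefik.http.routers.registry.entrypoints=websecure
        - traefik.http.routers.registry.tls.certresolver=le
        - traefik.http.services.registry.loadbalancer.server.port=5000

volumes:
  letsencrypt:
```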