I’m using Docker and Docker Swarm to deploy a set of services. They are working perfectly, and I’m very happy with the setup in my tests, though not yet in production. However, I’m not that happy with the way I’m doing the deployment, because I feel there may be a smarter way.
Right now, I have a docker-compose.yml file with 3 declared services:
- 4 celery workers
I’m deploying this stack from CI/CD (GitLab) on every commit, using
- docker stack deploy -c docker-compose.yml $STACK_NAME
The only image that actually changes (the worker code change) is the celery-worker one.
So, my question is: is this the best way to deploy a stack?
Did you ever find a solution to this? I’m encountering the same issue. Using
docker stack deploy -c docker-compose.yml $STACK_NAME updates all the services in my stack, even though only one image is ever really changed.
I considered breaking each service into its own separate stack file and building only when a change is detected for a service using some git history, but there must be a simpler way.
Actually, kind of. I found that Swarm only updates the service whose image has been modified, not everything; you can check this by looking at the creation times.
Right now I have this setup working without any problem, so I would say that I’m happy enough and it seems to be the way to do it.
Are you pushing your images to a remote repo prior to running
docker stack deploy on your swarm? If so, are you only pushing individual images when a change is detected? Currently, I have a script that runs in Travis which determines whether an individual service has changed (using git diff). If a change was detected, the service is built, pushed to Docker Hub, and then deployed on the swarm.
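The detection part of that script looks roughly like this (a sketch: the service names and the one-directory-per-service layout are assumptions about my repo, not something Travis requires):

```shell
#!/bin/sh
# Sketch of CI change detection. Assumes each service lives in
# its own top-level directory named after the service.
set -eu

SERVICES="api celery-worker redis"   # hypothetical service directories

# Read changed file paths on stdin (e.g. from `git diff --name-only`)
# and print each service whose directory contains a change, once.
changed_services() {
    while IFS= read -r path; do
        top="${path%%/*}"
        for svc in $SERVICES; do
            [ "$top" = "$svc" ] && echo "$svc"
        done
    done | sort -u
}

# In CI this would be driven by git, e.g.:
#   git diff --name-only "$TRAVIS_COMMIT_RANGE" | changed_services
# and each printed service would then be built, pushed to the
# registry, and the stack re-deployed.
```

`TRAVIS_COMMIT_RANGE` is the commit range Travis exposes for the build; any equivalent range works.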
How does the swarm detect if an image has been modified?
Sorry for the delay, I just saw this. Swarm does it automatically: it detects that the image changed and deploys it. I assume it does this using the image digest (a SHA-256 hash).
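To make the hash idea concrete (this is an illustration of content-addressing in general, not Swarm's actual internals): any change to the content produces a different SHA-256 digest, while identical content always hashes the same, which is how a daemon or registry can cheaply tell whether an image changed.

```shell
#!/bin/sh
# Content-addressing sketch: different contents -> different digests,
# identical contents -> identical digests.
set -eu

d1="$(printf 'image contents v1' | sha256sum | cut -d' ' -f1)"
d2="$(printf 'image contents v2' | sha256sum | cut -d' ' -f1)"
d3="$(printf 'image contents v1' | sha256sum | cut -d' ' -f1)"

[ "$d1" != "$d2" ] && echo "changed contents -> new digest"
[ "$d1" = "$d3" ]  && echo "same contents -> same digest"
```

As far as I understand, `docker stack deploy` resolves image tags to registry digests when deploying (see its `--resolve-image` option), and a service is only restarted when that resolved reference differs.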