Docker compose up with remote context

I’m hoping to better understand how docker-compose up works with a remote context so that I can debug an issue where docker-compose up crashes my Jenkins instance without providing any errors. The pipeline I’m using can be summarised:

  • Build locally using docker-compose build
  • Tag and push images to private registry
  • Change docker context to remote host
  • Pull images from registry
  • List all images (successfully shows my images on the remote host)
  • And then deploy using docker-compose up

The issue I’m facing is that when the job reaches the docker-compose up command, Jenkins says that it is starting 2 of the 3 containers, but then crashes without errors. It makes the Jenkins UI unresponsive, and sometimes completely freezes the server until I reboot it.

My problem could be something separate, but I want to confirm that I’m using the docker-compose up command correctly, so I have a few questions:

  • How does docker-compose up use the docker-compose file? The docker-compose file isn’t on the remote host; it’s only stored locally. Do I need to copy it over, or does the docker context handle this?
  • I’ve retagged the images I’ve created in order to push them to my registry. How does docker-compose up reference the 3 images that I’ve created from the docker-compose file? Do I need a second docker-compose file that references the new image names in order to run the containers?

Good morning,

Your workflow is nearly fine.
Let me describe mine, which works (without using Jenkins - just pure CLI commands) - maybe it helps?

1 - I have a docker-compose.yml on my dev-computer with the build in it

version: '2.0'

services:
  app:
    build: ./build-app
    restart: unless-stopped
[...]

2 - After successful building and testing I tag the resulting image with docker tag <image-name> 192.168.0.12/<projectname-app>:latest and docker tag <image-name> 192.168.0.12/<projectname-app>:20210810 (with a timestamp so I have some versioning).

3 - I push these images to my registry docker push -a 192.168.0.12/<projectname-app>

4 - On the prod-computer I pull the image using docker image pull 192.168.0.12/<projectname-app> and verify that it is there with docker image ls

5 - On the prod-computer I have a docker-compose.yml, too. Without it, the docker-compose command does not know what to do :slight_smile: . But instead of a build: ./build-app entry, it reads image: 192.168.0.12/<projectname-app>. Some other things (especially environment variables or ports to forward to the outside world) might be different compared to my dev-computer, too.
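A minimal sketch of such a prod-side docker-compose.yml might look like this (the registry address and service name follow the earlier examples; the port mapping and environment variable are made-up placeholders, not part of the original setup):

```yaml
# docker-compose.yml on the prod-computer (sketch; values are illustrative)
version: '2.0'

services:
  app:
    # Instead of a build: key, reference the image pushed to the registry
    image: 192.168.0.12/<projectname-app>:latest
    restart: unless-stopped
    # Ports and environment may differ from the dev setup
    ports:
      - "80:8080"              # hypothetical port mapping
    environment:
      - APP_ENV=production     # hypothetical variable
```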

6 - When I run docker-compose up -d on my prod-computer it (re-)starts the containers using the images which I pulled from my private registry in step 4. If it is not working as expected, I use docker-compose logs or docker logs <containername|containerid> to check what happened.

7 - After a successful (re-)start and testing of the containers on my prod-computer I clean up old unused images - either by deleting them one by one with docker image rm <imageid> or by removing all dangling images with docker image prune.

docker-compose is just a wrapper around the Docker Engine API, and coordinates single-node, multi-container orchestration.

It works because your local docker-compose process instructs the remote Docker Engine API to perform the tasks. Uninstall docker-compose on the remote machine and try again: it will still work.

Word of warning: never expose the Docker Engine port on a host interface with a public IP unless you have enabled certificate-based authentication!

docker-compose identifies the image by the value that you assigned to the “image:” key in your compose file. If you publish the image to a private registry, you need to include the {server fqdn or ip} in the name.

Whatever name you use with docker pull on the remote machine is exactly what must be specified under image: in the compose file.
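Concretely, if the remote host pulled the image as 192.168.0.12/<projectname-app>:latest, the compose file must use that same name (sketch; names follow the examples above):

```yaml
# After: docker pull 192.168.0.12/<projectname-app>:latest
# the compose file must reference exactly that name:
services:
  app:
    image: 192.168.0.12/<projectname-app>:latest
```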

Thank you both for your replies, this has really helped my understanding.

@matthiasradde - your approach is really interesting: rather than using docker contexts, you use pure CLI commands and have a second, separate docker-compose file that references the newly tagged images on the production server. This is really neat, as it means I can continue to use my existing docker-compose file to build on the build server.

@meyay - thanks for your help here, this makes a lot of sense. So when using docker contexts, the docker-compose process uses the local docker-compose file and just instructs the remote host’s Docker Engine API to perform the commands rather than the local one.

I think the bulk of my issue is that my docker-compose file doesn’t reference the newly tagged images. So I either need to use a second docker-compose file on the production server and follow a process like Matthias’s, or I can continue using docker contexts, build using docker build rather than compose, and change the single docker-compose file to reference the new images in order to run the containers. (I will take the second option, as it’s closer to my current workflow.)
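For the second option, a single compose file can carry both keys: when a service has both build: and image:, docker-compose build tags the resulting image with the image: name, so the same file can be used for building locally and for docker-compose up against the remote context. A sketch, assuming the registry address and paths from the earlier examples:

```yaml
# Single docker-compose.yml for both build and deploy (sketch)
version: '2.0'

services:
  app:
    build: ./build-app                            # used by docker-compose build locally
    image: 192.168.0.12/<projectname-app>:latest  # tag used for push/pull and by compose up
    restart: unless-stopped
```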

Thanks both for your replies.