Advice on how to manage a network of containers and automated building

First of all, apologies if there is a better place to post this.

I’m wondering if the system we are currently planning on using is “good”, or if better methods can be used. Let me first explain the situation as I’d like it to be.

Each “application” consists of a group of dockers, with a single nginx docker exposing a port. That nginx acts as a reverse proxy, forwarding requests to the other containers inside the network.

<nginx-reverse-proxy> -- <static-data-server (nginx)>
                      -- <dynamic server 1 (/api/shop)>
                      -- <dynamic server 2 (/api/whatever)>

One of the important things for us is that we can easily add “modules”, i.e. extra dynamic/static servers can be plugged in without a lot of modification.

Currently the reverse proxy container reads an environment variable, which “notifies” a launcher script (a bash script); the launcher script then modifies sites-enabled to create the reverse proxy locations before starting nginx.
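To make the idea concrete, here is a minimal sketch of such a launcher. The MODULES variable format, the service names, and the target paths are all made up for illustration; they are not your actual setup:

```shell
#!/bin/sh
# Sketch of a launcher script. Assumes a hypothetical MODULES env var of the
# form "path:host:port,path:host:port", e.g. "shop:shop-svc:8080".
generate_locations() {
    modules=$1
    old_ifs=$IFS
    IFS=','                        # split the module list on commas
    for entry in $modules; do
        path=${entry%%:*}          # e.g. "shop"
        rest=${entry#*:}           # e.g. "shop-svc:8080"
        host=${rest%%:*}           # e.g. "shop-svc"
        port=${rest#*:}            # e.g. "8080"
        printf 'location /api/%s/ { proxy_pass http://%s:%s/; }\n' \
            "$path" "$host" "$port"
    done
    IFS=$old_ifs
}

# In the real container this output would go into sites-enabled and nginx
# would be started afterwards, e.g.:
#   generate_locations "$MODULES" > /etc/nginx/conf.d/modules.conf
#   exec nginx -g 'daemon off;'
generate_locations "shop:shop-svc:8080,auth:auth-svc:8081"
```

The point of keeping the generation in one pure function is that adding a module becomes a change to one environment variable rather than an edit to the nginx config itself.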

The problem is with the dynamic servers: right now we add and launch these manually after they pass testing, using docker build followed by docker run. This is error prone and quite slow (for each update we have to manually stop the docker, rebuild it and run it again). The next step is to automate the building with a python script, but I am wondering how and what should be done there.

We could start with a docker-compose file. However, this would initially just define the network; a python script would update the compose file as more Dockerfiles are added. That leaves one big point: whenever a service module is updated, I wish to reload only that module’s code, without stopping the other modules.
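As a sketch of what such a generated file could look like (the service and directory names here are made up for illustration):

```yaml
version: "3.7"

services:
  reverse-proxy:
    build: ./reverse-proxy
    ports:
      - "80:80"

  # One entry per module; the python script could append blocks like this one.
  shop:
    build: ./modules/shop
```

Compose puts all services on a shared default network, so the reverse proxy can reach each module by its service name.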

I understand that docker-compose up is supposed to do this. But can this command pick up modifications to the source folder alone? I.e.:

# Dockerfile
FROM alpine
COPY ./source/build/ /source/

# docker-compose.yml
version: "3.7"
services:
  my-service:   # placeholder service name
    build:
      context: .
      dockerfile: Dockerfile
In the above situation, would a change in the build directory be picked up by docker-compose up? And if it isn’t, how do I tell that a specific service has an updated source directory?

I always feel that mixing build and run configurations in a docker-compose.yml is dirty. Your docker-compose.yml should be as close to the production environment as it gets…

Instead of building them implicitly in your docker-compose.yml, I would advise building them in a separate step and pushing them to an image repository. If your tags are immutable, update your docker-compose.yml to use the newly created tag and execute docker-compose up -d: only services with configuration changes (changing the image tag is such a change) will be updated. If your tags are mutable, you might want to run docker-compose pull before you execute docker-compose up -d.
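For example (registry host, image name and tag are placeholders), a deployment then becomes a one-line change in the compose file:

```yaml
services:
  shop:
    # Immutable tag: releasing a new version means bumping this tag and
    # running `docker-compose up -d`; only this service gets recreated,
    # the other modules keep running untouched.
    image: registry.example.com/myapp/shop:1.4.2
```

This also answers the earlier question: compose does not watch your source directory; it reacts to changes in the service configuration, and the image tag is the natural place to express “new code”.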

Instead of using nginx as reverse proxy you might want to take a look at Traefik. It lets you put labels on the target services, and Traefik uses those labels to update the reverse proxy rules.
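A sketch of what that looks like with Traefik v2 label syntax (router name, image and port are illustrative):

```yaml
services:
  shop:
    image: registry.example.com/myapp/shop:1.4.2
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.shop.rule=PathPrefix(`/api/shop`)"
      - "traefik.http.services.shop.loadbalancer.server.port=8080"
```

With this approach, adding a module means adding a labeled service; no reverse proxy config files or launcher scripts need to change.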

One more thing: a runtime instance of an image is a docker container or just a container, but it is not a “docker”.

Well, can an image repository be kept private? As in, hosted on our own servers, instead of using the Docker Hub service?

There are many open source solutions for self-hosted private image repositories: Harbor, JFrog Container Registry, Portus, Sonatype Nexus3 (has a built-in repo), GitLab (has a built-in repo), and the official Docker registry.

All these solutions can be run as Docker containers.

If you already have Nexus3 or GitLab in your environment, just enable, configure and use their built-in image repo. Otherwise I would recommend either Harbor or JFrog Container Registry: both support vulnerability scanning of your images and have built-in access control. If access control alone is enough, Nexus3 is easy to deploy and configure.

If you do not even need access control, the Docker registry is a bare-bones repo: everyone able to reach the registry will be able to pull and push images. You would need to add another service for access control…
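For completeness, a minimal sketch of running the official registry via compose (port and volume name are assumptions):

```yaml
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
    volumes:
      - registry-data:/var/lib/registry

volumes:
  registry-data:
```

Images would then be tagged with the registry host (e.g. localhost:5000/myapp/shop:1.4.2) before being pushed.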