Git-based deployment & common `docker-compose` vs. `compose.yaml` template

After being a developer for more than 15 years and using many containerized apps, I decided to poke the bear and start properly containerizing my own projects as well. However, after reading about best practices and looking into many open-source & commercial projects… I'm left confused.

Two incompatible ways?

It appears there are two ways of providing a docker-compose.yaml-based stack, and thus two ways of deploying it, each with consequences for development:

  1. Pre-built images & template docker-compose

    • Building images
      • Images built by external process and uploaded to some registry (public or private), potentially for different architectures (ARM vs. x86)
      • Build process decoupled from docker-compose
    • Docker-compose
      • Single docker-compose.yaml in the root of the repo
      • References an image and has no build section at all
      • The dev config is seemingly the same as the prod one, and serves more as a template for the user to grab & edit per deployment (sketched after this list)
      • Appears way simpler for the user to understand, but developers then use a completely different process, as the code baked into the production images isn’t useful for development. This creates a situation where running the app during development has little to do with actually running it when deployed?
    • Deployment
      • A single docker-compose.yaml is deployed to the server; no code is pulled onto the server
      • Normal down+up process re-initializes the app
  2. Base compose + per-environment overrides

    • Organization
      • Base docker-compose.yaml config with common settings for dev & prod
      • docker-compose.{dev,prod}.yaml files that enforce/encourage some changes (also sketched after this list), e.g.:
        • dev: whole code + runtime configs are mounted, cache & upload data is left in container
        • prod: code & configs baked into the image, upload data forced to be mounted outside/in volume
    • Deployment:
      • git pull & rebuild on production server using compose
      • Code stays in a folder on the production server and is also baked into the image, making the checked-out folder a bit of a useless/confusing artifact
    • Such a docker-compose setup seems a bit confusing?
      • Assumes that the docker-compose YAMLs used in production will generally be 1:1 with the repo
      • You cannot just grab a single file from documentation and deploy
      • Small changes for given deployment possible via docker-compose.override.yaml that isn’t committed
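
To make the two layouts concrete, here is a minimal, hypothetical sketch of each; the image name, paths and ports are made up:

```yaml
# Approach 1: a single template docker-compose.yaml that references a prebuilt image,
# with no build section. Users grab this file and adjust it per deployment.
services:
  app:
    image: ghcr.io/example/myapp:1.2.3
    ports:
      - "8080:80"
    volumes:
      - uploads:/var/www/html/uploads

volumes:
  uploads:
```

```yaml
# Approach 2: a base file plus per-environment overrides
# (two files shown together, separated by the file-name comments).

# docker-compose.yaml (base, shared settings)
services:
  app:
    build: .
    environment:
      APP_ENV: ${APP_ENV:-prod}

# docker-compose.dev.yaml (mounts the working copy into the container)
# run with: docker compose -f docker-compose.yaml -f docker-compose.dev.yaml up
services:
  app:
    volumes:
      - ./:/var/www/html
```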

In the wild

Most open source projects appear to follow the 1st approach. Indeed, most of the apps I used provide a big template docker-compose.yaml file that you grab, modify for your needs, and use. However, I have myself seen many internal commercial projects use the 2nd approach, to ensure development on local machines is as close to the production deployment as possible. Moreover, looking at e.g. one of the biggest PHP frameworks’ recommendations, they follow the 2nd approach, despite encouraging people to build open-source apps for the community to use :wink:

Am I missing the forest for the trees here?

I have been using Docker for many years, and some open source images, too. Usually the projects provide an .env.example you can adapt.

The compose override I have seen mentioned here in the forum maybe once a year, so it seems barely used.

But it might be used a lot in corporate environments with bigger teams, lots of fine-tuning of many micro-services, where they share the knowledge and use cases among each other, so they don’t need to ask here.

As always, it all depends on the project and the developer. There are multiple ways to do the same thing. I used the override file a lot in the past, because I could change the network, open ports, add labels for a reverse proxy, or even swap in a specific image during development. It is good at the beginning, but it can get complicated later: it is easier to understand a single file than multiple files where you don’t see the parts that the override file depends on. And you can override something without noticing, because you only find it in the main file.

Of course, you can use docker compose config to generate a single compose yaml. You can save it or just use it to see what Docker Compose will see in the end, but when you have a really complicated setup, a template system could be a better choice.
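
A quick illustration of that `docker compose config` idea, assuming the usual file names:

```shell
# Merge the base file with an explicit override and inspect or save the result
docker compose -f docker-compose.yaml -f docker-compose.prod.yaml config > rendered.yaml

# With a docker-compose.override.yaml next to the base file, it is picked up automatically
docker compose config
```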

I used to generate compose files from Ansible. I still had multiple files to understand, but at least I could decide where and how to put them, and in the end I had a single file. With Ansible, I didn’t even need to generate a compose file. I could just use Ansible and let its docker module send the request to the Docker API.
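
A rough sketch of what that Ansible approach can look like, skipping Compose entirely; this assumes the community.docker collection is installed, and the container/image names are made up:

```yaml
# Playbook task: let Ansible talk to the Docker API directly instead of generating a compose file
- name: Run the application container
  community.docker.docker_container:
    name: myapp                          # hypothetical container name
    image: ghcr.io/example/myapp:1.2.3   # hypothetical image
    state: started
    restart_policy: unless-stopped
    ports:
      - "8080:80"
    env:
      APP_ENV: "prod"
```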

I use a dot env file too, but it is not limited to a single name. Any file name can be passed on the command line if you have a script that calls the docker command and does other things before and/or after it. Of course, if you are generating the entire compose file, you could just generate the environment variables right into the compose file, but sometimes a separate dot env file can make the compose project more readable.
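
For example (the file names here are just placeholders):

```shell
# Compose reads .env next to the compose file by default
docker compose up -d

# A wrapper script can point it at any other env file explicitly
docker compose --env-file .env.prod up -d
```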

Regarding prebuilt images and mounted source code, depending on the project again, you could have preinstalled dependencies in the image, and the development version could differ only in the source code if it is an interpreted language. That is the easy part. Mounting the dev code is then possible, and you can override the entire folder in the container. It doesn’t matter that the prod source code was already in it. But you can also have multiple images: one base image without your application, that you can use during development, and one with the application to deploy automatically through a CI/CD pipeline too.
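
A minimal sketch of that dev override, with hypothetical image and path names; the bind mount simply shadows whatever source code is baked into the image:

```yaml
# docker-compose.dev.yaml
services:
  app:
    image: myproject/app-base:latest   # base image without the application code
    volumes:
      - ./src:/var/www/html            # working copy overrides the folder in the container
```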

Sometimes you need different dependencies during development, and you need to add those to the image. That could be another image based on the base image, like myproject/app-base, myproject/app and myproject/app:dev. That would mean you have different dependencies for dev and prod, so it is possible that something would work in dev but not in prod, or the other way around, just because of the dependencies. But it is still better than having a single machine with all the dependencies of all the projects on it for all of their stages.
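
With multi-stage builds, all three of those images can come from one Dockerfile; a hedged sketch with illustrative stage names and packages:

```dockerfile
# base: shared runtime dependencies, no application code
FROM php:8.3-fpm-alpine AS base
RUN docker-php-ext-install pdo_mysql opcache

# prod: application code baked in on top of the base
FROM base AS prod
COPY src/ /var/www/html/

# dev: extra development dependencies; the code is bind-mounted at run time
FROM base AS dev
RUN apk add --no-cache git
```

The tags then map to build targets, e.g. `docker build --target base -t myproject/app-base .`, `docker build --target prod -t myproject/app .` and `docker build --target dev -t myproject/app:dev .`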

So if having override files works for you, that’s okay. If you feel it starts to be hard to manage, you can think of other ways. Complicate things only as much as you need :slight_smile:

@bluepuma77: the .env[.*] is not really the issue here. I always include .env as a “template” for all possible variables, with an additional .env.prod containing some recommendations. The issue is more with how to structure the docker-compose stack we give to users.


@rimelek Thank you so much for a long and thoughtful answer here!


As always, it all depends on the project and the developer. (…) I used the override file a lot in the past, because I could change the network, open ports or add labels for a reverse proxy (…)

This has been my experience as well, and thus I am looking reluctantly at using an override file as a committed file. So I guess the worry wasn’t unfounded here.


(…) but when you have a really complicated setup, a template system could be a better choice.

While I realize this is an option, and many projects (e.g. the TrueNAS apps catalog) have chosen that way of distribution, I am not sure how it fits the bigger ecosystem. I believe it increases friction more than it helps potential users. In other words, when I see a project in e.g. RoR, which I have zero experience in, I am more confused by the specific tooling around it.


I used to generate compose files from Ansible. (…) I could just use Ansible and let its docker module send the request to the Docker API.

This is similar to a solution I used in commercial projects. However, if I understood you correctly, I don’t think this really fits a sharable open-source project? Ideally, I would give users a single docker-compose.yaml file and call it a day. However, it rubs me a bit the wrong way that the development pipeline would not be using the same compose, as users will be running a production build of the container.


Regarding prebuilt images and mounted source code, depending on the project again, you could have preinstalled dependencies in the image and the development version could be different only in the source code if it is an interpreted language. That is the easy part. Then mounting the dev code is possible and you can override the entire folder in the container. It doesn’t matter if you had a prod source code in it, but you can also have multiple images. One base image without your application, that you can use during development, and one with the application to deploy automatically through a CI/CD pipeline too.

Ok, this will be more of the meat of my response.
Initially I was thinking about dumping all the tools into the final image and calling it a day… but this is not how we should do engineering, right? :wink: Hopefully following best practices, I came up with this structure for a project that includes a web component and a CLI, and offers deployment on an Alpine Linux base for small devices as well as a faster & full-fledged Debian base:


[Diagram of the image hierarchy: dashed lines represent COPY, solid ones represent FROM]

I think my understanding of multi-stage best practices here should be correct (a Dockerfile sketch follows the list):

  • php_base contains all runtime dependencies for the application, i.e. pre-compiled PHP engine extensions
  • app_run_prod is a thin shell over the base and as a layer contains pretty much just interpretable source code & precompiled assets
  • app_build is used as a build-throwaway container; it contains all tooling needed to precompile & prebuild app’s components that don’t change
  • app_run_dev contains binds for source code + development tools
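
In Dockerfile terms my plan looks roughly like this; the stage names match the diagram, everything else (PHP version, extensions, paths) is illustrative, and only the Alpine branch is shown:

```dockerfile
# php_base: runtime dependencies only, i.e. pre-compiled PHP extensions
FROM php:8.3-fpm-alpine AS php_base
RUN docker-php-ext-install pdo_mysql opcache

# app_build: throwaway stage with all the build tooling
FROM php_base AS app_build
COPY --from=composer:2 /usr/bin/composer /usr/local/bin/composer
WORKDIR /build
COPY . .
RUN composer install --no-dev --optimize-autoloader

# app_run_prod: thin layer over php_base containing only the built application
FROM php_base AS app_run_prod
COPY --from=app_build /build /var/www/html

# app_run_dev: base plus development tools; source code is bind-mounted via compose
FROM php_base AS app_run_dev
COPY --from=composer:2 /usr/bin/composer /usr/local/bin/composer
RUN apk add --no-cache git
```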

This is sort of pointing me towards the 2nd approach from the 1st post: the repository contains everything needed to build any of the images. The main compose file references a default image via an environment variable that can be used to override it during development. In addition, I am planning to include a bake file (sketched below) to make building easier during development.
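
The bake file itself could be a simple HCL file along these lines (target and tag names are placeholders):

```hcl
# docker-bake.hcl
group "default" {
  targets = ["app-prod"]
}

target "app-prod" {
  dockerfile = "Dockerfile"
  target     = "app_run_prod"
  tags       = ["ghcr.io/example/myapp:latest"]
  platforms  = ["linux/amd64", "linux/arm64"]
}

target "app-dev" {
  dockerfile = "Dockerfile"
  target     = "app_run_dev"
  tags       = ["myapp:dev"]
}
```

Then `docker buildx bake app-dev` builds just the development image.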


I guess the main point here is that the build system gets quite complicated, but to be fair, I am stuffing 2 different OSes and 3 versions (dev, debug, prod) of the image in here. For smaller, CLI-only tools, I could easily reduce the whole thing to a php_base_alpine derived straight from php, with no build container at all :thinking:
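
Something like this minimal Dockerfile would probably cover that case (PHP version, extension and entrypoint are made up):

```dockerfile
# php_base_alpine: derived straight from the official php image, no build stage
FROM php:8.3-cli-alpine AS php_base_alpine
RUN docker-php-ext-install pcntl
WORKDIR /app
COPY . .
ENTRYPOINT ["php", "bin/console"]
```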