Docker-compose: same config for dev and production, but enable code sharing between host and container only in development

I’ve asked the same question on SO, but I feel that people on this forum are more focused on this topic. Here is the SO link, (link), and I’m copying the question here.

Since the most important benefit of using Docker is keeping the dev and prod environments the same, let’s rule out the option of using two different docker-compose.yml files.

Let’s say we have a Django application, we use gunicorn as the application server in production, and we have a dedicated apache2 as a reverse proxy (this apache2 is outside of Docker by design). So this application (docker-compose) has only two parts, web (Django) and db (MySQL). There’s nothing wrong with the db part.
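To make this concrete, the compose file for such a setup might look roughly like this (the service layout matches the description above; the gunicorn invocation, port, and MySQL settings are illustrative assumptions):

```yaml
# docker-compose.yml -- minimal sketch of the two-service setup
version: "3"
services:
  web:
    build: .
    # gunicorn serves the app; "myproject" is a placeholder module name
    command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
    ports:
      - "8000:8000"   # the host's apache2 reverse-proxies to this port
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credentials
    volumes:
      - db_data:/var/lib/mysql       # persist the database between restarts
volumes:
  db_data:
```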

For the Django part, the dev routine without Docker would be using a venv and python3 manage.py runserver, or whatever shortcut an IDE provides. We can happily change our code, and the dev server is smart enough to pick up the changes and reflect them in no time.

Things get tricky when Docker comes in, since all source code should be packed into the image. This gives our devs the big overhead of recreating the image and container again and again. One might come up with the following solutions (which I find inelegant):

  • In docker-compose.yml, use a volume to mount the source code folder into the container, so that all changes in the host source folder are automatically reflected inside the container and gunicorn picks them up (see the sketch after this list). This does remove most of the container-recreation overhead, but we can’t use the same docker-compose.yml in production, as it introduces a dependency on the source code being present on the host server.

  • I know there is a command-line option to mount a host folder into a container, but to my knowledge this option only exists for docker run, not for docker-compose. So using a different command to bring the service up in each environment is another dead end. (I am not 100% sure about this as I’m still quite new to Docker, so please correct me if I’m wrong.)
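For reference, the first option amounts to a single extra stanza on the web service (assuming the image copies the code to /app; both the path and the mount are illustrative):

```yaml
services:
  web:
    # ... build, command, ports as in the base file ...
    volumes:
      - .:/app   # host source folder shadows the code baked into the image
```

Note that gunicorn only picks up code changes like this when started with its --reload flag; Django’s runserver reloads by default.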

TL;DR: How can I set up my environment so that

  1. I use one single docker-compose.yml for both dev and prod
  2. I can develop with live code changes easily, without recreating the Docker container

Thanks a lot!

The thing is: docker-compose does not cover control flow. Things would be easy if it did :wink:

But there might be another way to at least soften the pain:
You can define a production docker-compose.yml and a docker-compose.dev.yml, then use
docker-compose -f docker-compose.yml -f docker-compose.dev.yml so that the values in the .dev.yml override those in the base file.

see: https://docs.docker.com/compose/extends/
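For example, keeping all dev-only differences in the second file (a sketch; the /app path and the runserver command are assumptions carried over from the question):

```yaml
# docker-compose.dev.yml -- only the dev-specific overrides
version: "3"
services:
  web:
    command: python3 manage.py runserver 0.0.0.0:8000   # auto-reloading dev server
    volumes:
      - .:/app   # live-mount the host source over the code in the image
```

Production then uses a plain docker-compose up (reading only the base file), while dev runs:

```
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
```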

Btw., a docker-compose.yml v2.x is usually translatable into a docker run command and vice versa.
Still, don’t go the docker run route: your dev team should use the exact same approach, or at least a very close one, to what your operations team will use later.


Ok… this syntax is not supported in version 3… I guess I have to use slightly different docker-compose files in the two environments… Thanks!

Using a base docker-compose.yml (shared by all environments) together with the override functionality works quite nicely. If you then use scripts that take into consideration where they are run, it is really nice for small to medium-sized deploys.
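Something along these lines, for instance (a sketch; the ENV variable and file names are made up for illustration):

```sh
#!/bin/sh
# up.sh -- start the stack with the override that matches the current environment
ENV="${ENV:-dev}"   # default to dev when ENV is not set

if [ "$ENV" = "prod" ]; then
    docker-compose -f docker-compose.yml up -d
else
    docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
fi
```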

Imho, the dev environment is an "unmanaged" environment, and there you do not use an image. Whenever you want to deploy something to a "managed" environment like staging, QA, or prod, you should build the image in the CI environment, tag it with a unique tag, and put that tag into the docker-compose.yml. It is easy to write a script that creates these files based on the status of the CI build.
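A minimal sketch of such a CI step (the image name, registry, and the sed-based templating are assumptions, not a prescription):

```sh
#!/bin/sh
# ci-deploy.sh -- build the image, tag it uniquely, render the compose file
TAG="$(git rev-parse --short HEAD)"   # one unique tag per CI build

docker build -t registry.example.com/myapp:"$TAG" .
docker push registry.example.com/myapp:"$TAG"

# generate the deployable docker-compose.yml from a template
sed "s|{{TAG}}|$TAG|" docker-compose.tmpl.yml > docker-compose.yml
```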