After being a developer for more than 15 years and using many containerized apps, I decided to poke the bear with my own projects and start properly putting them in containers as well. However, after reading up on best practices and looking into many open-source & commercial projects… I came away confused.
Two incompatible ways?
It appears there are two ways of providing a `docker-compose.yaml`-based stack, and thus two ways of deployment, each with consequences for development:

- Pre-built images & template `docker-compose`
  - Building images
    - Images are built by an external process and uploaded to some registry (public or private), potentially for different architectures (ARM vs. x86)
    - The build process is decoupled from `docker-compose`
  - Docker-compose
    - A single `docker-compose.yaml` in the root of the repo
    - It references an image and has no `build` section at all
    - The `dev` config is seemingly the same as the `prod` one, and serves more as a template for the user to grab & edit per deployment
    - Appears way simpler for the user to understand, but developers then use a completely different process, as the code baked into the production images isn't useful for development. This creates a situation where running the app during development has little to do with actually running it when deployed? (A minimal template is sketched after this list.)
  - Deployment
    - A single `docker-compose.yaml` is deployed to the server; no code is pulled onto the server
    - A normal down+up cycle re-initializes the app
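For the first approach, this is a minimal sketch of what such a template might look like; the image name, port, and paths are placeholders I made up, not taken from any real project:

```yaml
# docker-compose.yaml: template the user grabs and edits per deployment
services:
  app:
    image: ghcr.io/example/myapp:1.2.3   # pre-built image from a registry; no `build` section
    restart: unless-stopped
    ports:
      - "8080:8080"
    environment:
      - APP_ENV=production
    volumes:
      - app_data:/var/lib/myapp          # persistent data kept in a named volume

volumes:
  app_data:
```

Deployment is then just copying this one file to the server and running `docker compose up -d`; `docker compose pull && docker compose up -d` picks up a newer image, and a plain `docker compose down && docker compose up -d` re-initializes the stack.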
- Base compose + per-environment overrides
  - Organization
    - A base `docker-compose.yaml` with common settings for dev & prod
    - `docker-compose.{dev,prod}.yaml` files that enforce/encourage some changes (a sketch of the three files follows this list), e.g.:
      - `dev`: the whole code + runtime configs are mounted, cache & upload data is left in the container
      - `prod`: code & configs are baked into the image, upload data is forced to be mounted outside/in a volume
  - Deployment:
    - `git pull` & rebuild on the production server using compose
    - Code stays in a folder on the production server and is also baked into the image, making the code in the folder a bit of a useless/confusing artifact
    - Such a `docker-compose` setup seems a bit confusing?
      - It assumes that the `docker-compose` YAMLs used in production will generally be 1:1 with the repo
      - You cannot just grab a single file from the documentation and deploy
    - Small changes for a given deployment are possible via a `docker-compose.override.yaml` that isn't committed
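For the second approach, here is a rough sketch of how the three files could be split; the service names, paths, and volume names are assumptions for illustration only. The base file holds what dev and prod share:

```yaml
# docker-compose.yaml: settings shared by dev & prod
services:
  app:
    environment:
      - DB_HOST=db
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=change-me
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:
```

The dev overlay mounts the working copy into the container:

```yaml
# docker-compose.dev.yaml: whole code + runtime configs mounted, data left in the container
services:
  app:
    build: .
    volumes:
      - ./src:/app/src        # live code from the working copy
      - ./config:/app/config  # runtime configs
```

The prod overlay bakes code & configs into the image and only mounts user data:

```yaml
# docker-compose.prod.yaml: code & configs baked into the image, uploads in a volume
services:
  app:
    build: .
    restart: unless-stopped
    volumes:
      - uploads:/app/uploads  # only user data is mounted

volumes:
  uploads:
```

Deployment then becomes `git pull` followed by something like `docker compose -f docker-compose.yaml -f docker-compose.prod.yaml up -d --build`, while developers run the same base with the dev overlay. When no `-f` flags are given, Compose also merges an uncommitted `docker-compose.override.yaml` automatically, which is where the small per-deployment tweaks mentioned above can live.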
In the wild
Most open source projects appear to follow the 1st approach. Indeed, most of the apps I used provided a big template `docker-compose.yaml` file that you grab, modify for your needs, and use. However, I have myself seen many internal commercial projects using the 2nd approach, to keep development on local machines as close to the production deployment as possible. Moreover, looking at e.g. one of the biggest PHP frameworks' recommendations, they follow the 2nd approach, despite encouraging people to build open-source apps for the community to use.
Am I missing the forest for the trees here?