I have several applications, deployed to separate Docker repositories, that share a lot of very similar configuration (nginx, rsyslog, etc.). So similar, in fact, that I keep a single Git repository containing the Dockerfile and the config files shared by these applications, with deployment-specific metadata represented as template variables.
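For concreteness, a templated nginx config in that shared repo might look something like this (the `{{…}}` placeholder syntax and variable names here are made up for illustration):

```nginx
server {
    # Filled in per deployment at build time
    listen {{PORT}};
    server_name {{SERVER_NAME}};

    location / {
        proxy_pass http://127.0.0.1:{{APP_PORT}};
    }
}
```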
Currently, my build process goes something like this:
- Build and test app code
- Clone the docker-build repo, which contains the Dockerfile and other configs
- Replace template variables in the config files with deployment-specific values
- Copy the build artifacts into the docker-build repo
- Build and push the Docker image
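The variable-replacement step can be as simple as a `sed` pass. A minimal, self-contained sketch — the `{{VAR}}` placeholder syntax, file paths, and values here are all hypothetical:

```shell
#!/bin/sh
set -eu

# Stand-in for a templated config file from the docker-build repo
cat > /tmp/nginx.conf.tpl <<'EOF'
server {
    listen {{PORT}};
    server_name {{SERVER_NAME}};
}
EOF

# Deployment-specific values (would come from per-app metadata)
PORT=8080
SERVER_NAME=app1.example.com

# Replace each {{VAR}} placeholder with its value
sed -e "s/{{PORT}}/${PORT}/g" \
    -e "s/{{SERVER_NAME}}/${SERVER_NAME}/g" \
    /tmp/nginx.conf.tpl > /tmp/nginx.conf

cat /tmp/nginx.conf
```

(`envsubst` from GNU gettext does the same job if the templates use `${VAR}` syntax instead.)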
I’ve noticed that most projects tend to keep the Dockerfile in the same Git repository as the application itself. I like this approach, but in my case it would mean too much duplication across applications. If I ever needed to make a change to the Dockerfile, nginx config, etc., I’d need to do so in every single application’s repository, which is not only tedious but highly error-prone.
Is there a best practice for sharing a Dockerfile and other configs, which may contain application- or deployment-specific variables, between multiple applications? Is my current approach, i.e. templating, the way to go?