Handling Datasources the "Right Way"

Hi All, I’m fairly new to Docker but would like to start containerizing some of my organization’s web applications. However, I am unsure how to handle datasources the “right way”. Normally, our servers have the datasources configured before the web application is deployed, but with Docker that configuration would need to be available when the image is built. We could store the datasource config in the project’s repo, but 1) that would require configs for multiple environments (dev and prod), and 2) developers who should not have access to prod datasources would be able to see the config files in the repo.

My immediate idea is to store a generic, local config file on each server containing the datasource information, but I am curious if there is a better or more standard way to configure datasources.

Thanks,
Jared Leonard

The simplest way would be to use volumes to mount the configuration files at run time.
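For example, assuming the application reads its datasource settings from /app/config/datasource.properties inside the container (both paths below are placeholders, adjust them to your setup), a bind mount from the host could look like this:

```
# Bind-mount a host-side config file into the container (read-only).
docker run -d \
  --name myapp \
  -v /etc/myapp/datasource.properties:/app/config/datasource.properties:ro \
  myorg/myapp:latest
```

Each server then keeps its own environment-specific file outside the repo, which is essentially your "generic, local config file on each server" idea.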

A more elegant solution, though, would be to create an entrypoint script for your images that updates the existing configuration, or generates one from environment variables, when the container is started.
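A minimal sketch of such a script, assuming the application reads a properties file and that DB_URL, DB_USER and DB_PASSWORD are supplied as environment variables at run time (all names and paths here are placeholders):

```
#!/bin/sh
# docker-entrypoint.sh -- render the datasource config from environment
# variables every time the container starts, then hand off to the app.
set -e

cat > /app/config/datasource.properties <<EOF
datasource.url=${DB_URL}
datasource.username=${DB_USER}
datasource.password=${DB_PASSWORD}
EOF

# Execute whatever command the image was started with (its CMD).
exec "$@"
```

Copy the script into the image, make it executable, and set it as the ENTRYPOINT with your application start command as CMD; the `exec "$@"` at the end then launches the application after the config has been written.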

Environment-specific configuration and sensitive data such as credentials or SSH keys do not belong in an image: anyone who can pull the image could read the credentials or the SSH keys.
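Pass such values at run time instead, e.g. with `-e` flags or an `--env-file` that lives only on the target host and never in the repo (the variable names match the entrypoint sketch above and are placeholders):

```
# /etc/myapp/prod.env exists only on the production host, e.g.:
#   DB_URL=jdbc:postgresql://prod-db:5432/app
#   DB_USER=app
#   DB_PASSWORD=<secret>

docker run -d --name myapp --env-file /etc/myapp/prod.env myorg/myapp:latest
```

Developers never need to see the prod values; locally they use their own dev.env (or plain `-e` flags).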