Managing environments for applications in Docker containers

Hi all,

I am just starting to investigate Docker, and one question I have not been able to answer is how to configure an application for different environments.
For example, in my application the environment drives log levels, the mapped S3 bucket name used for routing, etc…
At this point this is all managed by a deployment script that modifies the Nginx config to execute a different index file.
What are the best practices for managing such things with Docker, and in particular on AWS?
Note: AWS Elastic Beanstalk expects the image to be fully built before it is deployable. Of course, it is possible to build the app for production, but how do I test that build before deployment?

I found the topic Environment-dependent commands in Dockerfile? here, but it was never answered.

Thanks,

Hi,

In a case where you have to build different Docker images for different environments, you should consider maintaining a separate Dockerfile for each environment. As of now, the Dockerfile format has no way to execute instructions conditionally.
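As a minimal sketch of that layout (the base image, file names, and commands are just placeholders, not your actual app), you keep one Dockerfile per environment and pick it at build time with the -f flag:

```
# Dockerfile.dev -- development image with debug logging
FROM node:alpine
WORKDIR /app
COPY . /app
RUN npm install            # includes devDependencies
ENV LOG_LEVEL=debug
CMD ["npm", "run", "dev"]
```

```
# build each environment from its own Dockerfile
docker build -f Dockerfile.dev  -t myapp:dev  .
docker build -f Dockerfile.prod -t myapp:prod .
```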

You can also use Docker environment variables, passed when the container is started, to give each container its per-environment configuration.
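For instance (the variable and bucket names here are made up for illustration), the same image can behave differently per environment:

```
# start the same image with staging vs. production settings
docker run -d -e LOG_LEVEL=debug -e S3_BUCKET=myapp-staging-assets myapp
docker run -d -e LOG_LEVEL=warn  -e S3_BUCKET=myapp-prod-assets    myapp
```

The application then reads LOG_LEVEL and S3_BUCKET at startup instead of having them baked into the image.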

Another option is to run a configuration management tool inside the container to configure it.
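A rough sketch of that approach (assuming you install Ansible into the image yourself; the playbook name is hypothetical) is to run the tool at build time so the image comes out already configured:

```
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y ansible
COPY provision.yml /tmp/provision.yml
# apply the playbook locally, inside the image being built
RUN ansible-playbook -i "localhost," -c local /tmp/provision.yml
```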

These links should give you some pointers:

https://puppetlabs.com/webinars/puppet-docker-using-containers-configuration-management
https://bildung.xarif.de/xwiki/bin/Articles/The+Marriage+of+Ansible+and+Docker

Regards


Thanks for the reply. It helps.
However, I was hoping to get rid of Ansible (it is exactly what we use today). The reason is deployment speed:
a Docker image gets deployed in under a second, whereas an Ansible playbook takes quite some time to run.
I guess the best way to go is to rely fully on environment variables.
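Something like this is what I have in mind (file names are hypothetical): one env file per environment, fed to the same image, so the build is identical everywhere:

```
# same image everywhere, configuration injected at run time
docker run --env-file ./staging.env    myapp
docker run --env-file ./production.env myapp
```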