I have my project configured to pick up its configuration at runtime. These configuration values are fixed for the project because I deploy it on the AWS cloud. Examples include the Redis host, the RabbitMQ host, etc. The values are fixed in my application, as they are retrieved from a server, for example:
- redis host = redis.int
- rabbit mq host = rmq.int
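To make the constraint concrete, here is a rough sketch of how the app sees these values at runtime (hypothetical names; the real lookup goes to the external config server):

```python
# Hypothetical sketch: the real app fetches these from an external config server.
# The point is that the hostnames are fixed strings the app cannot override locally.
FIXED_CONFIG = {
    "redis.host": "redis.int",
    "rabbitmq.host": "rmq.int",
}

def get_config(key: str) -> str:
    """Stand-in for the config-server lookup; values are fixed per environment."""
    return FIXED_CONFIG[key]

print(get_config("redis.host"))     # redis.int
print(get_config("rabbitmq.host"))  # rmq.int
```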
Given this, I have to set up my docker-compose file as follows:
```yaml
version: '3'
services:
  app:
    build: .
    image: 'my-app'
    container_name: my-app
    hostname: my-app
    ports:
      - '8080:8080'
    depends_on:
      - db
      - cache
      - queue
    env_file: docker/runtime.env
    extra_hosts:
      - "rmq.int:172.28.1.1"
      - "redis.int:172.28.1.2"
    networks:
      - cs-network
  db:
    image: 'postgres:10.7-alpine'
    container_name: my-app-db
    hostname: my-app-db
    volumes:
      - './docker/pginit/:/docker-entrypoint-initdb.d'
    env_file: docker/runtime.env
    networks:
      - cs-network
  queue:
    image: 'rabbitmq:3.8.9-management-alpine'
    container_name: my-app-queue
    networks:
      cs-network:
        ipv4_address: 172.28.1.1
  cache:
    image: 'redis:4.0.10-alpine3.8'
    container_name: my-app-cache
    networks:
      cs-network:
        ipv4_address: 172.28.1.2

networks:
  cs-network:
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
```
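For comparison, I understand the same hostname mapping could be done with network aliases instead of static IPs, letting Docker's embedded DNS resolve the names. A minimal sketch of that variant (not what I currently run) would be:

```yaml
# Sketch of an alternative: network aliases instead of fixed IPs.
# With aliases, the app would not need extra_hosts or a fixed subnet.
services:
  queue:
    image: 'rabbitmq:3.8.9-management-alpine'
    networks:
      cs-network:
        aliases:
          - rmq.int      # containers on cs-network resolve rmq.int to this service
  cache:
    image: 'redis:4.0.10-alpine3.8'
    networks:
      cs-network:
        aliases:
          - redis.int
networks:
  cs-network: {}
```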
The problem here is that my app loads its config from an external server, where the Redis host and RabbitMQ host have fixed values, so in order to run the app locally I have to assign specific IP addresses to the Redis and RabbitMQ containers. Does this look good from a production perspective?