Docker Community Forums

Share and learn in the Docker community.

Docker container design tradeoffs

I was wondering what some of the trade-offs are in designing your containers so that:

Option A) One container contains a Redis cache, a logging service, and a web service, all communicating with each other inside the container. There are multiple instances of containers designed this way, for different purposes, all independent of each other but similar in design.

Option B) Each of the services from Option A runs in its own container, which triples the total number of containers, assuming one container originally hosted three services. Each web API still gets its own Redis cache and logging service, now in separate containers communicating over the network, and each stack remains independent of the others.
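Option B can be expressed as one Compose file per app stack. This is a minimal sketch; the image names and the network name are hypothetical placeholders, and Fluentd stands in for whatever logging service you actually run:

```yaml
# One stack per web API (Option B): web, cache, and logging
# are separate containers joined by a private network.
services:
  web:
    image: my-web-api:latest   # hypothetical application image
    depends_on:
      - redis
      - logger
    networks: [stack-net]
  redis:
    image: redis:7             # this stack's own cache
    networks: [stack-net]
  logger:
    image: fluent/fluentd      # placeholder for your logging service
    networks: [stack-net]
networks:
  stack-net:                   # private to this stack; other stacks get their own
```

Deploying the same file under different project names gives you the independent-but-similar stacks the option describes.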

Option C) Design the system so that there is only one Redis cache and one logging service, globally available to every web API and hosted in separate containers. This reduces the number of containers, though it increases network traffic and sacrifices a bit of security.
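For Option C, the shared services can live in their own stack on a named network that each web API stack then joins as an external network. Again a sketch with hypothetical names:

```yaml
# Shared infrastructure stack (Option C): one Redis and one
# logging service that every web API reaches over "shared-net".
services:
  redis:
    image: redis:7
    networks: [shared-net]
  logger:
    image: fluent/fluentd      # placeholder for your logging service
    networks: [shared-net]
networks:
  shared-net:
    name: shared-net           # fixed name so app stacks can reference it
```

Each web API's Compose file would then declare `shared-net` with `external: true` instead of creating its own cache and logger.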

We thought of designs A and B in order to avoid single points of failure throughout the system and to segregate unrelated data by caching it in different Redis instances, which is an important security concern for us. However, this is a rather complex approach compared to Option C, which is more straightforward.

We intend to deploy the system on AWS. Please share your opinions on how each design would affect the overall system in terms of cost, security, scalability, management, and any other issues you find important to point out.

option a) does not allow specific resource constraints per service, which is something you will want for a reliable environment.
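To illustrate the point: once the services are separate containers, each one can get its own limits. A minimal sketch using Compose's `deploy.resources` (values are arbitrary examples):

```yaml
# Per-service limits are only possible when each service is
# its own container; a single all-in-one container (Option A)
# can only be constrained as a whole.
services:
  web:
    image: my-web-api:latest   # hypothetical image
    deploy:
      resources:
        limits:
          cpus: "0.50"         # cap this service at half a CPU
          memory: 256M
  redis:
    image: redis:7
    deploy:
      resources:
        limits:
          memory: 128M         # a runaway cache can't starve the web service
```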

option b) would be fine if the applications are completely independent.

option c) is the way to go, if you add replicas to Redis and your logging service so they are not single points of failure.

Further thoughts:
If security is an issue, you could think about running each app stack in a separate cluster with smaller instance types, or in ECS or EKS. You could also think about using ElastiCache for Redis instead of running your own Redis. Instead of running your own logging service, you could send your logs directly to CloudWatch, or create your own centralized logging with one of the known log management systems like ELK, Graylog, or Splunk; if you want a more lightweight solution, you might want to take a look at Grafana Loki. You will want system monitoring with Prometheus/Grafana or something similar as well.
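Sending container logs straight to CloudWatch needs no logging container at all: Docker ships an `awslogs` logging driver. A sketch, assuming the host has AWS credentials and the region/group names are examples:

```yaml
# Route a service's stdout/stderr to CloudWatch Logs via the
# built-in awslogs driver instead of a dedicated logging service.
services:
  web:
    image: my-web-api:latest        # hypothetical image
    logging:
      driver: awslogs
      options:
        awslogs-region: us-east-1   # example region
        awslogs-group: my-app-logs  # example log group
        awslogs-create-group: "true"
```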