I was wondering what the trade-offs are in designing your containers in the following ways:
Option A) One container comprises a Redis cache, a logging service, and a web service, all communicating with each other inside the container. There are multiple instances of such containers for different purposes, independent of each other but similar in design.
Option B) Each of the services described in Option A runs in its own container, which triples the total number of containers (assuming one container originally hosted three services). Each web API still gets its own Redis cache and logging service, now in separate containers communicating over the network, and each group of three remains independent of all the others.
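To make Option B concrete, here is a minimal docker-compose sketch of one such group. All service names, images, and the network name are hypothetical placeholders, not part of our actual system:

```yaml
# Hypothetical sketch of Option B: one web API with its own dedicated
# Redis and logging containers, isolated on a private network.
version: "3.8"
services:
  orders-api:                      # hypothetical web API
    image: orders-api:latest
    depends_on: [orders-redis, orders-logger]
    networks: [orders-net]
  orders-redis:                    # Redis dedicated to this API only
    image: redis:7
    networks: [orders-net]
  orders-logger:                   # logging service dedicated to this API
    image: fluent/fluentd:v1.16
    networks: [orders-net]
networks:
  orders-net: {}                   # private network; other API groups get their own
```

A second API (say, a billing API) would get its own identical trio of containers on its own network.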
Option C) Design the system so that there is a single Redis cache and a single logging service, each in its own container and globally available to every web API. This reduces the number of containers, although it increases network traffic and sacrifices some security.
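For contrast, a minimal sketch of Option C, again with hypothetical names; the per-API logical database index is one assumed way of partitioning the shared Redis:

```yaml
# Hypothetical sketch of Option C: all web APIs share one Redis container
# and one logging container over a common network.
version: "3.8"
services:
  shared-redis:
    image: redis:7
    networks: [shared-net]
  shared-logger:
    image: fluent/fluentd:v1.16
    networks: [shared-net]
  orders-api:                      # hypothetical API #1
    image: orders-api:latest
    environment:
      REDIS_URL: redis://shared-redis:6379/0   # logical DB 0 for this API
    networks: [shared-net]
  billing-api:                     # hypothetical API #2
    image: billing-api:latest
    environment:
      REDIS_URL: redis://shared-redis:6379/1   # logical DB 1 for this API
    networks: [shared-net]
networks:
  shared-net: {}
```

Note that Redis logical databases separate keyspaces but not failure or security domains, which is part of what motivates Options A and B.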
We came up with designs A and B in order to avoid single points of failure and to segregate unrelated data by caching it in different Redis instances, which is an important security concern for us. However, this is a rather complex approach compared to Option C, which is more straightforward.
We intend to deploy the system on AWS. How would each design affect the overall system in terms of cost, security, scalability, and manageability, along with any other issues you consider worth pointing out?