I’m going to assume you want a self-administered setup here and not RDS or some other managed AWS service.
There are 2 methods of deploying this: Single Docker Host and Swarm. I’ll quickly talk about each, but in either case you don’t need to maintain two separate compose files…this can all be done in a single compose file if that’s easier.
Single Host:
If you have 2 app services and 1 database service, you want 3 different Docker containers. You basically tell each of the two apps about the database container in their compose yml files (the section you’re looking for is “links”).
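As a rough sketch (the service and image names here are placeholders for whatever you’re actually running), a single compose file covering all three containers might look something like this:

```yaml
# docker-compose.yml - one file for all three containers.
# Image names, passwords, and service names are illustrative only.
version: "2"

services:
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example

  app1:
    image: myorg/app1
    links:
      - db        # app1 can now reach the database at hostname "db"

  app2:
    image: myorg/app2
    links:
      - db        # same for app2
```

Note that on newer compose file versions, services in the same file share a default network and can already resolve each other by service name, so “links” is mostly there for the legacy behavior.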
Potentially the easiest way of handling this on a standalone-host deployment is to expose the database outside of its container by using either -p (specify a port) or -P (assign a random port) to bind the container to a host port…allowing the service to be seen outside your Docker host/swarm. (See the docker run documentation on -p and -P for more information.)
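For example (using Postgres purely as a stand-in for whatever database you’re running):

```
# Bind the container's 5432 to a specific host port...
docker run -d --name db -p 5432:5432 postgres:9.6

# ...or let Docker pick a random high host port for each EXPOSEd port
docker run -d --name db -P postgres:9.6
docker port db    # shows which host port was assigned
```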
By doing this, your apps can then access the database at “dockerhost:port” instead of needing any inter-container linkage…which has the added benefit of supporting anything that needs to connect to the database that ISN’T Dockerized. That said, if it’s important that only your two apps can access the database and nothing else, look at the docker run documentation on --link…that allows an app in one container to see an app in another container without publishing anything to the outside world.
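A minimal sketch of that approach (again, placeholder names), where the database is never published to the host:

```
docker run -d --name db postgres:9.6

# app1 and app2 can resolve the database as hostname "db";
# nothing outside the Docker host can reach it.
docker run -d --name app1 --link db:db myorg/app1
docker run -d --name app2 --link db:db myorg/app2
```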
Swarm:
If instead you’re trying to make this work as a swarm, it becomes very easy. As suggested above, create an overlay network that all three (database, app1, and app2) will live on and attach each of your containers to that overlay network…or, if it’s important that app1 not be able to see app2 while both see the same database, create 2 overlay networks (say net1 and net2), have the database connect to BOTH networks, and have app1 and app2 each connect only to their own network.
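A sketch of the two-network variant with docker service create (network, service, and image names are placeholders), assuming a reasonably recent Docker version:

```
docker network create --driver overlay net1
docker network create --driver overlay net2

# The database joins both networks...
docker service create --name db --network net1 --network net2 postgres:9.6

# ...while each app only joins its own, so app1 and app2 can't see each other.
docker service create --name app1 --network net1 myorg/app1
docker service create --name app2 --network net2 myorg/app2
```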
Again, you can choose to publish the database’s port and have the apps talk to the database through an external-to-swarm connection instead of through an overlay network, or you can have the best of both worlds and do both (assuming you have both in-swarm and out-of-swarm apps that need to connect to the same database). Remember, however, that least privilege is a great concept, especially when re-architecting an application into Docker containers/micro-services, and it’s VERY easy to change things in the future…so don’t open a port “just in case”.
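If you do decide the database genuinely needs to be reachable from outside the swarm as well, that’s just a --publish on the same service (same placeholder names as above):

```
docker service create --name db --network net1 --network net2 \
  --publish 5432:5432 postgres:9.6
```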
With swarm’s routing mesh you don’t have to worry about whether the apps are running on the same or different hosts; everything on the same overlay network will see everything else regardless of which machine it’s running on. As well, in swarm, if you do end up publishing a port, you can reach that port through any node in the swarm (search for “swarm routing mesh” for more information).
This was intended to stay a bit more “high level”, but hopefully it gives you some ideas of how deployment might work for you based on which solution you’re dealing with.
Using AWS’s services is always a good choice if you’re deploying your app to AWS, but if you’re running an internal or hybrid cloud, or building an app internally that may need to migrate back and forth, there are reasons not to use vendor-supplied services.