Single db instance with multiple apps

I have 2 applications that are separate codebases, and they each have their own database on the same db server instance.

I am trying to replicate this in docker, locally on my laptop. I want to be able to have both apps use the same database instance.

I would like

  • both apps to start in Docker at the same time
  • both apps to be able to access the database on localhost
  • the database data to be persisted
  • to be able to view the data in the database using an IDE on localhost

Each of my apps has its own Dockerfile and docker-compose file.
For app1, I start the Docker containers for the app and its database, and everything starts fine.

When I try to start app2, I get the following error:

ERROR: for app2_mssql_1 Cannot start service mssql: driver failed programming external connectivity on endpoint app2_mssql_1 (12d550c8f032ccdbe67e02445a0b87bff2b2306d03da1d14ad5369472a200620): Bind for 0.0.0.0:1433 failed: port is already allocated

How can I have them both running at the same time? Both apps need to be able to access each other's database tables.

Here are the docker-compose.yml files.

app1:

version: "3"
services:
  web:
    build:
      context: .
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    depends_on:
      - mssql
  mssql:
    image: 'microsoft/mssql-server-linux'
    ports:
      - '1433:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=SqlServer1234!
    volumes:
      - app1_docker_db:/var/lib/mssql/data
volumes:
  app1_docker_db:

and here is app2:

version: "3"
services:
  web:
    build:
      context: .
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    depends_on:
      - mssql
  mssql:
    image: 'microsoft/mssql-server-linux'
    ports:
      - '1433:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=SqlServer1234!
    volumes:
      - app2_docker_db:/var/lib/mssql/data
volumes:
  app2_docker_db:

Should I be using the same volume in each docker-compose file? I guess the problem is that each app is spinning up its own db instance, when in reality I just want one instance that is used by all my apps.

Start the database (I'll use MySQL as the example, but this works for anything) on the host. You can start it in a Docker container, if you'd prefer:

docker run -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret -v mysql_data:/var/lib/mysql mysql:5.6

This is not part of either application (it's shared), so it doesn't go in either application's docker-compose.yml file. (The MYSQL_ROOT_PASSWORD variable is required for the official mysql image to start, and the named volume keeps the data persisted across container restarts.)

Note your host’s IP address (on native Linux, 172.17.0.1 will often work; on Docker Toolbox VMs, 192.168.99.1 will often work), then configure the individual containers to talk to the database on that IP address, on port 3306, with whatever credentials you’ve configured, in the same way you’d configure them to talk to any external database.
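For example, app1's compose file might then look roughly like this, with the mssql/mysql service removed; the DB_HOST/DB_PORT variable names and the 172.17.0.1 address are just placeholders for however your app actually reads its connection settings:

version: "3"
services:
  web:
    build:
      context: .
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    environment:
      # hypothetical names -- pass the host IP and port however your app expects them
      - DB_HOST=172.17.0.1
      - DB_PORT=3306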

You will never be able to access the database as “localhost” from inside the app containers, because localhost there refers to the container itself (and that’s normal).

Thanks for answering.

So this makes me think about the point of all this: I should keep my database outside of my apps.

I guess this makes sense for local development. Developers can spin up the db and have it persisted on their workstation.

But how about deployment? If I were to deploy the apps to the cloud, using docker containers, how would the database work?

Would it be separate? What’s the standard or best approach?

Thanks

Yes, it is better to separate the database from the apps. One option is to attach the apps and the database to the same overlay network. You could create a new overlay network or use the docker ingress overlay network. For example:

  1. Create an overlay network: docker network create -d overlay mynet

  2. Create the database service: docker service create --name mydb --network mynet --replicas 1 dbimage

  3. Create the app service: docker service create --name app1 --network mynet --replicas 1 myapp1. app1 can then talk to the db using the DNS name mydb.
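As a concrete sketch using the SQL Server image from the question (the network, service, and app image names are just examples):

# network shared by the db and both apps
docker network create -d overlay mynet

# one shared database service
docker service create --name mydb --network mynet --replicas 1 \
  -e ACCEPT_EULA=Y -e SA_PASSWORD=SqlServer1234! \
  microsoft/mssql-server-linux

# each app reaches the db at the hostname "mydb" on port 1433
docker service create --name app1 --network mynet --replicas 1 myapp1-image
docker service create --name app2 --network mynet --replicas 1 myapp2-image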

For the DB, you would need to consider where the data is stored, as the container could move from one node to another. You could refer to the question “How does Docker Swarm handle database (PostgreSQL) replication?”

Thanks.

I guess I’m still confused about deployment. If the apps and databases are deployed to different servers, or to AWS, will this still work?

Yes, the swarm overlay network works across servers. If two containers are attached to the same overlay network, the overlay network makes sure one container can reach the db container even when they run on different servers.

For the DB, you need to restrict the container to one node if the DB stores its data in that node’s local directory. Alternatively, you could use a volume plugin to store the data on EBS; this works for a single DB container.
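A rough sketch of pinning the db service to one node with a placement constraint (the node name and label are assumptions; use whatever fits your cluster):

# label the node that holds the database files
docker node update --label-add db=true mynode1

# constrain the db service to that node and mount a volume for its data
docker service create --name mydb --network mynet --replicas 1 \
  --constraint 'node.labels.db == true' \
  --mount type=volume,source=mydb_data,target=/var/opt/mssql \
  -e ACCEPT_EULA=Y -e SA_PASSWORD=SqlServer1234! \
  microsoft/mssql-server-linux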

If you want to set up a clustered DB on AWS, Firecamp would be a good option. Each DB member gets a unique name, such as db-0.cluster-firecamp.com or db-1.cluster-firecamp.com, no matter which node the DB container runs on. The apps can simply access the db via that name.

If your app exposes its service to the external internet, you should consider running the DB on an EC2 instance that has only an internal IP, for better security.

If you have that option, you should consider just using Amazon RDS for the database. It is slightly expensive compared to running a relational database on an EC2 instance yourself, but at the same time, you never have to ask questions like “how do I make a backup of a Docker volume”, “when I make the backups, where do I put them”, “how do I schedule backups”, “how do I keep Swarm from moving my database”, and so on.

Yes, AWS RDS is definitely a good choice 🙂

I’m going to assume you want a self-administered setup in this and not RDS or some other AWS service.

There are two methods of deploying this: a single Docker host and Swarm. I’ll quickly talk about each; however, in either case you don’t need to maintain two separate compose files: this can all be done in a single compose file if that’s easier.

Single Host:

If you have two app services and one database service, you want three different Docker containers. You basically tell each of the two apps about the database container in their deployment yml files (the section you’re looking for is “links”).

Potentially the easiest way of handling this on a standalone host is to let the database be reached from outside its container by using either -p (publish a specific port) or -P (publish to a random port) to bind the container to a host port, allowing the service to be seen outside your Docker host/swarm. (See the docker run documentation on -p and -P for more information.)

By doing this, your apps can then access the database at “dockerhost:port” instead of having to do any inter-container linkages. This has the second benefit of supporting anything trying to connect to the database that ISN’T Dockerized. That said, if it’s important that only your two apps (and nothing else) can access the database, you can look at the docker run documentation on --link, which allows an app in one container to see an app in another container without exposing it to the outside world.
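For what it’s worth, a single-compose-file version might look roughly like this (service names, build contexts, and ports are assumptions; adjust to your apps):

version: "3"
services:
  app1:
    build: ./app1          # assumes both app folders sit next to this compose file
    ports:
      - "3000:3000"
    depends_on:
      - mssql
  app2:
    build: ./app2
    ports:
      - "3001:3000"        # different host port so the two apps don't clash
    depends_on:
      - mssql
  mssql:
    image: microsoft/mssql-server-linux
    ports:
      - "1433:1433"        # published so an IDE on the host can connect too
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=SqlServer1234!
    volumes:
      - shared_db:/var/opt/mssql   # the image keeps its data under /var/opt/mssql
volumes:
  shared_db:

Inside the containers the apps reach the database at the hostname mssql on port 1433; from the host (and your IDE) it’s localhost:1433.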

Swarm:

If instead you’re trying to make this work as a swarm, it becomes very easy. As suggested above, create an overlay network that all three services will live on (database, app1 and app2) and attach each of your containers to that overlay network. Or, if it’s important that app1 not be able to see app2 while both see the same database, create two overlay networks (say net1 and net2), then have the database connect to BOTH networks but have app1 and app2 connect only to their specific network.
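A quick sketch of the two-network variant (the network, service, and image names are just examples):

docker network create -d overlay net1
docker network create -d overlay net2

# the database joins both networks
docker service create --name mydb \
  --network net1 --network net2 --replicas 1 \
  -e ACCEPT_EULA=Y -e SA_PASSWORD=SqlServer1234! \
  microsoft/mssql-server-linux

# each app sees only its own network (and therefore only the db)
docker service create --name app1 --network net1 --replicas 1 myapp1-image
docker service create --name app2 --network net2 --replicas 1 myapp2-image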

Again, you can choose to publish the database’s port and have the apps in the containers talk with the database through an external-to-swarm connection instead of through an overlay network, or you can have the best of both worlds and do both (assuming you have both in-swarm and out-of-swarm apps that need to connect to the same database). Remember, however, that least privilege is a great concept, especially when re-architecting an application around Docker containers/microservices, and it’s VERY easy to change things in the future, so don’t open a port “just in case”.

With swarm’s routing mesh you don’t have to worry about whether the apps are running on the same or different hosts; everything on the same overlay network can see everything else, regardless of which machine it’s running on. As well, in swarm, if you do end up publishing a port, you can reach that port via any node in the swarm (google “swarm mesh routing” for more information).

This was intended to be a bit more “high level”, but hopefully it gives you some ideas of how deployment might work for you based upon the solution you’re dealing with.

Using AWS’s services is always a good choice if you’re deploying your app to AWS, but in the case where you’re trying to do internal cloud, hybrid cloud, or if you’re in the process of trying to create an app internally that may need to migrate back and forth, there are reasons not to use vendor supplied services.

You just need to create a common network between your services to do this. I’ve created a clear example of how to share a network between a few docker-compose services; hope it helps:
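The usual pattern looks roughly like this (the network name is just an example): create the network once on the host, then reference it as an external network from each app’s compose file.

docker network create shared_db_net

Then, in each docker-compose.yml:

version: "3"
services:
  web:
    build: .
    networks:
      - shared_db_net
networks:
  shared_db_net:
    external: true

The database service would then be defined in only one of the compose files (or its own), attached to the same network, and the other apps reach it over that network by its container name.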

This looks nice, but is there any way to inject variables into the .sql? It doesn’t look like there’s any way to escape the context to get at the variables:

From the mysql docker-entrypoint.sh

		echo
		for f in /docker-entrypoint-initdb.d/*; do
			process_init_file "$f" "${mysql[@]}"
		done
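For what it’s worth, the loop above also hands *.sh files to the shell, not just *.sql files, so one way to get at environment variables is to drop a small shell script into /docker-entrypoint-initdb.d instead of a plain .sql file. A minimal sketch, assuming MYSQL_ROOT_PASSWORD is set and using a hypothetical APP_DB_NAME variable (the file name is also just an example):

#!/bin/bash
# /docker-entrypoint-initdb.d/10-create-app-db.sh
# sourced inside the mysql container, so container env vars are in scope here
mysql --protocol=socket -uroot -p"$MYSQL_ROOT_PASSWORD" <<EOSQL
CREATE DATABASE IF NOT EXISTS \`$APP_DB_NAME\`;
EOSQL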

Great tip, thanks! That can be useful.

Tried the same, but was getting stuck due to that error. Thanks for figuring it out, brother.

With Regards,
Wovo Larjem