
How to reuse existing backend containers for additional web server stacks?

If I already have a LEMP stack and I want to create a 2nd (3rd, 4th, etc.), it seems very wasteful, resource-wise, to spin up additional PHP and MySQL containers. How can I point additional Nginx containers at my existing backend containers?

I found this but it’s a couple years old. Is this still the best practice?

It mentions

You might want to make sure that your containers are on the same network using the networks directive. Then they will be able to reach each other via their container_name.

but my knowledge of Docker is insufficient to know how to use that. Likewise

docker-compose -p SOME_PROJECT -f shared-services.yaml up -d

is confusing for me. Is “SOME_PROJECT” something that I can just randomly specify? Or is it a parameter that exists in conjunction with some containers I’ve already spun up? Also, I’m not sure what shared-services.yaml should look like.

If someone could post a shared-services.yaml along with app1 and app2 docker-compose files that I could copy and modify, it would be greatly appreciated.

Yep, deploying n docker-compose stacks for n environments is still best practice, though it is exactly the opposite of what you are aiming for.

Yes. -p or --project-name is used by docker-compose to distinguish different docker-compose deployments. If you run the above command with -p A, then again with -p B, you will end up with two distinct deployed docker-compose stacks.
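To make that concrete, here is a minimal sketch of what the files could look like. None of this is from the original thread: the image tags, the network name shared-backend, and the container names shared-php / shared-mysql are assumptions you would adapt to your own setup.

    # shared-services.yaml -- the backend containers, started once
    version: "3.8"

    services:
      php:
        image: php:8.2-fpm         # assumed image/tag
        container_name: shared-php
        networks:
          - backend

      mysql:
        image: mysql:8.0           # assumed image/tag
        container_name: shared-mysql
        environment:
          MYSQL_ROOT_PASSWORD: change-me
        volumes:
          - mysql-data:/var/lib/mysql
        networks:
          - backend

    networks:
      backend:
        name: shared-backend       # fixed name so other stacks can join it

    volumes:
      mysql-data:

Each web server stack then joins that network as an external one:

    # app1/docker-compose.yml -- one nginx per site, reusing the shared backend
    version: "3.8"

    services:
      web:
        image: nginx:alpine
        container_name: app1-web
        ports:
          - "8081:80"
        volumes:
          - ./site:/usr/share/nginx/html:ro
        networks:
          - backend

    networks:
      backend:
        external: true             # do not create it, join the existing one
        name: shared-backend

app2 would be the same file with a different container_name, published port, and document root. Bring everything up with distinct project names:

    docker-compose -p shared -f shared-services.yaml up -d
    docker-compose -p app1 -f app1/docker-compose.yml up -d
    docker-compose -p app2 -f app2/docker-compose.yml up -d

Because all of the containers sit on shared-backend, the nginx configuration inside app1 and app2 can reach the backends by container name, e.g. fastcgi_pass shared-php:9000; for PHP-FPM and shared-mysql as the MySQL host.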

Docker’s layered storage implementation is designed for portability, efficiency and performance. It is optimized for storing, retrieving, and transferring images across different environments. When a container is deleted, all of the data written to the container is deleted along with it.
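For example (the container name demo and the alpine image are just placeholders), writes made inside a container disappear along with it:

    # write a file inside a running container, then remove the container
    docker run -d --name demo alpine sleep 3600
    docker exec demo sh -c 'echo hello > /tmp/data.txt'
    docker rm -f demo

    # a fresh container from the same image starts from the image content only
    docker run --rm alpine cat /tmp/data.txt    # fails: no such file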

As a best practice, data should be isolated from the container so that the benefits of adopting containerization are retained. Data management should be distinctly separate from the container lifecycle. There are multiple strategies for adding persistence to containers. We will evaluate the options that are available out of the box with Docker, followed by the scenarios that are enabled by the ecosystem.

Host-Based Persistence
Host-based persistence is one of the early implementations of data durability in containers, which has matured to support multiple use cases. In this architecture, containers depend on the underlying host for persistence and storage. This option bypasses the specific union filesystem backends to expose the native filesystem of the host. Data stored within the directory is visible inside the container mount namespace. The data is persisted outside of the container, which means it will be available when a container is removed.
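A quick illustration with a bind mount (the host path /srv/demo is arbitrary): anything the container writes to the mounted directory lands on the host and survives the container.

    # mount a host directory into the container and write to it
    docker run --rm -v /srv/demo:/data alpine sh -c 'echo persisted > /data/file.txt'

    # the container is gone, but the file remains on the host filesystem
    cat /srv/demo/file.txt    # -> persisted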

In host-based persistence, multiple containers can share one or more volumes. When multiple containers write to a single shared volume, data corruption can occur, so developers need to ensure that their applications are designed to handle concurrent writes to shared data stores safely.

Data volumes are directly accessible from the Docker host. This means you can read and write to them with normal Linux tools. In most cases, you should not do this, as it can cause data corruption if your containers and applications are unaware of your direct access.

There are three ways of using host-based persistence, with subtle differences in the way they are implemented.