How to run multiple WordPress instances using Docker?

So here is the situation. I have set up a WordPress container. The yml file can be seen here

This runs perfectly if I do

docker-compose up -d

What I do after installing WordPress is update the site URL to match the project name (using a search/replace plugin)

e.g. http://project1.local

And then I add an entry for project1.local to my system’s hosts file.
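For reference, a typical hosts file entry simply maps the name to the loopback address (assuming the site runs on the local machine); each additional project gets its own line:

```text
127.0.0.1 project1.local
```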

So this way the domain is set up for me locally.
Up to this step, everything works fine.

But now I want to setup another project e.g. project2.local
And I also want to keep project1.local running.

So I do docker-compose up -d from another folder using the same yml file.
At this point I get errors, because the ports I used in project1 are already occupied.

Is there a solution for this, so that I can run multiple WordPress containers for different projects like project1.local and project2.local at the same time?
Thank You

This seems to be a private repository.

Whoops, made it public now :slight_smile:

You can define a base docker-compose.yml and environment-specific docker-compose yml files that override or extend the base declaration:
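As a sketch of the override approach (the file name and port values here are made up): keep everything shared in the base docker-compose.yml and put only the per-project differences into a small override file.

```yaml
# docker-compose.project2.yml - hypothetical override file;
# contains only the values that differ from the base docker-compose.yml
services:
  wordpress:
    ports:
      - "8081:80"   # project1 could keep 8080 in its own override
```

You would then start the stack with both files and a distinct project name, e.g. docker-compose -f docker-compose.yml -f docker-compose.project2.yml --project-name project2 up -d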

Another option, which I really like and prefer over override/extend, is to declare environment-specific values as variables and use envsubst (from the OS gettext package) to actually render the values inside the compose yml. Apart from not being able to conditionally render content blocks, this approach has no limitation on what can be substituted.

It is important to export all placeholders into the shell environment, because envsubst will only replace variables it can see in the environment:
Assume you use a variable called ${VAR_A} as a placeholder in your yml. Then execute export VAR_A=myvalue && envsubst < my_docker_compose_template.yml | docker-compose -f - up -d to render the value into the yml and let docker-compose read it from stdin.

Does that make sense?


A little complex to understand, as I am still new to this and don’t know much about the YML file format,
but I will try it. :slight_smile:

Here is an example of what I mean:

Copy the content of this block into a file called docker-compose.template:

version: "3.3"

services:
  # Database
  db:
    image: mysql:5.7
    volumes:
      - ${MYSQL_VOLUME_DATA}:/var/lib/mysql
    restart: always
    environment:
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}

  # phpmyadmin
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    restart: always
    ports:
      - "${PHPMYADMIN_HOST_PORT}:80"

  # wordpress
  wordpress:
    image: wordpress:latest
    restart: always
    volumes:
      - ${WORDPRESS_VOLUME_BIND_MOUNT_HTML}:/var/www/html
      - ${WORdPRESS_VOLUME_BIND_MOUNT_INI}:/usr/local/etc/php/conf.d/uploads.ini
    ports:
      - "${WORDPRESS_HOST_PORT}:80"

Next, create a bash script with the following content:

#!/bin/bash -eu
# the values below are examples - adjust them per environment
export PROJECT_NAME=project1
export MYSQL_VOLUME_DATA=db_data
export MYSQL_DATABASE=project1
export MYSQL_USER=wordpress
export MYSQL_PASSWORD=wordpress
export WORDPRESS_VOLUME_BIND_MOUNT_HTML=./html
export WORdPRESS_VOLUME_BIND_MOUNT_INI=./uploads.ini
export WORDPRESS_HOST_PORT=8080
export PHPMYADMIN_HOST_PORT=8181

envsubst < docker-compose.template | docker-compose --project-name "${PROJECT_NAME}" -f - "$@"

Then make it executable with chmod +x. Instead of calling docker-compose up -d directly, you call the script with up -d as argument. The bash script will render the configuration and pass all commands on to docker-compose. For each environment, create such a bash file and customize its values. The variable PROJECT_NAME needs to be different per environment - otherwise docker-compose would replace containers of another environment or complain about orphaned containers.


Ah right, I got it now. It’s like creating .env files in Node.js projects, but using shell scripting. Pretty nice idea. But will this work on Windows?

I wish you had pointed out earlier that you use Windows.
The gettext package is common on Linux distributions… not on Windows.

If you installed the Git client for Windows, you have a bash shell and envsubst on your system. Though, you will need to call the script from the Git Bash shell.


Thank you for your effort in explaining - others will read this too. Does every exported variable in the bash script need to be modified and given a unique value?

Did you achieve your desired goal? Can you share, please? You were on a promising path initially.

You would want to change these at least:

  • WORDPRESS_VOLUME_BIND_MOUNT_HTML: set it to the folder you need
  • WORdPRESS_VOLUME_BIND_MOUNT_INI: this variable has a typo (d instead of D), set it to the folder you need
  • PROJECT_NAME: set it to something more meaningful, as the network and volumes will be prefixed with this value. If you want to deploy more stacks by copying the .sh file, make sure each .sh file has a unique value for PROJECT_NAME, so the deployments do not interfere with each other!
  • *_PASSWORD: change to the password you want, or leave as is.

Thank you so much for this. May I just ask one more thing… can I make 10 containers like this - each with a different instance of a WordPress site - in Docker on Fedora Linux by running the .sh executable?

You can copy the .sh file 10 times, use individual BIND_MOUNTs, PROJECT_NAMEs and HOST_PORTs, and it will work - every instance with its own state, accessible on a different host port.
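To avoid editing ten copies by hand, the launcher scripts could also be generated. This is just a sketch (the deploy-wp*.sh names and port ranges are made up), and the remaining exports from the earlier example still need to be filled in:

```shell
#!/bin/bash -eu
# Generate one launcher script per instance, each with a unique
# project name and unique host ports.
for i in $(seq 1 10); do
  cat > "deploy-wp${i}.sh" <<EOF
#!/bin/bash -eu
export PROJECT_NAME=wp${i}
export WORDPRESS_HOST_PORT=$((8080 + i))
export PHPMYADMIN_HOST_PORT=$((8180 + i))
# ...add the remaining exports (volumes, database credentials) here...
envsubst < docker-compose.template | docker-compose --project-name "\${PROJECT_NAME}" -f - "\$@"
EOF
  chmod +x "deploy-wp${i}.sh"
done
```

Each generated script then works exactly like the single one: ./deploy-wp1.sh up -d starts stack wp1 on its own ports.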

Though, in such a situation you might consider adding a reverse proxy like Traefik, which allows routing incoming traffic to the target WordPress instance based on the subdomain. You will want to use a dedicated network where the reverse proxy and all WordPress instances live - but then you will also want a unique WORDPRESS_SERVICE_NAME value for each stack, as it would otherwise lead to a service name collision in your “reverse proxy” network.
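A rough sketch of what Traefik v2 label-based rules look like on one of the stacks (the service and router names are made up, and the shared proxy network is assumed to already exist):

```yaml
services:
  wordpress_project1:
    image: wordpress:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.project1.rule=Host(`project1.local`)"
      - "traefik.http.services.project1.loadbalancer.server.port=80"
    networks:
      - proxy          # dedicated network shared with the Traefik container

networks:
  proxy:
    external: true
```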


Thanks so much. You pointed me in the right direction and even beyond that. You recommend Traefik and not nginx? You surely have a justified opinion? Are you more inclined towards Traefik?

I still use Traefik 1.7.x in my old Swarm cluster, as it allows issuing Let’s Encrypt certificates and storing them in Consul to be used by all nodes. Traefik 2.x removed that feature.

On Docker/Docker Swarm, Traefik is probably one of the most powerful reverse proxies that exist. Though, you have to learn how to write its rules as container labels. I never experienced any DNS caching or other sorts of problems with Traefik. Once deployed, Traefik watches Docker events and fetches the labels to apply the reverse proxy rules for you: the configuration is bound to the lifecycle of the service.

Though, if you feel comfortable with nginx, it can get the job done as well. You will have to maintain the rules in conf files and either mount them into your nginx container or build an image that already includes them. If you go down the nginx road, make sure to address the DNS caching issues that can occur if it is not configured properly.
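For nginx, the usual remedy for stale container IPs is to resolve the upstream name through Docker’s embedded DNS at and to put the upstream in a variable, which forces nginx to re-resolve it at runtime instead of only once at startup. A sketch, assuming the upstream service name wordpress_project1:

```
server {
    listen 80;
    server_name project1.local;

    # Docker's embedded DNS; 'valid' caps how long a lookup is cached
    resolver valid=10s;

    location / {
        # a variable makes nginx re-resolve the name per request
        set $upstream http://wordpress_project1;
        proxy_pass $upstream;
    }
}
```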

If you feel unsure, you can always take a look at Nginx Proxy Manager, which provides a nice UI for the reverse proxy rules.

I personally like things to be automated - I try to avoid configuration UIs as much as possible.


I am much obliged for all this. Do you use the Brave browser and the in-built BAT wallet? I want to tip you in the browser’s native digital currency. Are you a verified publisher also? You could be with this amount of knowledge.

You are welcome! It’s a pleasure to be of help.
Let’s say forum support is just my type of crossword puzzle :slight_smile:


Hello! Can you explain again why I need Traefik or nginx as a reverse proxy if, for example, I just want to have a dozen local WP sites inside 10 completely isolated Docker containers? Just a local dev environment - then I export a finished WP site and import it into some hosting that already has a domain and everything. Thank you so much in advance.

You don’t necessarily have to use a reverse proxy, though you will want to use one. A host port can be bound by a single process at a time. Of course this applies to the http port 80 and https port 443 as well. So you either bind these ports to one of your WordPress instances or to a reverse proxy (or anything else - but as a result none of your WordPress instances would be reachable using port 80 or 443 then).

Let’s assume you registered two subdomains pointing to your docker host, say wp1.example.com and wp2.example.com, and you would want to access the instances by those names over https. You would need a reverse proxy that binds port 443 and forwards the traffic to the specific instance matched by the subdomain.

Now let’s assume you don’t want to use a reverse proxy; then you would need to access the instances by their specific port. In this case it doesn’t matter if both share the same domain name or have distinct domain names - for the sake of simplicity I will use localhost in the example: you would access them by localhost:${whatever port published for wp1} and localhost:${whatever port published for wp2}. This might be perfectly fine for a home lab or development environment, but it isn’t suited for production workloads. To remedy this, you would need to add a loadbalancer in front of your docker host to map different domain names to different target ports of your docker host.


Thank you so much for elaborating. I am going to use Docker for multiple WP site instances and access them by localhost:${whatever port published} in a home lab and development environment running on a pretty responsive PCIe x4 NVMe SSD. That makes me happy as a pig in mud. You sure do love your $variables and automating things with scripts. I will try to automate these variable thingies you suggested earlier using PowerShell. Just to make you happy a bit. :slight_smile: