Architecture for a WordPress Docker Project

Hello,

I have a project where we want to offer a WordPress install (on a subdomain) per registered user. Would the following be a correct approach for a Swarm setup:

  • for each user I will create a service (each with two containers: WordPress and MySQL)

  • have nginx as a reverse proxy to route each subdomain to the right container

  • maybe use AWS EFS for volumes (if possible) in order to have persistent data for the images and the MySQL DB

I have tried a similar approach with AWS ECS, but it is very expensive. Any suggestions for this project architecture?
Thank you

It is possible to create user-specific Docker stacks, each describing the two (Swarm) services (WordPress and MySQL). In the Swarm world, each service is scheduled as one or more tasks by the scheduler, and each task then runs a container on a node.
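
A minimal sketch of what such a per-user stack file could look like; the stack name, database names and passwords are placeholders and would be generated per user:

```yaml
# docker-compose.user1.yml - hypothetical stack for one user, deployed with:
#   docker stack deploy -c docker-compose.user1.yml user1
version: "3.7"

services:
  wordpress:
    image: wordpress:latest
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_USER: wp_user1
      WORDPRESS_DB_PASSWORD: changeme        # placeholder, use a secret in practice
      WORDPRESS_DB_NAME: wp_user1
    networks:
      - user1_net
    volumes:
      - wp_data:/var/www/html

  mysql:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: wp_user1
      MYSQL_USER: wp_user1
      MYSQL_PASSWORD: changeme               # placeholder, use a secret in practice
      MYSQL_RANDOM_ROOT_PASSWORD: "1"
    networks:
      - user1_net
    volumes:
      - db_data:/var/lib/mysql

networks:
  user1_net:
    driver: overlay     # keeps each user's WordPress/MySQL pair on its own network

volumes:
  wp_data:
  db_data:
```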

Instead of nginx, I would strongly suggest going with Traefik. You can simply add reverse proxy rules using service labels; whenever such a labeled service starts, the rules are applied in Traefik almost instantly. Another approach would be to publish each WordPress instance on a different port and leverage an ELB to forward traffic for a specific subdomain to a specific target group. A target group can consist of one or more EC2 instances and ports.
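
As a rough sketch of the label approach (Traefik v2 syntax, assuming a Traefik instance with its Swarm provider enabled is already running and attached to an external overlay network called `traefik-public`; the router name, hostname and entrypoint are placeholders):

```yaml
# Hypothetical additions to a user's stack to expose WordPress via Traefik.
services:
  wordpress:
    image: wordpress:latest
    networks:
      - traefik-public
    deploy:
      labels:       # in Swarm mode, Traefik reads labels from the service's deploy section
        - "traefik.enable=true"
        - "traefik.http.routers.user1.rule=Host(`user1.example.com`)"
        - "traefik.http.routers.user1.entrypoints=websecure"
        - "traefik.http.services.user1.loadbalancer.server.port=80"

networks:
  traefik-public:
    external: true
```

With this, deploying a new user stack automatically registers the subdomain route; nothing has to be reloaded on the proxy side.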

Also, I am not sure if EFS is really the right kind of storage. From my experience EFS is bloody slow and is best used to store data at rest. When used as a target for Docker volumes, it can quickly become a bottleneck.

Don’t forget to take additional machines into account for log management (ELK?) and system monitoring (e.g. Swarmprom).

I agree with @meyay. I have set up something similar to what you’re doing, just with Traefik and multiple Swarm stacks, one stack per customer.

But personally, I think AWS is overkill for this (because of the price). Have you thought about something like DigitalOcean?

Thank you for your input.

I will definitely use Traefik as the reverse proxy.

Regarding EFS: is there any other solution to prevent data loss? I’m thinking that on one EBS volume/HDD I will have many clients, and a hardware failure could turn into a nightmare.

Regarding DigitalOcean: I will definitely take a look and see if it is more affordable.

Thank you again.

If you plan to have more than a single node, you will need to solve the “how do I make the volume available on all nodes?” problem. The default volume driver creates volumes that are always local to a node, which will lead to problems if a service task is scheduled on a different node. Also, pinning a service to a specific node (using labels) breaks the idea of self healing and of spreading the containers across the nodes.

You either need a remote share like NFS or a volume plugin that handles “the magic” for you. We ended up with a dedicated, self-managed NFSv4 server on EC2 per cluster(!) and put the volume declarations for our NFSv4 shares inside the compose.yml of each stack. As a result, the local volume handle is created on a node the first time a service task is executed on that node.
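
For reference, such a declaration can be done with the built-in `local` driver and NFS mount options; a sketch, where the server address and export path are placeholders:

```yaml
# Hypothetical NFSv4-backed volume declaration inside a user's compose.yml.
volumes:
  wp_data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,rw,nfsvers=4"      # placeholder NFS server address
      device: ":/exports/user1/wp_data"     # placeholder export path on that server
```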

I used GlusterFS between the nodes.