Shared web hosting with Docker

Hi All,

I had this idea to create a shared hosting server with Docker and Traefik. The server will be running around 20 to 30 websites for our customers. Our customers have WordPress and static sites. Everything is working: I have Traefik with multiple WordPress sites up and running, but now… The questions I’m having:
What is the best practice to give the customer a login? How do I create a chroot-like folder where they can put their WordPress files… or is it safer to keep the files in the container? If that is the case, can I give customers SSH access within this container?

I’ve been struggling with this for a while now: what is best practice…
Or is it just a stupid idea to create a Docker hosting platform for multiple customers… We are currently working with Plesk, but I want to stop using that.
Hope someone can push me in the right direction. Thanks in advance!

Hi :slight_smile:

I think what I would do is create the following containers/stack per customer:

  1. WordPress/static-site container
  2. MariaDB for the database, maybe also phpMyAdmin?
  3. SFTP container to give access, something like atmoz/sftp (it is chrooted); maybe this should only be available for the static container?
  4. A volume for that one customer

All of these I would create, as said, per customer to be sure it’s isolated.
If you have a template for a docker-compose file where you also define the Traefik rules, you could easily spawn new customers.
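A minimal per-customer docker-compose sketch of what that template could look like (hostnames, passwords, the router name and the external "web" network are placeholders you would fill in per customer; it assumes a Traefik v2 instance is already running and attached to that network):

```yaml
# docker-compose.yml template for one customer (all values are placeholders)
version: "3.8"

services:
  wordpress:
    image: wordpress:6
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_NAME: customer1
      WORDPRESS_DB_USER: customer1
      WORDPRESS_DB_PASSWORD: changeme
    volumes:
      - customer1_files:/var/www/html
    networks:
      - web        # shared network that Traefik is attached to
      - internal   # private network for this customer's stack
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.customer1.rule=Host(`customer1.example.com`)"
      - "traefik.http.routers.customer1.entrypoints=websecure"
      - "traefik.http.routers.customer1.tls.certresolver=letsencrypt"

  db:
    image: mariadb:10.11
    environment:
      MARIADB_DATABASE: customer1
      MARIADB_USER: customer1
      MARIADB_PASSWORD: changeme
      MARIADB_ROOT_PASSWORD: changeme-root
    volumes:
      - customer1_db:/var/lib/mysql
    networks:
      - internal

volumes:
  customer1_files:
  customer1_db:

networks:
  internal:
  web:
    external: true
```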

Hope it makes sense :slight_smile:

Thanks for the answer, Terpz! That is what I was looking for! Create a volume and share it between the WordPress/static site and atmoz/sftp to access the files, right? Still learning here… So if I’m correct, I have to create a volume, not a bind-mount? I’m going to look at atmoz/sftp, sounds good :sunglasses: !

I am creating a template in Portainer for creating a Docker stack with WordPress/MariaDB/phpMyAdmin with env vars, so all my colleagues can create a new customer :wink:
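On the volume question: either a named volume or a bind mount works; the important part is mounting the same files into both the WordPress container and the SFTP container. A rough sketch with atmoz/sftp (user name, password, port and volume name are made up; atmoz/sftp chroots the user into their home directory, so the site files are mounted below it):

```yaml
# fragment: share one customer's files between WordPress and an SFTP container
services:
  wordpress:
    image: wordpress:6
    volumes:
      - customer1_files:/var/www/html

  sftp:
    image: atmoz/sftp
    # format: user:password:uid:gid (uid/gid 33 = www-data in the wordpress image)
    command: customer1:changeme:33:33
    ports:
      - "2201:22"
    volumes:
      # mounted below the chrooted home directory /home/customer1
      - customer1_files:/home/customer1/html

volumes:
  customer1_files:
```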

I don’t think there is a best practice. A best practice would mean that many people are doing this, have some kind of practice, and have figured out the best approach together. I doubt that shared hosting based on Docker Swarm is a popular solution.

I would not start a new MySQL database instance for each customer. That would require a lot of resources. I would rather run one MySQL server in cluster mode.
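As a sketch of that idea (service, volume and network names are assumptions, and clustering itself is left out): run a single MariaDB service on a shared network, give each customer their own database and user on it, and let the per-customer stacks contain only the application container pointing WORDPRESS_DB_HOST at it.

```yaml
# one shared database service instead of one MariaDB per customer (sketch)
services:
  mariadb:
    image: mariadb:10.11
    environment:
      MARIADB_ROOT_PASSWORD: changeme-root
    volumes:
      - shared_db:/var/lib/mysql
    networks:
      - db   # customer stacks attach their app containers to this network

volumes:
  shared_db:

networks:
  db:
    external: true
```

Per-customer databases and users would then be created on that one server, for example with CREATE DATABASE and GRANT statements.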

Do you have a remote storage solution? Even if you create a simple volume, that would be local, which is not the best for Swarm services.

Yes, if redundancy is a must, you need some kind of remote storage, or something like GlusterFS to share storage between nodes.

Do you think performance would be a big issue? Maybe it uses a bit more memory, but it will be easier to manage the other way, and you also won’t have a single point of failure.

Since this is shared hosting and not just one or two friends who want to manage their own database server, it would be a big issue eventually. I have to admit that even though I know some database specialists, I am not one of them. When I started working with containers, I considered running separate database instances for each service, but I very soon realized that even if I used only one instance, it could use at least half of my resources. Then I figured out how I could optimize it so I could at least run one database server. I wasn’t sure that I was doing the right thing, but still, nobody told me that I should run multiple database servers. Maintaining one properly is more than enough.

Again, since this is about shared hosting, stability seems more important than the advantage of having that isolation. Of course, when someone has so many resources that they don’t know what to do with the amount of memory and CPU, then sure, running for example 20 database servers and giving one to each customer seems more secure from the customer’s point of view, but I would not consider it stable. The hosting provider would have to deal with monitoring and security in each instance. If it were an SQLite database, then I wouldn’t see any problem with multiple SQLite databases, since that is more lightweight, but MySQL, MSSQL, Oracle and Postgres are not databases that I would run per customer.

On the other hand, if the hosting provides some kind of namespaces, like Kubernetes does, and can give customers access to those namespaces, assuming there are enough resources and proper limits per customer, so that customers can run and maintain their own databases and other services, that could work.

From my perspective this is kind of comparable to self-service for development teams.

This is how I would implement it:

  • run one external replicated RDBMS with different databases/schemas
  • use Kubernetes as orchestrator with a dedicated namespace per customer
  • provide a private container registry, where customers push the image of their application
  • some sort of deployment pipeline that deploys the image as a pod
  • use something like https://kyverno.io to constrain what the customer is actually allowed to do in the cluster (see the sketch after this list)
  • provide a UI for tenant-based logs and metrics for the pod(s)
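To illustrate the Kyverno point (the policy name and the exact rule are only an example, not something from this thread), a cluster policy could reject any customer pod that does not declare resource limits:

```yaml
# Kyverno ClusterPolicy sketch: reject pods without CPU/memory limits
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
spec:
  validationFailureAction: Enforce
  rules:
    - name: containers-must-set-limits
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "CPU and memory limits are required for every container."
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    cpu: "?*"
                    memory: "?*"
```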

Ideally: provide a GitLab instance, make the customer push code there, provide a CI/CD runner per customer, and provide pipeline templates that generate images from the code and deploy those images as pods. Use resource constraints per namespace.
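The per-namespace resource constraints could look roughly like this (namespace name and quota values are arbitrary placeholders, not a recommendation):

```yaml
# sketch: cap what a single customer's namespace may consume
apiVersion: v1
kind: ResourceQuota
metadata:
  name: customer-quota
  namespace: customer-a
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
    pods: "20"
```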

I would neither provide SSH access nor support SFTP/FTPS file uploads. These are relics from the old world that are not necessary anymore.
