Let’s say you have a single machine running Docker and you want to host multiple projects (and their containers) on that one machine. Is this possible without assigning every container a different port?
For example, at the moment I have a machine running Docker, and each container is set up on a public network so that each container gets its own IP address. This works fine for hosting a single project’s containers (PHP, MySQL), but once you want to host multiple projects you can’t use the same port numbers any more.
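To illustrate the conflict, each project’s docker-compose.yml ends up wanting the same host ports (simplified; image names are just examples):

```yaml
# one project's docker-compose.yml (simplified); a second project
# publishing the same host ports can't start alongside it
services:
  php:
    image: php:8.2-apache
    ports:
      - "80:80"        # host port 80: only one project can claim it
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
    ports:
      - "3306:3306"    # same story for host port 3306
```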
Is it possible to create a stack with its own network, which can be assigned its own IP address by a DHCP server on the network (a router, for example), and then in each stack have containers such as PHP and MySQL set up on their original ports? That way there wouldn’t be any conflicts, as each stack would be accessible via its own IP address.
If this is possible, can it be set up in a docker-compose.yml file?
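Roughly, what I’m picturing is something like this (I don’t know if a macvlan network is even the right mechanism; the interface name and subnet below are made up):

```yaml
# rough idea of one stack's docker-compose.yml: every container sits
# directly on the LAN with its own address and its normal port
networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0                     # the host NIC on the LAN (assumed)
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
          ip_range: 192.168.1.192/27   # a slice reserved for this stack
services:
  php:
    image: php:8.2-apache
    networks: [lan]                    # gets its own LAN address, port 80
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
    networks: [lan]                    # its own address too, port 3306
```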
My server is running XenServer, so I can get around the above problem by creating a VM for each project, but I’m wondering if there’s a way to avoid this and do it all on one Docker host.
At the end of the day, clients make a TCP connection to a specific IP address and port, and that reaches a single service. (Docker or not.) Docker has a “preferred” model (where the host NATs internal networks, and every published service is reachable on a non-standard port on the host) and stepping outside this model can get tricky. It sounds like you’ve experimented with this some already.
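In compose terms, that preferred model just means a different host port per project; a minimal sketch (port numbers invented):

```yaml
# project 1 publishes container port 80 on host port 8001;
# project 2 would pick 8002, and so on: no conflicts, but odd ports
services:
  php:
    image: php:8.2-apache
    ports:
      - "8001:80"
```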
I’m not sure what your broader environment is, but I can give you two “think bigger” suggestions.
One common setup on AWS is to deploy a load balancer in front of services (again, not Docker-specific, just in general). If you’re not on AWS you could set up your own HAProxy to do this. Set your load balancer’s front end to your preferred IP address and port; set the back end to the Docker system’s IP address and per-service port. Now you need one load balancer per published service (and, if you’re not using a cloud-provided load balancer, some IP shenanigans to maintain the public addresses), but you get extra flexibility if you need more than one host, alternate container versions, and so on. (If you’re on AWS and are using its EC2 Container Service product, it will maintain load-balancer back ends for you too, IIRC.)
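As a rough sketch of the HAProxy variant (every address and port here is invented), the proxy can itself run as a container, with the front end bound to the public IP you want to present and the back end, in its config file, pointing at the Docker host’s per-service port:

```yaml
# hypothetical edge load balancer sitting in front of the Docker host
services:
  lb:
    image: haproxy:2.8
    ports:
      - "203.0.113.10:80:80"   # front end: the public IP/port you present
    volumes:
      # haproxy.cfg (not shown) holds the back end, e.g. the Docker
      # host's address and per-service port, 10.0.0.5:8001
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
```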
It also sounds like you’re getting up to the scale where Kubernetes could be beneficial (“multiple concurrent multi-container LAMP stacks” sounds like more than a single-host deployment). It’s not trivial to set up (unless you go in for something prebuilt like Google Kubernetes Engine), but each “stack” can become a Kubernetes namespace, you get natural DNS names and ports for services within the cluster, and a Kubernetes Service object can create and maintain the load balancers for you.
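For a sense of what that looks like, here is a hedged sketch of one “stack” as a namespace plus a Service (all names invented; the PHP pods are assumed to carry an app: web label):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: project1
---
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: project1
spec:
  type: LoadBalancer     # asks the cloud to create and maintain an LB
  selector:
    app: web             # assumed label on this stack's PHP pods
  ports:
    - port: 80
      targetPort: 80
# in-cluster DNS name: web.project1.svc.cluster.local
```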
In theory there’s no reason you couldn’t obtain or assign multiple IP addresses for the host, and use docker run -p’s optional IP address argument to bind published services to specific addresses (docker run -p 10.20.30.40:80:80 httpd). I don’t know how maintainable this would be going forward; I’d look into tools like Ansible to automate the steps.
If you go down this path, there is an option to “docker network create” (the bridge driver’s com.docker.network.bridge.host_binding_ipv4 option) to specify the default host IP address for ports published on that network, so I might create a Docker network per “stack”. Docker Compose can pass the same option through a network’s driver_opts, so if you can manage the host IP addresses yourself, you might be able to do the rest purely within Compose.
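A hedged compose sketch of that per-stack arrangement (the IP address is made up, and something outside Docker still has to put it on the host):

```yaml
# one stack's compose file; published ports default to this stack's host IP
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "10.20.30.40"
services:
  php:
    image: php:8.2-apache
    ports:
      - "80:80"          # binds 10.20.30.40:80 rather than 0.0.0.0:80
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
    ports:
      - "3306:3306"      # likewise 10.20.30.40:3306
```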
Just run a load balancer / reverse proxy. The proxy exposes the public port(s) available to external apps and maps them to the services and their internal private container ports (that’s the part that uses the service discovery Docker provides out of the box). You can apply this pattern in a couple of layers depending on what you need. It’s a common pattern to deploy an infrastructure load balancer to spread requests across the set of nodes in your cluster; on AWS that’s an ELB or an ALB, for layer 4 or layer 7 respectively (you can add host-header and/or resource-path routing rules to ALBs as well if you need to).

At the container level you can deploy another reverse proxy (say HAProxy) as a global service, i.e. one container on each node in your cluster. Again the proxy config maps the exposed external port (say 80, or 443 if you want to do SSL termination) to the internal app container ports, using the host header to select the specific app.
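Here’s a sketch of that container-level layer; I’ve used Traefik rather than HAProxy only because its host-header routing rules can live entirely in the compose file (every service name and hostname below is hypothetical):

```yaml
services:
  proxy:
    image: traefik:v2.10
    command:
      - --providers.docker=true                 # read routes from labels
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"                                 # the one public port
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  app1:
    image: php:8.2-apache
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app1.rule=Host(`project1.example.com`)"
  app2:
    image: php:8.2-apache
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app2.rule=Host(`project2.example.com`)"
# the proxy reaches app1/app2 over the compose network, so the apps
# themselves publish no host ports at all
```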