Hello everyone,
I am trying to build a platform where users can dynamically deploy one or more Docker containers, have all of those containers share a single hostname that is reachable from the internet, and have the requested ports of all containers mapped to that hostname. All of this needs to happen on a single Docker Engine, and we need to be able to dynamically create new deployments and dynamically tear them down.
Specifically, users specify the image(s) they would like to deploy from our private registry and the ports to be exposed on each container (they would also include environment variables, but that is unrelated to this question).
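To make the request shape concrete, here is roughly what a deployment request looks like on our side (all names here are illustrative, not our real code):

```python
from dataclasses import dataclass, field


@dataclass
class ContainerSpec:
    image: str                # e.g. "registry.example.com/frontend:latest"
    ports: list[int]          # ports to expose on the shared FQDN
    env: dict[str, str] = field(default_factory=dict)


@dataclass
class DeploymentRequest:
    user_uuid: str
    containers: list[ContainerSpec]


# Example: a frontend on port 80 plus a Postgres on port 5432
req = DeploymentRequest(
    user_uuid="123e4567-e89b-12d3-a456-426614174000",
    containers=[
        ContainerSpec(image="registry.example.com/frontend:latest", ports=[80]),
        ContainerSpec(image="registry.example.com/postgres:16", ports=[5432]),
    ],
)
```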
We would like to use the user’s UUID to create a custom FQDN, for example $UUID.containers.example.com, and have all containers requested by that user served on that FQDN. So if, for example, they asked for a Postgres database and a frontend, and requested that port 80 be forwarded on the frontend and port 5432 on the database container, then those would be available at $UUID.containers.example.com:80 and $UUID.containers.example.com:5432 respectively. Of course, before trying to launch the containers, we will check for port-binding conflicts and let the user know.
I’ve read that when the same hostname is set on two containers, Docker will automatically route a request to the container that exposes the requested port, so I tried to use this to my advantage.
Now the issue is how to dynamically map those FQDNs to the containers and their ports. Currently I am trying CoreDNS with this Corefile:
containers.example.com {
    forward . 127.0.0.11:53
    log
}

. {
    forward . 8.8.8.8 8.8.4.4
    log
}
with a wildcard DNS record on containers.example.com pointing to the Docker host. This basically forwards DNS queries for containers.example.com to Docker’s embedded DNS server. The issue is that if I run one sample container using
docker run -d --name nginx-sample1 --hostname uuid1.containers.example.com -p 80:80 nginx
and then another one using
docker run -d --name nginx-sample2 --hostname uuid2.containers.example.com -p 80:80 nginx
then of course I can’t bind both containers to the same host port. And if I bind only one of them and leave the other container with no published ports, requests are routed to the one with the bound port regardless of which FQDN is used, since both names resolve to the same host IP. So I think this approach is a dead end.
I am really not sure how I should approach this.
Also, ideally, later down the line we would like to allow users to VPN into their containers’ subnet so they can do debugging etc. Auth is currently a non-issue; we plan to tackle it once we have a good solution for this problem. We would also like to limit each user to just the containers they requested, so ideally the solution should not require a reverse proxy per user.
I am not sure whether I should be tackling this purely as an infrastructure problem or whether Docker’s built-in tools can cover it.
Using Traefik, or any other reverse proxy, seems out of the question, because users need to be allowed to bind to any port they like, and I cannot define every possible port as an entrypoint in Traefik. I could maybe force exceptions limiting bindings to specific ports.
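To illustrate what I mean: as far as I understand Traefik’s static configuration, every listening port has to be declared up front as an entryPoint, roughly like this (port names illustrative), which obviously doesn’t scale to "any port a user might ask for":

```yaml
entryPoints:
  web:
    address: ":80"
  postgres:
    address: ":5432"
  # ...one block per port a user could ever request -- not feasible
```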
How should I tackle this problem? I am quite a novice in networking, and every idea involving Docker’s networks seems blocked by the need for a reverse proxy, which I cannot use because of the requirement to allow all ports. Maybe a network driver like macvlan or ipvlan is actually useful in this case?
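The way I imagine macvlan/ipvlan helping: each user’s deployment would get its own routable IP, so port clashes between users disappear, and the wildcard record could become one A record per UUID. The bookkeeping side of that would be simple enough; a sketch of handing out per-user IPs from a subnet we would set aside for this (the /24 below is an assumption, not our real range):

```python
import ipaddress


class IPPool:
    """Hand out one IP per user deployment from a dedicated subnet,
    and reclaim it when the deployment is torn down."""

    def __init__(self, cidr: str, reserved: int = 2):
        net = ipaddress.ip_network(cidr)
        hosts = list(net.hosts())
        # Skip the first few addresses (gateway, the Docker host itself, ...)
        self._free = hosts[reserved:]
        self._by_user: dict[str, ipaddress.IPv4Address] = {}

    def allocate(self, user_uuid: str) -> ipaddress.IPv4Address:
        # Idempotent: a user keeps the same IP across repeated calls
        if user_uuid in self._by_user:
            return self._by_user[user_uuid]
        ip = self._free.pop(0)
        self._by_user[user_uuid] = ip
        return ip

    def release(self, user_uuid: str) -> None:
        self._free.append(self._by_user.pop(user_uuid))


pool = IPPool("192.168.50.0/24")
ip = pool.allocate("uuid1")   # first address after the reserved ones
```

Whether the routing/ARP side of macvlan actually works out for internet-facing traffic on one engine is exactly what I’m unsure about.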
Thank you so much for reading and I hope we can solve this!