This is more of a question and less of a bug. I’ve read over the forum topics for Docker for AWS and haven’t found an answer to my question about how Docker for AWS is expected to work with ELBs.
I’m currently using Docker version 1.13.1 with Docker for AWS in a dev environment consisting of 1 manager and 3 worker nodes.
Our hope was to use one Swarm cluster to support our dev, qa, and stage environments, with separate stack installs for each using isolated overlay networks, etc. Our application is made up of microservices, so we have around 10 Docker containers in total that all have to work together in order to have a functioning environment.
For status purposes we have internal routes on our microservices for version information, etc. In the past, while using ECS for deployment, we were able to use security groups to make “internal ports” available for diagnostics and then associate only 1 or 2 ports with the ELB for serving HTTP traffic to end-user clients.
The behavior we’ve observed with Docker for AWS is that any time a “host port” is specified in the docker-compose stack configuration, it is automatically mapped to the ELB. This makes sense, but unfortunately we don’t necessarily want to make all of those services publicly available.
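For reference, here’s a trimmed-down sketch of the kind of stack file we’re deploying (image names, service names, and ports are just placeholders):

```yaml
version: "3"
services:
  web:
    image: example/web:latest            # public-facing service
    ports:
      - "443:8443"                       # published host port -> becomes an ELB listener
    networks:
      - appnet
  status-api:
    image: example/status-api:latest     # internal diagnostics / version endpoint
    ports:
      - "9090:9090"                      # also becomes an ELB listener, which we don't want public
    networks:
      - appnet
networks:
  appnet:
    driver: overlay
```

Both published ports end up as listeners on the Swarm ELB, even though only the first is meant to be reachable by end users.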
At the moment we’ve denied ourselves access to the internal ports, and the ELB and the public-facing app are working as intended.
So in order to satisfy our ideal situation, it would be nice to have separate internal and external ELBs. A couple of ideas we’ve considered are as follows:
Convert the Docker for AWS ELB into an internal ELB and use another ELB to map only the public ports. The behavior would be the same in that everything with a host port is attached to an ELB, but nothing would be publicly available. One issue with this is that AWS doesn’t support ELB => ELB communication, so to make this work you need something like [Public ELB] => [NGINX Cluster] => [Private Swarm ELB].
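Roughly, on the AWS side that would look something like the following CLI sketch (load balancer names, listeners, subnets, and security group IDs are all placeholders; we haven’t wired this up end to end):

```sh
# Hypothetical second, internet-facing ELB that exposes only the public ports.
aws elb create-load-balancer \
  --load-balancer-name swarm-public \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0aaa1111

# The Swarm-facing ELB would be created (or recreated) with --scheme internal,
# i.e. the [Private Swarm ELB] in the chain above.
aws elb create-load-balancer \
  --load-balancer-name swarm-internal \
  --listeners "Protocol=TCP,LoadBalancerPort=9090,InstanceProtocol=TCP,InstancePort=9090" \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0ccc3333 \
  --scheme internal
```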
The next option is to expose only public ports like 80 and 443 via the ELB, not use any other host ports, and set up nginx as a reverse proxy with virtual host configuration for the various services. There’s still the issue of locking down who has access to what.
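The nginx side of that would look something like this (hostnames, upstream service names, and allowed CIDRs are placeholders; the upstream names assume Swarm’s built-in DNS on the overlay network):

```nginx
# Public virtual host: only the end-user facing app is proxied.
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://web:8443;      # resolved via the Swarm overlay network DNS
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# Internal-only virtual host for the diagnostic/version routes.
server {
    listen 80;
    server_name status.internal.example.com;

    location / {
        allow 10.0.0.0/8;                # e.g. VPN / peered VPC ranges only
        deny all;
        proxy_pass http://status-api:9090;
    }
}
```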
Something else we tried was opening 80 and 443 to 0.0.0.0/0 in the security groups and creating internal security groups for the other ports that should not be public. However, this didn’t seem to work in the internal scenario above with [Public ELB] => [NGINX Cluster] => [Private Swarm ELB]: granting the NGINX cluster’s security group access to the ports of interest on the external ELB’s security group failed. This is with existing VPC peering and routing that was already providing VPN access into the Swarm as well as access to external database resources.
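For completeness, this is roughly how we set those rules up with the CLI (group IDs and the internal port are placeholders):

```sh
# 80/443 open to the world on the public ELB's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaa1111 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaa1111 --protocol tcp --port 443 --cidr 0.0.0.0/0

# Internal port reachable only from the NGINX cluster's security group,
# referencing the source security group instead of a CIDR block.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0ccc3333 --protocol tcp --port 9090 \
  --source-group sg-0bbb2222
```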
So the question is: how is a Docker Swarm cluster supposed to be used? Single purpose? I’m thinking that’s not the case. I feel like the public, wide-open ELB that ships with Docker for AWS is great for a turnkey setup, but not great for the scenario above.