Expectations for Docker for AWS ELB

This is more of a question and less of a bug. I’ve read over the forum topics for Docker for AWS and haven’t found an answer to the question of how Docker for AWS is expected to behave in conjunction with ELBs.

I’m currently using Docker version 1.13.1 with Docker for AWS in a dev environment consisting of 1 manager and 3 worker nodes.

Our hope was to use one Swarm cluster to support our dev, qa, and stage environments, with separate stack installs for each using isolated overlay networks, etc. Our application makes use of microservices, so we have a total of around 10 Docker containers that all have to work together in order to have a functioning environment.

For status purposes we have internal routes on our microservices for version information, etc. So in the past, while using ECS for deployment, we were able to use security groups to make “internal ports” available for diagnostics and then associate only 1 or 2 ports with the ELB for serving HTTP traffic to end-user clients.

With the behavior we’ve observed with Docker for AWS, any time a “host port” is specified in the docker-compose stack configuration it is automatically mapped to the ELB. This makes sense, but unfortunately we do not necessarily want to make those services publicly available.
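For reference, a minimal stack file fragment along these lines (image and service names are made up) shows the behavior: every entry under `ports` is published through the Swarm routing mesh, and Docker for AWS wires each published port into the ELB as a listener.

```yaml
version: "3"
services:
  api:
    image: example/api:latest          # hypothetical image
    ports:
      - "8080:8080"                    # published port -> becomes a public ELB listener
    networks:
      - backend
  worker:
    image: example/worker:latest       # hypothetical image; no ports -> nothing on the ELB
    networks:
      - backend
networks:
  backend:
    driver: overlay
```

Deployed with `docker stack deploy`, the 8080 listener shows up on the ELB whether or not we actually want it public.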

At the moment we’ve denied ourselves access to the internal ports, and we have ELB parity, with the public-facing app working as intended.

So, in order to satisfy our ideal situation, it would be nice to have internal vs. external ELBs. A couple of ideas we’ve considered are as follows:

The first option is to convert the Docker for AWS ELB into an internal ELB and use another ELB to map only the public ports. The behavior would be the same in that everything with a host port is attached to the ELB, but nothing would be publicly available. One issue with this is that AWS doesn’t support ELB => ELB communication, so in order to make this work you have to have something like [Public ELB] => [NGINX Cluster] => [Private Swarm ELB].

The next option is to expose only public ports like 80 and 443 via the ELB, not use any host ports for the rest, and set up nginx with a reverse-proxy virtual-host configuration for the various services (see the sketch below). There’s still the issue of locking down who has access to what.
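A rough sketch of that option, with hypothetical image and service names; the proxy configuration itself is assumed to be baked into the `nginx-edge` image, with virtual hosts proxying to the backend service names over the overlay network:

```yaml
version: "3"
services:
  edge:
    image: example/nginx-edge:latest   # hypothetical image carrying the virtual-host config
    ports:
      - "80:80"                        # the only published ports, so the only ELB listeners
      - "443:443"
    networks:
      - backend
  api:
    image: example/api:latest          # no published ports; reachable only via the overlay
    networks:
      - backend
  admin:
    image: example/admin:latest
    networks:
      - backend
networks:
  backend:
    driver: overlay
```

The proxy reaches the backends by service name (e.g. `http://api:8080`) over the overlay network, so nothing besides 80/443 ever touches the ELB; the trade-off is that access control for the proxied paths then has to live in nginx rather than in security groups.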

Something else we tried was opening the security groups for 80 and 443 to 0.0.0.0/0 and setting up internal security groups for the other ports that should not be public. However, this didn’t seem to work in the internal scenario above with [Public ELB] => [NGINX Cluster] => [Private Swarm ELB]: giving the NGINX cluster’s security group permission to the external ELB security group’s ports of interest failed. This is with existing VPC peering and routing that was already functioning to provide VPN access into the Docker swarm as well as access to external database resources.

So the question is: how is a Docker swarm cluster supposed to be used? Single purpose? I’m thinking that’s not the case. I feel like the wide-open public ELB that ships with Docker for AWS is great for a turnkey setup, but not great for the scenario above.


This is a great question. Others have asked for similar flexibility and we’re exploring options. We’re currently limited by the inflexibility of the Docker port-publish syntax.

There might be better workarounds than the ones you’re currently using. Can you elaborate on how you’d like to access the ports that serve version information? Note that if you don’t publish ports publicly (and thus through the ELB), the services’ ports will still be available internally to other services on the same overlay network.
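For illustration, a minimal stack along these lines (service and image names are made up) keeps the version/diagnostic endpoints completely off the ELB while leaving them reachable from anything attached to the same overlay network:

```yaml
version: "3"
services:
  api:
    image: example/api:latest          # hypothetical; serves /version on 8080 internally
    networks:
      - backend
    # no "ports" section: nothing is published, so nothing is added to the ELB
  diagnostics:
    image: example/diagnostics:latest  # hypothetical helper that polls the other services
    networks:
      - backend
networks:
  backend:
    driver: overlay
```

From the `diagnostics` container, `curl http://api:8080/version` resolves through the overlay network’s service DNS, with no public exposure required.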

@friism Thanks for your quick response.

We’ve been using the publish-ports feature and the integration with the external ELB to expose our services for consumption.

What we’re finding is that we would like to set up different security groups for the exposed ports on the ELB. This is mainly an issue when you have external applications interfacing with a swarm cluster. In our case we have a couple of API services that are B2B with another AWS account in a specific security group, so we want to limit access to ports x, y, and z, but leave ports 80 and 443 open to 0.0.0.0/0.

My understanding from reading the forums is that any manual configuration of ports can and will be overridden by the automated publishing of public ports.

So one option would be: if the Swarm ELB were entirely private, you could use a separate external ELB for public access to specific ports, separating public from private/internal while remaining external to the swarm.

The only other thing we’ve found contention with is having multiple lower environments hosted on one cluster. I’m assuming that’s an expected use case. We either have to expose multiple ports on the swarm ELB and use an external ELB to map 80/443 => a non-standard swarm port, or we have to run one nginx service listening and terminating SSL on 80/443 for the entire swarm (sketched below). This also has the added requirement of not being automatic.
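As a rough sketch of the single-nginx approach, assuming the dev and qa stacks were deployed separately (e.g. `docker stack deploy -c app.yml dev` and `... qa`), which would create overlay networks named `dev_backend` and `qa_backend`, one edge stack could attach to both and route by hostname:

```yaml
version: "3"
services:
  edge:
    image: example/nginx-edge:latest   # hypothetical image: terminates SSL, routes by Host header
    ports:
      - "80:80"
      - "443:443"
    networks:
      - dev_backend
      - qa_backend
networks:
  dev_backend:
    external:
      name: dev_backend                # overlay network created by the dev stack
  qa_backend:
    external:
      name: qa_backend                 # overlay network created by the qa stack
```

That keeps 80/443 as the only ELB listeners for every environment, but the virtual-host mapping in the proxy configuration has to be maintained by hand, which is the “not automatic” part.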

I’m all ears on improving what we have today if there are additional workarounds.


Any updates on this?