How do people manage IP:port mappings?

I am running a handful of containers in a managed Docker Cloud setup.

There are 8 containers across 3 nodes (hosts).

Each of the 8 containers exposes ports 22, 80, and 443 - a typical LAMP-ish setup.

Obviously I can’t take over host ports 80/443/etc., so I leave it up to Docker to do random port assignment, or even fixed port assignments (e.g. for port 443, container 2 gets 32443, container 3 gets 33443, container 4 gets 34443, etc.).
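In compose/stackfile-style terms, the fixed-assignment idea would look something like this (service names, image, and port numbers are just illustrative):

```yaml
# Each container keeps its own internal 80/443; the host port is offset per container.
web2:
  image: example/lamp-app   # placeholder image
  ports:
    - "32080:80"    # host 32080 -> container 80
    - "32443:443"   # host 32443 -> container 443
web3:
  image: example/lamp-app
  ports:
    - "33080:80"
    - "33443:443"
```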

However, the containers could actually be on any of the host nodes. Docker Cloud provides the *.svc.dockerapp.io endpoint, and even *.cont.dockerapp.io endpoints, but I have to build a service registry myself to keep tabs on the current mappings and then subscribe to updates (which I have halfway working, but it’s kind of ugly).

The issue is that all of these have their own frontend addresses:

https://web1.foo.com
https://web2.foo.com
https://web3.foo.com
https://web4.foo.com
etc.

So they could all go through the same “load balancer” with a wildcard cert, which would redirect traffic to the appropriate service or container endpoint. That is what I was thinking I would have to do, but it seems pretty gross and very manual. Is there something someone has already made for this? It seems like AWS ECS has managed to figure this out, or maybe they just give each container its own private IP (unsure) - which would be great, but it doesn’t look like I can do that under Docker Cloud with --net=host, or in AWS at all, I guess?

You don’t need to use a wildcard certificate; you can use something that can do SNI and present the correct certificate based on what the client is requesting. All major browsers should support this today (5 years ago, this wasn’t as much of an option).

You could set up an nginx service that listens on port 443 and has multiple server blocks, one for each service you are running. Each block can reference a different SSL certificate.
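Roughly like this (hostnames, certificate paths, and backend addresses are placeholders you’d fill in from your own mappings):

```nginx
# Two server blocks sharing port 443; nginx picks the right one (and cert) via SNI.
server {
    listen 443 ssl;
    server_name web1.foo.com;
    ssl_certificate     /etc/nginx/ssl/web1.foo.com.crt;
    ssl_certificate_key /etc/nginx/ssl/web1.foo.com.key;
    location / {
        proxy_pass http://10.0.1.11:32080;   # wherever that container currently lives
    }
}

server {
    listen 443 ssl;
    server_name web2.foo.com;
    ssl_certificate     /etc/nginx/ssl/web2.foo.com.crt;
    ssl_certificate_key /etc/nginx/ssl/web2.foo.com.key;
    location / {
        proxy_pass http://10.0.1.12:33080;
    }
}
```

nginx reads the SNI hostname from the TLS handshake to choose the matching server block, so each site gets its own certificate even though they all share port 443.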

I’ll let someone else comment on cloud best practices for this use case; there may be an existing project out there that can query the cloud API and build your config file on the fly, for example.

Yes, that’s what I’m currently looking at doing - this project employs a similar concept: https://github.com/willrstern/docker-cloud-nginx-load-balancing

However, there are a couple of glitches with it for my setup, and I realized I could get by with something simpler. Also, I know PHP, not Node/JS, so tweaking it would be easier if it were in my own language.

That project has each node run a single load-balancing instance that listens on the Docker API websocket for changes; in theory those listeners could miss events if they aren’t all online at the same time, and they don’t build an initial index on start. Instead, my approach is to set up a job that periodically pulls the relevant data from the Docker Cloud API and builds the mappings myself.
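Roughly what I have in mind is something like this (a very rough PHP sketch; the API URL, auth header, and field names below are placeholders, not the real Docker Cloud API shapes):

```php
<?php
// Sketch: pull service/port data from the Docker Cloud API and regenerate nginx config.
// Endpoint URL, credentials, and the JSON field names are placeholders to adjust.

$apiUrl = 'https://cloud.docker.com/api/...';   // placeholder endpoint
$token  = getenv('DOCKERCLOUD_AUTH');           // placeholder credential

$ch = curl_init($apiUrl);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => ['Authorization: ' . $token, 'Accept: application/json'],
]);
$response = curl_exec($ch);
curl_close($ch);

$services = json_decode($response, true);

// Build "frontend hostname => node_ip:published_port" mappings.
$upstreams = [];
foreach ($services['objects'] ?? [] as $svc) {
    // Field names are illustrative; the real payload will differ.
    $upstreams[$svc['frontend_hostname']] = $svc['node_ip'] . ':' . $svc['published_port'];
}

// Render simple nginx server blocks, write them out, reload only if something changed.
$config = '';
foreach ($upstreams as $hostname => $endpoint) {
    $config .= "server {\n  listen 443 ssl;\n  server_name {$hostname};\n"
             . "  location / { proxy_pass http://{$endpoint}; }\n}\n";
}
if (@file_get_contents('/etc/nginx/conf.d/services.conf') !== $config) {
    file_put_contents('/etc/nginx/conf.d/services.conf', $config);
    exec('nginx -s reload');
}
```

Run from cron every minute or so, that would rebuild the mappings without needing a long-lived websocket listener.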

Again, this still feels oddly manual in a world with etcd, Kubernetes, CoreOS, Fleet, etc. - it seems like I am overengineering, or tackling a problem that much smarter people have already written tools for.

Another follow-up question that I haven’t been able to figure out.

I wrote a little tool called “ec2ddns” which helps assign useful hostnames to an internal Route 53 zone (https://github.com/mike503/ec2ddns)

This little tool works great for IP-level resolution, but once I begin to use containers, I need IP and port for proper service orchestration. Relying on DNS resolution (with a low TTL) when service changes happen is a great way to reuse a fundamental tool, but now I need to add another dimension into the mix, and I’m not sure how to approach that.
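For context, what ec2ddns is effectively keeping up to date is a plain A record with a short TTL, roughly an UPSERT change batch like the one below (hostname and IP are made up). There’s nowhere in a record like that to carry the published port, which is the missing dimension:

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "web2.internal.foo.com",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "10.0.1.23" }]
      }
    }
  ]
}
```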

Ideally I could just have all my containers get their own IP; this would alleviate all these issues. But net=bridge doesn’t do that, and net=host doesn’t appear to do that exactly either, and those are Docker Cloud’s only net options right now.

(I still don’t know whether AWS VPC DHCP will even issue multiple IPs per network interface or not.)

The solution I use (with Amazon ECS, and this seems to be the recommended ECS setup) is to set up an Elastic Load Balancer for each service, and then create a Route 53 DNS name as an alias for the load balancer. I statically assign ports to each service, and the ECS agent (given the right permissions via an IAM role) registers hosts with the load balancer.

This winds up being a lot of moving parts, especially given the additional setup required to use ECS, and I use Terraform to manage it. It’s not quite as boilerplate as I’d like (one service exposes 4 ports and is public; most others have only one port and are internal-only), but it works fairly well.
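In very trimmed-down terms, the Terraform shape per service looks something like this (all names, ports, and IDs are placeholders, and the real configs add health checks, security groups, and so on):

```hcl
# Classic ELB for one service, forwarding to a statically assigned host port.
resource "aws_elb" "web" {
  name    = "web-service"          # placeholder name
  subnets = ["subnet-xxxxxxxx"]    # placeholder subnet

  listener {
    lb_port            = 443
    lb_protocol        = "https"
    instance_port      = 32443     # the static host port assigned to this service
    instance_protocol  = "http"
    ssl_certificate_id = "arn:aws:iam::123456789012:server-certificate/web"  # placeholder
  }
}

# Friendly DNS name aliased to the load balancer.
resource "aws_route53_record" "web" {
  zone_id = "Zxxxxxxxxxxxxx"       # placeholder hosted zone
  name    = "web1.foo.com"
  type    = "A"

  alias {
    name                   = "${aws_elb.web.dns_name}"
    zone_id                = "${aws_elb.web.zone_id}"
    evaluate_target_health = true
  }
}

# ECS service that registers its tasks with the ELB via the service IAM role.
resource "aws_ecs_service" "web" {
  name            = "web"
  cluster         = "my-cluster"      # placeholder cluster
  task_definition = "web:1"           # placeholder task definition
  desired_count   = 2
  iam_role        = "ecsServiceRole"  # placeholder role

  load_balancer {
    elb_name       = "${aws_elb.web.name}"
    container_name = "web"
    container_port = 443
  }
}
```

The ELB’s static instance_port has to line up with the static host port in the task definition, which is most of the manual bookkeeping.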