DB service: restrict new service based on IP without downtime

Hello community,

TL;DR: I’d like a simple way to restrict access in MongoDB based on source IP, but my client web containers all share the same network and it’s impossible to make their IP addresses static.

Here’s an architecture challenge for you: we have one MongoDB service with many databases, one per customer.
Connected to that, we have many client web containers, one container per customer. Our goal is that web container A cannot access database B.

With standalone Docker, it’s simple:

  • you create web container B (with a fixed IP address in docker-compose.yml)
  • you set that source IP as an ACL in MongoDB for database B.
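
A minimal sketch of what that standalone setup looks like in docker-compose.yml; the service name, image, subnet, and address are made-up for illustration:

```yaml
# docker-compose.yml (standalone Docker, not Swarm) — illustrative names
services:
  web-b:
    image: our-web-app:latest        # hypothetical image
    networks:
      backend:
        ipv4_address: 172.28.0.42    # fixed IP, referenced in the MongoDB ACL

networks:
  backend:
    ipam:
      config:
        - subnet: 172.28.0.0/16
```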

With Docker Swarm, however, I cannot do that (set a fixed IP address for the replicas of web container B). Here’s what I tried:

My idea was then to create one network per customer:
Web replicas B → overlay network B → database B (ACL: CIDR address of network B)
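
If the per-customer networks get predictable CIDRs, the tenant → subnet mapping can be derived rather than tracked by hand. A minimal sketch using Python’s `ipaddress` module, assuming a `10.210.0.0/16` supernet reserved for tenant overlays (the supernet and tenant names are made up):

```python
import ipaddress

# Hypothetical supernet reserved for per-tenant overlay networks.
SUPERNET = ipaddress.ip_network("10.210.0.0/16")

def tenant_subnets(supernet, prefix=24):
    """Carve the supernet into fixed-size per-tenant subnets."""
    return list(supernet.subnets(new_prefix=prefix))

subnets = tenant_subnets(SUPERNET)
tenants = ["customer-a", "customer-b", "customer-c"]
allocation = dict(zip(tenants, subnets))

for tenant, net in allocation.items():
    # This CIDR would go both into `docker network create --subnet=...`
    # and into the MongoDB ACL for that tenant's database.
    print(tenant, net)
```

Each tenant’s CIDR then only has to be written in two places (the overlay network and the ACL), and both can be generated from the same allocation table.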

However, adding a new network to the MongoDB service means updating it, which means recreating instances, which means re-importing the encryption-at-rest key each time: I don’t want that. So why not provision a bunch of networks in advance and use them as we add customers?
My problem here is that I cannot change the name of a network,
so I cannot rename, say, a network called provision-14 to network-B.

Of course, there’s the option of a small reverse proxy between my networks and my MongoDB service, but I’d like to avoid adding a new component.

I don’t see anyone struggling with the same type of architecture issue online, so I wonder if I’m missing something here. Is there a simpler architectural solution?

Thanks in advance to anyone who reads and answers!

You can set the internal Docker IP of every container you launch, so you could also use that IP in the MongoDB ACL. It’s just a lot of manual work.

You could also create a separate Docker network for every client with its own IP range, then add that Docker network to the MongoDB instance. That also enables IP ACLs, even for a whole IP range if you need to scale to more containers per client.

We run a MongoDB cluster with a database for every Docker Swarm service per tenant; we just use a different DB user per tenant.
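
For the per-tenant-user approach, MongoDB can additionally pin a user to a source network via `authenticationRestrictions` on `createUser`, which combines both ideas (user-level and IP-level isolation). A sketch of the command document, built here with plain Python; the tenant name, password, and CIDR are illustrative, and the final call (commented out) would need a live server and pymongo:

```python
# Sketch: a tenant-scoped MongoDB user restricted to that tenant's
# overlay network. Tenant name, password, and CIDR are made up.
create_user_cmd = {
    "createUser": "tenant_b",
    "pwd": "change-me",                        # use a real secret store
    "roles": [{"role": "readWrite", "db": "tenant_b"}],
    "authenticationRestrictions": [
        {"clientSource": ["10.210.1.0/24"]}    # overlay network of tenant B
    ],
}
print(create_user_cmd["createUser"])

# Against a live deployment it would run roughly like:
# from pymongo import MongoClient
# client = MongoClient("mongodb://admin@mongo:27017")  # hypothetical URI
# client["tenant_b"].command(create_user_cmd)
```

With this, even a stolen connection string for tenant B is only usable from tenant B’s network.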

Thank you for your answer!

That’s exactly what we want to do.

We did that before, automatically, with standalone Docker (no Swarm): one network for all tenants, examine the free IP addresses in that network, assign one of them manually in the docker-compose file, and put it in the MongoDB ACL.
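
That “examine free IP addresses” step can be sketched like this, assuming you have the subnet and the list of already-assigned addresses (in practice parsed from `docker network inspect`; the values here are made up):

```python
import ipaddress

def next_free_ip(subnet_cidr, assigned):
    """Return the first host address in the subnet not already assigned."""
    net = ipaddress.ip_network(subnet_cidr)
    taken = {ipaddress.ip_address(a) for a in assigned}
    for host in net.hosts():
        if host not in taken:
            return host
    raise RuntimeError("subnet exhausted")

# Addresses already in use (would come from `docker network inspect`).
assigned = ["172.28.0.1", "172.28.0.2", "172.28.0.3"]
print(next_free_ip("172.28.0.0/24", assigned))  # -> 172.28.0.4
```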

But it seems it’s impossible to do that with Docker Swarm (set a fixed IP address for a container).
The line is simply ignored, and forums say it’s not possible.

The problem with that option is that the Docker service needs to restart, thus the DB too, which means re-importing the encryption key each time we add a tenant…

Yeah, Docker Swarm is a bit limited once you really get into it. Same with configs and secrets: the container is always restarted. And of course I don’t want to restart my proxy under heavy load just because one of many TLS certs has been updated.

Because of the limitations and the almost non-existent development, many have left Swarm for k8s; our project just doesn’t have 2 FTE to manage it all :laughing:

If it’s about security, there are many things you can improve with Docker: