Network access control between containers for reverse proxy

Forgive me but my google-fu is failing me.

I'm new to Docker and on my journey to learn the basics so I can implement it at my workplace if appropriate.

To start with I’m trying to understand the appropriate way to set up a reverse proxy.

In its simplest form I have a reverse proxy and an application whose web interface I want to expose through the proxy. I understand how to create a proxy network which publishes the port to the host, and how to add a private network between the proxy and the app.
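For example, this is roughly the setup I have working so far (the image and network names here are just placeholders):

docker network create proxy
docker network create app-private
docker run -d --name reverse-proxy --network proxy -p 80:80 -p 443:443 nginx
docker network connect app-private reverse-proxy
docker run -d --name myapp --network app-private myimage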

My question is about what happens when the application is more complex and is listening on multiple ports.

In a traditional network you would restrict the communication between the proxy and the application to only the single web port.

In a container environment, I'm assuming that by adding the proxy and the application to the same network, the proxy would be able to access all ports the application is listening on. If the proxy is compromised in any way, this would expose all ports and not just the one required for access.

If I only want to expose a single port, say for a web interface, between the proxy and the app, is there a way to do that? It seems like a security concern to allow unrestricted access to all ports.

Forgive me for what is certainly my lack of understanding and thanks for any help/advice.

That is a great question!
Unfortunately, Docker does not support that kind of restriction in its networks.

You may be able to set up iptables rules within the container after setup.
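Something along these lines might work (an untested sketch; it assumes the image actually ships iptables, that you grant the container the NET_ADMIN capability, and "myimage" is a placeholder):

docker run -d --name myapp --cap-add NET_ADMIN --network application myimage
docker exec myapp iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
docker exec myapp iptables -A INPUT -p tcp --dport 443 -j ACCEPT
docker exec myapp iptables -A INPUT -p tcp -j DROP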

Generally, I believe Kubernetes has that kind of configuration in its networking.
I do think it would be a great addition to Docker, though. I wonder if there's a feature request for it, because you're not the first to ask.

I'm not sure I get the problem. You don't have to make every port of a container public. You decide which one will be available through the proxy. Even without Docker, if someone breaks into the server that runs the proxy, that person could access everything on the network.

If an app has ports that should be protected even when someone has access to the proxy server, you can add a second container in the "private Docker network" listening on only one port, and configure the proxy to forward the request to that container, which then forwards it to the final container in the private Docker network. It would be two proxy containers then. The internal proxy could be a socat container, I guess, but I haven't used it enough to know if it has any downsides in the long term.

Of course, if someone breaks into your external proxy container and from there into the second proxy container, you are back where you started. But this is one reason why it is good practice not to install any tools in a container that the process doesn't need. If the proxy, for example, runs as a non-root user, the hacker can run commands only as non-root and will not have the rights to install any tools. It's even better if there is no package manager inside. Yes, a hacker could make a binary available on the internet and download it, but that is very tricky if you don't have a download tool like wget or curl. It can be done even with a pure bash shell, but it's much harder, and in some cases there is no shell in the container either.

It all depends on how secure a system you want.

Thanks for the reply. Unfortunately your answer is what I was coming to. Sounds like there is another tool I need to learn. As always :slight_smile:

A good practice is to separate responsibilities, which is the heart and soul of containerization/SRP: the user interface in one container and the internal solution in another, then separate the networks, as sketched below:

  • The proxy network will have the proxy and the frontend
  • The app network will have the frontend and the backend
  • If you have a database you could also have a third internal network with the backend and the db containers
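A rough version of that layout in plain docker commands (a sketch only; every image and network name here is a placeholder, and --internal cuts the db network off from the outside world):

docker network create proxy
docker network create app
docker network create --internal db
docker run -d --name reverse-proxy --network proxy -p 443:443 nginx
docker run -d --name frontend --network proxy frontend-image
docker network connect app frontend
docker run -d --name backend --network app backend-image
docker network connect db backend
docker run -d --name database --network db -e POSTGRES_PASSWORD=example postgres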

Thanks for the reply. I'm struggling to follow you on the specific terms you're referencing, but I'll do some reading around to get a better understanding.

To try and explain myself a little better:

Let's say I have an application which is running a web server and has a user front end exposed on port 443 and an admin front end exposed on port 8443. This container is part of a network called 'application'.

Now I can run a proxy, let's say NPM (Nginx Proxy Manager) for simplicity, on a network called 'proxy'. I can also add NPM to the 'application' network. On doing this, however, I'm essentially allowing NPM to see both the user and admin ports. In a traditional network you would either expose these ports on a different VLAN/physical card, or restrict access with firewall rules.
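In docker commands, what I mean is roughly this (the NPM image name is the one from Docker Hub; my app image is a placeholder):

docker network create proxy
docker network create application
docker run -d --name myapp --network application myimage
docker run -d --name npm --network proxy -p 80:80 -p 81:81 -p 443:443 jc21/nginx-proxy-manager
docker network connect application npm

At this point NPM can reach both myapp:443 and myapp:8443, which is exactly my concern.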

It's my understanding that there is no built-in method for doing this in Docker, and as long as a container is on the network, it is essentially a basic switched network where everything can see everything.

Hi, thanks.

I can see how this would work for a multi-tiered application stack, but not necessarily for apps which might have multiple ports exposed for the same component: a web server from my example above, or a SIEM listening on different ports for different devices, or a DB listening on different ports for different apps/interfaces.

I'm probably missing the mark a bit in expecting Docker to perform like 'full fat' infrastructure, and that's where I maybe need to look at Kubernetes or something similar.

No, I get it
I have a similar container that holds an app which listens on five different ports.

As @rimelek suggested, a second, different proxy would require whoever hacks you to find exploits in both services to establish communication with your app's other ports.

If there is an admin port which automatically listens in the same container as the user frontend port, that is more like an application issue, at least when using containers. When you run a process without containers, you have the entire host as your "playground" and you can make the processes listen on any IP address. When you use containers, you do it to isolate the app from the host. If you put two applications into one container where one is an admin frontend, the problem is not just what the proxy will see, but that anyone who hacks the user frontend will immediately have access to the admin port internally. Even if your admin frontend checks where the request is coming from (as it could in the Symfony PHP framework, for example), the request will come from localhost. You would have a similar problem if you put a database service into an app container. Some setups allow DB access only from localhost, making sure that only a root user can access it, but here you would be making it available to your application, which is used by a public community.

You can have multiple apps in one image for simplicity, for example, but it is better to make it optional which one you want to run. For example:

docker run -d --name admin-frontend -e FRONTEND_MODE=admin myimage

and

docker run -d --name user-frontend -e FRONTEND_MODE=user myimage

If these containers need some data like uploaded images in a photo gallery you could use common volumes:

docker run -d --name admin-frontend -e FRONTEND_MODE=admin -v photo-gallery-data:/app/data myimage

and

docker run -d --name user-frontend -e FRONTEND_MODE=user -v photo-gallery-data:/app/data myimage

My recommendation of an additional proxy container was for the case when you are not the developer of the app but still need to use it, so you give access to the user frontend through the proxy while you map a loopback IP from the host to the admin frontend.

docker run -d --name admin-frontend -e FRONTEND_MODE=admin -p 127.0.0.1:8443:8443 myimage

The above command shows only how I would map the host port to the admin frontend. That way you could access the admin frontend through an SSH tunnel, but if you have multiple host networks, you could use a LAN IP address too.
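For example, from your own machine (hypothetical user and host names):

ssh -L 8443:127.0.0.1:8443 user@docker-host

Then you can open https://localhost:8443 locally to reach the admin frontend.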

Then a second proxy container would forward requests to the user frontend, using nginx or even socat, which I mentioned earlier: https://hub.docker.com/r/alpine/socat
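A minimal sketch of the socat variant (untested; the network and container names are placeholders, and the image's entrypoint is socat itself):

docker run -d --name internal-proxy --network application alpine/socat tcp-listen:443,fork,reuseaddr tcp-connect:user-frontend:443
docker network connect proxy internal-proxy

The external proxy on the "proxy" network can then reach internal-proxy:443 and nothing else of the app.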

Only the socat proxy would be available on the proxy network, for example. If you find a proxy image which is "distroless" or a single binary, then the attacker would not even have a way to execute a shell in the container. But I don't have more time, so this is what I could describe quickly for now.

UPDATE:

By the way, when using Kubernetes, you normally have one network, but you can use Network Policies, which are great but not always easy to configure. That could solve your problem, but I would not switch to Kubernetes only for this. If you want to use Kubernetes anyway, then okay, but otherwise you would have bigger problems with Kubernetes if you are not familiar with it.
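Just to illustrate (a minimal, untested example; the labels are placeholders), a Network Policy that allows only the proxy pods to reach port 443 of the app pods could look like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-proxy-to-web-only
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: proxy
      ports:
        - protocol: TCP
          port: 443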

Thanks. I appreciate you taking the time to reply, and it's certainly food for thought.

With regards to Kubernetes, I'm probably going to end up there anyway, as ultimately there are requirements for high availability on at least some of the services we use. I just didn't fancy jumping in at the deep end and thought I would see what Docker could do by itself first.