I wanted my application to be available on the internet ONLY through a reverse proxy (in my case Caddy) and not also directly via its port.
For example, I have myapp, which uses port 1234. I could access myapp at myapp.example.com through the reverse proxy, but also at ServerIpAddress:1234, which is what I did not want.
I have found the following solution and I'd like to know whether it's correct or there is a better one.
I created a new network with docker network create proxyNet.
In the docker-compose file of my app I set the port to 127.0.0.1:1234:1234 and the network to proxyNet.
Example:
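Something like this, a simplified sketch of my compose file (the image name is a placeholder):

```yaml
services:
  myapp:
    image: myapp:latest            # placeholder image
    ports:
      - "127.0.0.1:1234:1234"      # published only on the loopback interface
    networks:
      - proxyNet

networks:
  proxyNet:
    external: true                 # created earlier with: docker network create proxyNet
```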
In terms of Docker, a bridge network uses a software bridge which allows containers connected to the same bridge network to communicate, while providing isolation from containers which are not connected to that bridge network. The Docker bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other.
So basically a Docker bridge lets you connect many containers to one network, and they can automatically communicate with each other, even by the DNS names of their services.
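A quick way to see that name resolution in practice (the network and container names, and the busybox image, are just examples):

```
docker network create testNet                      # user-defined bridge
docker run -d --name app01 --network testNet busybox sleep 3600
docker run -d --name app02 --network testNet busybox sleep 3600
docker exec app01 ping -c 1 app02                  # resolved by container name
```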
As good practice, expose only the reverse proxy service to the world and leave the rest of the containers without published ports. Then, in the reverse proxy container, you can use the container names to forward traffic to them.
If your reverse proxy is outside the bridge network and runs as a separate application directly on the host, then you do need to publish the application's port to localhost so that the host-based reverse proxy can reach it over localhost and that port.
In your case, I would remove the entry that publishes the port to localhost, add your Caddy service to the compose file, and expose only Caddy to the world.
Then configure Caddy in its container to forward traffic to the application container by name, so that the application service is not accessible via localhost but only inside the Docker network. This is the safest solution.
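In compose terms, the idea is roughly this (a sketch; the images and domain are placeholders, and a real setup would also mount the Caddyfile and data volumes):

```yaml
services:
  caddy:
    image: caddy:latest
    ports:
      - "80:80"                # only the proxy is published to the world
      - "443:443"
    networks:
      - proxyNet
    # the Caddyfile then forwards by container name, e.g.:
    #   myapp.example.com {
    #       reverse_proxy myapp:1234
    #   }

  myapp:
    image: myapp:latest        # placeholder; note: no ports section at all
    networks:
      - proxyNet

networks:
  proxyNet:
    external: true
```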
Thank you @matixooo for the answer. I created the external network proxyNet because I wanted to connect all my containers to it (they are in different docker-compose.yml files), so that they are accessible from the internet only through the reverse proxy (which is also on the same network).
But I have a question:
How can I handle situations where two or more containers use the same port?
In my example, for app1 I could use something like this (simplified; the image name is a placeholder):
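```yaml
services:
  app1:
    image: app1:latest               # placeholder
    ports:
      - "127.0.0.1:10001:1234"       # host port 10001 -> container port 1234
    networks:
      - proxyNet

networks:
  proxyNet:
    external: true
```

and the same for app2 with 127.0.0.1:10002:1234.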
If the app1 and app2 containers are in the same external Docker network, you don't need to publish them on localhost; they are reachable on their container port inside the network.
Excuse my ignorance, but by saying “not exposing to localhost” do you mean to remove 127.0.0.1 from ports or delete the ports completely?
If I remove 127.0.0.1, then the container is accessible at serverip:10001, which I don't want.
If I remove the ports completely, there would be two apps with the same port 1234 on the same bridge network, and I don't know if that works.
If you completely remove the ports entry from the compose file, the application will be visible only in the bridge network. It will not be reachable from outside the Docker network, so it will not be available on the server IP or on localhost either.
If you do that, the applications will be reachable by your proxy on the bridge without publishing any port on the host.
You will be able to ping them from another container in this network (you can check it by running ping app02 inside the app01 container).
Docker bridges work like this: they allow communication between containers within one network without the need to publish ports to the outside.
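For your two apps that would look roughly like this (a sketch with placeholder images):

```yaml
services:
  app1:
    image: app1:latest    # placeholder; listens on 1234 inside its container
    networks:
      - proxyNet

  app2:
    image: app2:latest    # placeholder; also listens on 1234; no conflict,
                          # since each container has its own network namespace
    networks:
      - proxyNet

networks:
  proxyNet:
    external: true
```

Both can listen on 1234 internally because each container gets its own IP on the bridge, so the proxy reaches them as app1:1234 and app2:1234.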
If you read my previous messages, you will see that app1 and app2 use the same port 1234. The only way to access both is to map ports 10001:1234 and 10002:1234 in docker-compose, and that is why in the proxy I use reverse_proxy app1:10001 and reverse_proxy app2:10002.
I suppose there is no workaround for this.
Update:
According to this post, it is possible. I’ll try it.
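If that's right, the Caddyfile can target the container port directly and should look roughly like this (the domains are placeholders):

```
app1.example.com {
    reverse_proxy app1:1234
}

app2.example.com {
    reverse_proxy app2:1234
}
```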
I am late to the party, but I would appreciate an answer.
I have two servers with two different Docker instances. Let's call them server_x and server_y.
Both server_x and server_y run applications. My nginx reverse proxy runs on server_y. All applications on server_y are in a Docker network and not accessible except through the reverse proxy. Can I somehow do the same with server_x? Or is there a safer method? Can I bind the incoming ports on server_x to server_y:port or something? Or some magic with the Docker daemon?
I haven't read all the posts here, but your issue seems to be different and not what this topic was about. The original problem was a forwarded port that is not necessary on the target container when you have a proxy that receives the request and sends it to the target container over the Docker network.
You have two servers, so you need one of the following:
Two domains, one for each server, with both servers running a reverse proxy.
An overlay network between the two servers, so the reverse proxy on one server can communicate with a container on the other.
You could play with SSH tunnels as well, but I don't recommend it unless you just want to test something quickly and you know how to create SSH tunnels.
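If you do want to test it quickly, a tunnel could look something like this, run on server_y (the user, host, and ports are placeholders):

```
# forward 127.0.0.1:10001 on server_y to the app bound to localhost on server_x;
# -N opens the tunnel without running a remote command
ssh -N -L 127.0.0.1:10001:127.0.0.1:10001 user@server_x
```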
I am sorry if it's too off-topic, but I got my idea from this quote, and it's still in the theme of "I don't want my containers accessible on localhost or ip:port but only via reverse proxy". But I appreciate your answer, thank you.
I have two physical machines in my own network. I am exposing them on one domain to have them bundled. I wanted to make it more secure and restrict access by ip:port so that everything goes only through nginx. It works great on the machine where nginx is hosted.
Then this might be the way to go. In order to use an overlay network, you would need to enable swarm mode on both hosts to make them swarm nodes, and perform docker stack deploy deployments instead of docker compose deployments. A Docker overlay network will be spanned across the swarm nodes, so that the reverse proxy can use this container network to forward traffic to containers, regardless of which host they are running on.
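A rough outline of the steps (the stack name is a placeholder; the join token is printed by swarm init):

```
# on the first host (it becomes a manager):
docker swarm init

# on the second host, join with the token printed above:
docker swarm join --token <token> <manager-ip>:2377

# deploy with a stack instead of compose; overlay networks defined
# in the compose file are created across the nodes:
docker stack deploy -c docker-compose.yml mystack
```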
Please read the docs to understand the advantages and drawbacks of the swarm mode.
This sounds great and like a perfect fit.
But a bold question: isn't Docker Swarm dying/dead? What's the status and roadmap? But I will look into this!
The built-in Swarm mode is still in active development. Recently, CSI support was added for volume plugins. I am not sure I would call it dead or dying. Standalone Swarm, though, is deprecated and has been replaced by Swarm mode. The roadmap is covered in the Moby milestone: 25.0.0 Milestone · GitHub.
It is true that most enterprises base their container strategy on Kubernetes. Swarm can still make sense for smaller teams, especially if they lack experience with Kubernetes.