But my in-house application is using port 9100, the Couchbase container is also using port 9100 (for cluster administration), and I don't want to change the port in my in-house application.
The Couchbase container has no option to override port 9100, so even if I publish port 9100 to another port (using the -p option), it won't change anything, because Couchbase is still listening on port 9100.
Is there a workaround in Docker so that the containers can communicate on port 9100 without conflicting with the local application?
Since your containers are using the host network, the only way is to reconfigure the application inside the container. Or don't use the host network and publish the port instead.
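For example, roughly like this (just a sketch; the image name and host port are placeholders for your actual setup):

    # host network: the container binds host ports directly, so its 9100
    # collides with the in-house application
    docker run -d --network host couchbase

    # published port instead: the container still listens on 9100 internally,
    # but the host side of the mapping can be any free port
    docker run -d -p 19100:9100 couchbase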
I can't use port forwarding because I can't change the default listening port of the application inside the container.
For example, if I use port forwarding (-p 19100:9100) for my Couchbase container on both Docker servers, the second server will still try to reach server 1 on port 9100, because there is no option to override the default configuration inside the container.
So I need to find a solution with Docker to avoid the port conflict between the container and the application hosted locally on the server (both use port 9100). I don't know if there is a solution from Docker?
A port can only be bound by a single process for a specific IP. You cannot run an application on the host using a port, start another process trying to bind the same port, and expect it to succeed, unless both bind to different IPs. And this is already without Docker being in the picture. Docker (or anything else) is not able to bypass these restrictions.
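The "different IPs" case can be expressed directly with -p, which accepts a host IP in front of the port mapping. A rough sketch, with placeholder addresses that would have to exist on your host:

    # bind the published container port to one specific host address only
    docker run -d -p 192.168.1.20:9100:9100 couchbase

    # the in-house application then has to bind another address on the host
    # (e.g. 192.168.1.21:9100) instead of 0.0.0.0:9100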
Just out of curiosity, why do you think this is something Docker should solve?
If both applications use the HTTP protocol, you could run a reverse proxy on port 9100 and use different ports for your app and the container. Then use domain-name-based reverse proxy rules to forward the traffic accordingly. If you can't set up domain names in your local DNS server, you can modify your hosts file to achieve the same (at least locally on your machine).
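A rough nginx sketch of that idea; the hostnames and backend ports are made up, and it only applies if the traffic on port 9100 is really HTTP:

    # two name-based virtual hosts sharing port 9100
    server {
        listen 9100;
        server_name myapp.local;             # hypothetical name for the in-house app
        location / {
            proxy_pass http://127.0.0.1:19101;   # app moved to another port
        }
    }

    server {
        listen 9100;
        server_name couchbase.local;         # hypothetical name for the container
        location / {
            proxy_pass http://127.0.0.1:19100;   # container published on 19100
        }
    }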
Just out of curiosity, why do you think this is something Docker should solve?
Because I thought that Docker could solve this problem with its networking. For example, when I run a Docker Swarm, or two containers on the same server, I can create their own network. I don't know if it is possible to create a specific network (without Swarm) across several servers.
I'm just trying to find out if there is a solution with Docker.
I am not sure how the Swarm example aligns with your situation. Running containers in different networks doesn't change the fact that both cannot publish the same host port on the same node, nor that they cannot publish a host port that is already used by another process.
You never told us what your host OS is. Only on Linux can you leverage a macvlan network to assign a LAN IP to a container.
If you don't use Linux, you need to find a way for your OS to use a vNIC and assign it its own IP. Then you need to make sure both applications do not bind to 0.0.0.0, but instead bind to the specific IP of their interface.
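On Linux, the macvlan variant could look roughly like this (the subnet, gateway, parent interface, and IP are examples and must match your actual LAN):

    # create a macvlan network that hands out addresses from the LAN
    docker network create -d macvlan \
      --subnet=192.168.1.0/24 \
      --gateway=192.168.1.1 \
      -o parent=eth0 couchbase-lan

    # give the container its own LAN IP, so its port 9100 no longer
    # collides with port 9100 on the host's own address
    docker run -d --network couchbase-lan --ip 192.168.1.50 couchbase

Keep in mind that with macvlan the host itself usually cannot reach the container's address directly without an extra macvlan interface on the host.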
I think I understand. Although it doesn't solve the issue in the topic title, if an overlay network is configured between the two servers, the two instances don't have to use a public IP address to communicate; they can use an IP address on the Docker network instead. Port 9100 is for node-to-node communication, so it doesn't have to be available to users.
Yes, I was referring to the overlay network, but it requires initiating/joining a Docker Swarm.
I don't know if it is a good or bad idea to create a swarm for only 2 servers.
But if I have no choice, I will go with Docker Swarm.
You could do something similar without Docker Swarm, but this is the easiest way if you are already using Docker. Without Docker Swarm you would need another service, and probably some datastore for that service, to create an overlay network. You can create a swarm cluster just for the overlay network, without running containers in Swarm mode.
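Roughly like this; the node address, join token, and names are placeholders:

    # on server 1: initialize the swarm
    docker swarm init --advertise-addr 10.0.0.1

    # on server 2: join with the token printed by the init command
    docker swarm join --token <worker-token> 10.0.0.1:2377

    # on a manager: create an attachable overlay network so plain
    # "docker run" containers can use it
    docker network create -d overlay --attachable couchbase-net

    # on each server, start the container on that network instead of the
    # host network; the nodes can then reach each other on port 9100 over
    # the overlay, by container name, without touching the host's port 9100
    docker run -d --network couchbase-net --name couchbase1 couchbase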