I have two containers, one running Jupyter Notebook and another running MongoDB. Both are on the default bridge network, and I'm accessing Jupyter Notebook through my host.
But for some reason my Jupyter Notebook can't communicate with MongoDB. I've already port-mapped both MongoDB and Jupyter Notebook to the host, and it still doesn't work.
This article describes what I'm trying to do. Like in the article, I've tried mapping MongoDB's port to the host's interface IP on the docker bridge network, and it still doesn't work.
I've also run an Ubuntu container within the same bridge network and tried nmap from it. It seems that MongoDB is accepting the connection but isn't responding back, because port 27017 is listed as "filtered". So the problem is that MongoDB can't respond back to the sender.
I tried connecting using MongoDB's IP directly, as listed in
docker network inspect bridge, and it works. But I'm trying to create a centralized IP so that all of my future containers can access and communicate with each other.
I'd like to avoid using
--link, because I want a centralized IP that can reach all the ports exposed by each container. And I'm trying to avoid
--net=host as well.
You could try adding one container to the other's namespace with --network container:[container_name].
Then they'll share 172.17.0.X and you won't have any issues with cross-connection. You just have to publish all the ports you need on the first one.
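A minimal sketch of that setup; the container names `mongo` and `jupyter` and the image names are placeholders, not taken from the thread:

```shell
# Start mongodb first and publish every port the pair will need:
# ports cannot be published on a container that joins another one's namespace.
docker run -d --name mongo -p 27017:27017 -p 8888:8888 mongo

# Join the jupyter container to mongo's network namespace.
# Both containers now share the same interfaces, so from inside jupyter
# mongodb is reachable at localhost:27017.
docker run -d --name jupyter --network container:mongo jupyter/base-notebook
```

Note that the 8888 mapping for Jupyter has to be published on the first container, since the second one shares its network stack.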
Ohh, I just learned docker can do this. Thanks! Your method works.
Though do you know why it's doing this? The thing is, I did the exact same thing on OSX and it worked. The problem is only happening in a Linux environment.
Wait, now that I think about it more, isn't that the same as linking in the first place? Or is it different?
And can I add a container's namespace to a container that is already running?
The article shared in the first post is from 9 years ago; it's an outdated approach that no one would use today.
The suggestion of @dockeracious is to join one container into the network namespace of another container. Basically it allows one container to use the network interface of the other; even localhost refers to the same network interface in both containers.
I doubt that either of those two approaches is the most useful solution in your scenario.
Though, what's wrong with creating a user-defined docker network, attaching the containers to this network, and then leveraging DNS-based service discovery for the communication amongst containers in this network? You just use the container name and container port to reach the service of another container. This is the common solution for your scenario.
Note: service discovery is not available on the default bridge network, which is why container linking was needed there; with service discovery, container linking is not required anymore. Container linking is deprecated anyway.
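A sketch of that approach; the network name `appnet`, the container names, and the image names are placeholders:

```shell
# Create a user-defined bridge network; unlike the default bridge,
# it provides DNS-based service discovery.
docker network create appnet

# Attach the containers; each container name becomes a DNS name on that network.
docker run -d --name mongo --network appnet mongo
docker run -d --name jupyter --network appnet -p 8888:8888 jupyter/base-notebook

# From inside the jupyter container, mongodb is now reachable by name,
# e.g. with a connection string like mongodb://mongo:27017
```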
Yes, a user-defined network was one of the options I've looked at. I'm trying to find a way to have every service exposed on a single IP, so every service can be reached just by specifying a different port, without the need to resort to per-container IPs or DNS.
The reason I'm trying to do it this way is that my containers aren't actually from the official docker hub MongoDB image. The containers I'm working with were made by someone else (Person A). A user-defined network would work, but I also have another container from Person A running a Kafka server, and according to the documentation Person A wrote, I have to set additional environment variables when running the container, including the host IP, to make it work. So I'm trying to find a way to do that.
Also, the method described in the article worked in my OSX environment but not in my Linux environment. Do the two docker engines work differently?
Oh and thank you for taking your time to help me out. Really appreciate it.
It's been a couple of years since I worked with Kafka, but even then it supported addressing its peers and the ZooKeeper cluster by service name, and ZooKeeper allowed the same for its nodes.
You might want to dig deeper into the image descriptions; it feels like you're trying to tackle a problem that most likely doesn't exist once you fully understand the application/service you're trying to use.
Rule of thumb: everything that can communicate over a network connection should be a separate container. Containers in the same network should use DNS-based service discovery, using container names and container ports to interact with each other.
I consider direct communication using the container IP an antipattern (unless it's macvlan/ipvlan), as container IPs will change at some point.
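One reason names are more robust than IPs: a container can be connected to or disconnected from a user-defined network while it is running, and the IP it receives on that network is assigned dynamically. A sketch with placeholder names:

```shell
# Attach an already-running container to a user-defined network.
docker network connect appnet my-running-container

# Its DNS name on that network is its container name; the IP shown here
# is assigned by docker and may differ after restarts or reconnects.
docker network inspect appnet

# Detach it again without stopping the container.
docker network disconnect appnet my-running-container
```

This also answers the earlier question about running containers: yes, network attachment does not require a restart.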
Docker on Linux? As in Docker-CE or Docker Desktop? Docker-CE aligns with each and every piece of official docker documentation (except extensions). Docker-CE is the only Docker version that runs natively in its natural habitat: a Linux system with a Linux kernel.
Every(!) Docker Desktop version uses a utility VM (even on Linux) to run the docker engine; the VM brings the Linux kernel to the table, but is not able to support host, ipvlan, or macvlan networks.
I can’t tell you why it’s not working in your Linux environment. Choose the approach you see fit.