Cluster in swarm and individual container out of swarm

I created a swarm in data center A and deployed a 3-container MongoDB replica set (mongo1, mongo2 and mongo3). It works like a charm: mongo1 is the primary and the other two are secondaries.

I also created a swarm in data center B. Here I deployed a single MongoDB container (mongo10) and exposed it on port 27019.

In swarm A I connected to the MongoDB primary node and added mongo10 with the rs.add command. The command succeeded, but the member's state is not SECONDARY, only connected.
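For reference, the add looked roughly like this (the hostname and port are placeholders for whatever address mongo10 is published on, not taken from my actual setup):

```
// mongo shell, on the primary in DC A
rs.add("dcb.example.com:27019")
// rs.status() then lists the new member,
// but its state never reaches SECONDARY
```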

In swarm B I can see in mongo10's log files that it does receive connections from the primary node, but it doesn't know how to reach back to the primary or any of the secondary nodes. There is no replica set configuration on mongo10.

How can I communicate with mongo1, mongo2 and mongo3 from mongo10? Any ideas?

Just out of curiosity: if you take Docker out of the equation, would MongoDB's replication even work across data centers? I have no idea which consensus algorithm it uses, but generally consensus algorithms require low-latency network connections and tend to behave oddly once time drift comes into play.

Depending on the consensus algorithm, being able to establish a data connection where a master can access the remote node won't be sufficient. Usually each node needs to be able to communicate directly with every other node. Did you forget to mention that you published ports for the nodes in DC A as well? Or did you simply not publish those ports?
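In case it helps, a sketch of what publishing each DC A member could look like (service names, external ports and the replica set name rs0 are assumptions, not taken from your post):

```
# DC A: publish a distinct external port per replica set member,
# so mongo10 in DC B can dial back to every node
docker service create --name mongo1 --publish 27017:27017 mongo --replSet rs0
docker service create --name mongo2 --publish 27018:27017 mongo --replSet rs0
docker service create --name mongo3 --publish 27019:27017 mongo --replSet rs0
```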

I am quite confident that the MongoDB documentation addresses these aspects. Once you know how consensus in MongoDB works, you should be able to map its requirements to your containerized instances.
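To tie that back to your problem: the replica set configuration stored on the primary has to list every member under an address that is reachable from both data centers, otherwise mongo10 can never dial back. A minimal sketch, assuming the published ports above (all hostnames are placeholders):

```
// mongo shell, on the primary: replace swarm-internal names
// with addresses that resolve and route from DC B as well
cfg = rs.conf()
cfg.members[0].host = "dca.example.com:27017"
cfg.members[1].host = "dca.example.com:27018"
cfg.members[2].host = "dca.example.com:27019"
rs.reconfig(cfg)
```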