One node in a 3-node cluster randomly goes down and restarts all its services

Hi! I have a cluster with three manager nodes. Two are in one data centre (New Jersey) and one is in another data centre (Los Angeles), both from the same company. I have no problems with the nodes in the same data centre, but the lone one is causing me trouble: all the services running on it randomly restart. I thought it was overloaded, so I created another node in the same data centre and assigned it as a manager, leaving my previous node as a worker (3 managers, 1 worker). The logs also point to a network connection issue, so I ran
docker swarm update --dispatcher-heartbeat 15s
hoping it would solve the problem, but it didn't. I have also tried disabling the firewalls, but nothing works. It happens at least once a day.
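For reference, Swarm needs ports 2377/tcp (cluster management), 7946/tcp and 7946/udp (node-to-node gossip) and 4789/udp (overlay network traffic) open between all nodes. A minimal sketch of the rules, assuming ufw (adapt to whatever firewall you actually use):

# Hypothetical ufw rules; run on every node in the swarm
ufw allow 2377/tcp   # cluster management traffic between managers
ufw allow 7946/tcp   # node-to-node communication (memberlist gossip)
ufw allow 7946/udp   # node-to-node communication (memberlist gossip)
ufw allow 4789/udp   # VXLAN overlay network data traffic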
It seems that the nodes become unreachable, so Docker declares them down and the services restart. Can I do something about it, like increasing a timeout? I have been dealing with this problem for more than a month now.
I am also monitoring my bad VPSes with ping, and they stop responding to ping at the same time they are declared down; however, when they are not running Docker, nothing bad happens.
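Since the log below says "Was able to connect ... over TCP but UDP probes failed", I suspect UDP 7946 is being dropped between the data centres. A rough way to test that with netcat (hypothetical IP placeholder; flags differ between netcat variants, and Docker must be stopped so the port is free):

# On the Los Angeles node: listen on UDP port 7946
nc -l -u 7946
# On a New Jersey manager: send a test datagram and check it arrives
echo ping | nc -u <la_node_ip> 7946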

These are my logs from journalctl -u docker.service. Sorry the log is so long (Balthasar is the name of the good manager).

22:04:44  level=info msg="NetworkDB stats balthasar(b6904450a99f) - netID:wa717ln1jiyh9uvlmtzi9fpr5 leaving:false netPeers:4 entries:10 Queue qLen:0 netMsg/s:0"
22:07:34  level=warning msg="memberlist: Was able to connect to fad00c2e2cfe over TCP but UDP probes failed, network may be misconfigured"
22:07:35  level=info msg="memberlist: Suspect 7fb3152ef19f has failed, no acks received"
22:07:36  level=warning msg="memberlist: Refuting a suspect message (from: fad00c2e2cfe)"
22:07:36  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:07:38  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:07:39  level=warning msg="memberlist: Failed fallback TCP ping: timeout 2s: read tcp ip_good_manager:34528->ip_bad_manager:7946: i/o timeout"
22:07:39  level=info msg="memberlist: Suspect fad00c2e2cfe has failed, no acks received"
22:07:42  level=info msg="memberlist: Suspect 7fb3152ef19f has failed, no acks received"
22:07:45  level=info msg="memberlist: Suspect fad00c2e2cfe has failed, no acks received"
22:07:48  level=info msg="memberlist: Suspect 7fb3152ef19f has failed, no acks received"
22:07:50  level=warning msg="memberlist: Refuting a suspect message (from: fad00c2e2cfe)"
22:07:52  level=warning msg="memberlist: Failed fallback TCP ping: timeout 2s: read tcp ip_good_manager:56656->ip_bad_worker:7946: i/o timeout"
22:07:52  level=info msg="memberlist: Suspect 7fb3152ef19f has failed, no acks received"
22:07:54  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:07:56  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:07:57  level=info msg="memberlist: Suspect fad00c2e2cfe has failed, no acks received"
22:07:58  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:07:59  level=info msg="memberlist: Suspect 7fb3152ef19f has failed, no acks received"
22:08:00  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:08:02  level=info msg="Node 7fb3152ef19f/ip_bad_worker, left gossip cluster"
22:08:02  level=info msg="Node 7fb3152ef19f change state NodeActive --> NodeFailed"
22:08:02  level=info msg="Node 7fb3152ef19f/ip_bad_worker, added to failed nodes list"
22:08:02  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:08:02  level=info msg="memberlist: Suspect 7fb3152ef19f has failed, no acks received"
22:08:02  level=warning msg="memberlist: Failed fallback TCP ping: timeout 2s: read tcp ip_good_manager:54486->ip_bad_manager:7946: i/o timeout"
22:08:04  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:08:06  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:08:07  level=info msg="memberlist: Suspect fad00c2e2cfe has failed, no acks received"
22:08:08  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:08:09  level=warning msg="memberlist: Was able to connect to fad00c2e2cfe over TCP but UDP probes failed, network may be misconfigured"
22:08:10  level=info msg="Node 7fb3152ef19f/ip_bad_worker, joined gossip cluster"
22:08:10  level=info msg="Node 7fb3152ef19f change state NodeFailed --> NodeActive"
22:08:10  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:08:11  level=info msg="memberlist: Suspect 7fb3152ef19f has failed, no acks received"
22:08:12  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:08:14  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:08:14  level=info msg="memberlist: Suspect 7fb3152ef19f has failed, no acks received"
22:08:15  level=warning msg="NetworkDB stats balthasar(b6904450a99f) - healthscore:1 (connectivity issues)"
22:08:16  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:08:18  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:08:18  level=info msg="memberlist: Suspect 7fb3152ef19f has failed, no acks received"
22:08:20  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:08:21  level=info msg="memberlist: Suspect 7fb3152ef19f has failed, no acks received"
22:08:22  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:08:23  level=info msg="1fe34a303190e623 [term: 612] received a MsgAppResp message with higher term from 440064512ab283f0 [term: 614]" module=raft node.id=rt4btww2cqun50a843uqz707b
22:08:23  level=info msg="1fe34a303190e623 became follower at term 614" module=raft node.id=rt4btww2cqun50a843uqz707b
22:08:23  level=info msg="raft.node: 1fe34a303190e623 lost leader 1fe34a303190e623 at term 614" module=raft node.id=rt4btww2cqun50a843uqz707b
22:08:23  level=error msg="soft state changed, node no longer a leader, resetting and cancelling all waits" raft_id=1fe34a303190e623
22:08:23  level=info msg="dispatcher stopping" method="(*Dispatcher).Stop" module=dispatcher node.id=rt4btww2cqun50a843uqz707b
22:08:23  level=info msg="dispatcher session dropped, marking node lwvzyn1wburd3djv6hj3yb4j1 down" method="(*Dispatcher).Session" node.id=lwvzyn1wburd3djv6hj3yb4j1 node.session=dc3wuiil2eu15wm40kerts72w
22:08:23  level=error msg="failed to remove node" error="rpc error: code = Aborted desc = dispatcher is stopped" method="(*Dispatcher).Session" node.id=lwvzyn1wburd3djv6hj3yb4j1 node.session=dc3wuiil2eu15wm40kerts72w
22:08:23  level=info msg="dispatcher session dropped, marking node rt4btww2cqun50a843uqz707b down" method="(*Dispatcher).Session" node.id=rt4btww2cqun50a843uqz707b node.session=nqrh8ef9lfd26b9ae3fdsve7x
22:08:23  level=error msg="failed to remove node" error="rpc error: code = Aborted desc = dispatcher is stopped" method="(*Dispatcher).Session" node.id=rt4btww2cqun50a843uqz707b node.session=nqrh8ef9lfd26b9ae3fdsve7x
22:08:23  level=info msg="leadership changed from not yet part of a raft cluster to no cluster leader" module=node node.id=rt4btww2cqun50a843uqz707b
22:08:23  level=error msg="agent: session failed" backoff=100ms error=EOF module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:23  level=info msg="manager selected by agent for new session: { }" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:23  level=info msg="waiting 45.611199ms before registering session" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:24  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:08:26  level=warning msg="memberlist: Refuting a suspect message (from: fad00c2e2cfe)"
22:08:26  level=warning msg="Health check for container 5ee7b0729c5dbee3c81e0e0617b5d207a6aa7b9001095b42f4a61a5a2e1ce059 error: timed out starting health check for container 5ee7b0729c5dbee3c81e0e0617b5d207a6aa7b9001095b42f4a61a5a2e1ce059"
22:08:26  level=error msg="stream copy error: reading from a closed fifo"
22:08:26  level=error msg="stream copy error: reading from a closed fifo"
22:08:26  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:08:28  level=error msg="agent: session failed" backoff=300ms error="session initiation timed out" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:28  level=info msg="manager selected by agent for new session: { }" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:28  level=info msg="waiting 67.430941ms before registering session" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:28  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:08:29  level=info msg="memberlist: Suspect 7fb3152ef19f has failed, no acks received"
22:08:30  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:08:32  level=error msg="error receiving response" error="rpc error: code = Canceled desc = context canceled"
22:08:32  level=info msg="memberlist: Suspect 7fb3152ef19f has failed, no acks received"
22:08:33  level=info msg="1fe34a303190e623 [term: 614] received a MsgAppResp message with higher term from 440064512ab283f0 [term: 615]" module=raft node.id=rt4btww2cqun50a843uqz707b
22:08:33  level=info msg="1fe34a303190e623 became follower at term 615" module=raft node.id=rt4btww2cqun50a843uqz707b
22:08:33  level=error msg="agent: session failed" backoff=700ms error="session initiation timed out" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:33  level=info msg="manager selected by agent for new session: { }" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:33  level=info msg="waiting 454.997666ms before registering session" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:33  level=error msg="agent: session failed" backoff=1.5s error="rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online." module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:33  level=info msg="manager selected by agent for new session: { }" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:33  level=info msg="waiting 1.322367422s before registering session" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:35  level=error msg="agent: session failed" backoff=3.1s error="rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online." module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:35  level=info msg="manager selected by agent for new session: { }" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:35  level=info msg="waiting 2.410015662s before registering session" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:37  level=error msg="agent: session failed" backoff=6.3s error="rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online." module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:37  level=info msg="manager selected by agent for new session: { }" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:37  level=info msg="waiting 2.646439535s before registering session" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:39  level=info msg="1fe34a303190e623 [term: 615] ignored a MsgVote message with lower term from 3ab8b21d102ef6d2 [term: 613]" module=raft node.id=rt4btww2cqun50a843uqz707b
22:08:40  level=error msg="agent: session failed" backoff=8s error="rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online." module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:40  level=info msg="manager selected by agent for new session: { }" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:40  level=info msg="waiting 5.984853659s before registering session" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:41  level=info msg="1fe34a303190e623 [term: 615] received a MsgVote message with higher term from 440064512ab283f0 [term: 616]" module=raft node.id=rt4btww2cqun50a843uqz707b
22:08:41  level=info msg="1fe34a303190e623 became follower at term 616" module=raft node.id=rt4btww2cqun50a843uqz707b
22:08:41  level=info msg="1fe34a303190e623 [logterm: 612, index: 134503, vote: 0] rejected MsgVote from 440064512ab283f0 [logterm: 612, index: 134488] at term 616" module=raft node.id=rt4btww2cqun50a843uqz707b
22:08:46  level=error msg="agent: session failed" backoff=8s error="rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online." module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:46  level=info msg="manager selected by agent for new session: { }" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:46  level=info msg="waiting 20.074513ms before registering session" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:46  level=error msg="agent: session failed" backoff=8s error="rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online." module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:46  level=info msg="manager selected by agent for new session: { }" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:46  level=info msg="waiting 2.210326974s before registering session" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:48  level=error msg="agent: session failed" backoff=8s error="rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online." module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:48  level=info msg="manager selected by agent for new session: { }" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:48  level=info msg="waiting 1.217940596s before registering session" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:48  level=error msg="Handler for GET /v1.24/services returned error: rpc error: code = DeadlineExceeded desc = context deadline exceeded" spanID=923ba22279511f9c traceID=a6987c92003e24d239ad800bb57e0ac2
22:08:49  level=error msg="agent: session failed" backoff=8s error="rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online." module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:49  level=info msg="manager selected by agent for new session: { }" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:49  level=info msg="waiting 4.123352829s before registering session" module=node/agent node.id=rt4btww2cqun50a843uqz707b
22:08:49  level=error msg="Handler for GET /v1.24/services returned error: rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online." spanID=6f05224718c61076 traceID=fddc7852f5804714b800b8ad11b43725
22:08:50  level=error msg="Handler for GET /v1.24/services returned error: rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online." spanID=ac7a25df2a30a807 traceID=698b681eee3c6f25433f45192b963cb6
22:08:50  level=info msg="1fe34a303190e623 [term: 616] received a MsgVote message with higher term from 3ab8b21d102ef6d2 [term: 617]" module=raft node.id=rt4btww2cqun50a843uqz707b
22:08:50  level=info msg="1fe34a303190e623 became follower at term 617" module=raft node.id=rt4btww2cqun50a843uqz707b
22:08:50  level=info msg="1fe34a303190e623 [logterm: 612, index: 134503, vote: 0] cast MsgVote for 3ab8b21d102ef6d2 [logterm: 612, index: 134503] at term 617" module=raft node.id=rt4btww2cqun50a843uqz707b
22:08:50  level=info msg="raft.node: 1fe34a303190e623 elected leader 3ab8b21d102ef6d2 at term 617" module=raft node.id=rt4btww2cqun50a843uqz707b
22:08:53  level=info msg="initialized VXLAN UDP port to 4789 " module=node node.id=rt4btww2cqun50a843uqz707b
22:08:56  level=warning msg="rmServiceBinding 286e9ed66c5e08457eab4ae78b8f4beec1feacba5c64760bb810bb312deb193b possible transient state ok:false entries:0 set:false "
22:09:01  level=warning msg="Entry was not in db: nid:egzju64md3tehva0l7v5zvnht eid:8a2cff9722926e5fba895574e611004b8fa985b431b4507d239e757119f4c975 peerIP:10.0.3.8 peerMac:02:42:0a:00:03:08 isLocal:false vtep:ip_bad_manager"
22:09:01  level=warning msg="Peer delete operation failed" error="could not delete fdb entry for nid:egzju64md3tehva0l7v5zvnht eid:8a2cff9722926e5fba895574e611004b8fa985b431b4507d239e757119f4c975 into the sandbox:Search neighbor failed for IP ip_bad_manager, mac 02:42:0a:00:03:08, present in db:false"
22:09:02  level=warning msg="rmServiceBinding 6146728b1595a4096d1cd0a25491907fab357c138028b1cbece95ebaf830e9cf possible transient state ok:false entries:0 set:false "
22:09:03  level=warning msg="rmServiceBinding c9bb7de13e64c9188da1d0c2130be22bde8dd43fa5a1682df35ecebe04947320 possible transient state ok:false entries:0 set:false "
22:09:03  level=warning msg="Entry was not in db: nid:egzju64md3tehva0l7v5zvnht eid:55eb4efab0a1c6dfe2537de0f5ec10271c1585dc0f2d84d3b0c464af56bdd297 peerIP:10.0.3.8 peerMac:02:42:0a:00:03:08 isLocal:false vtep:ip_bad_manager"
22:09:03  level=warning msg="Peer delete operation failed" error="could not delete fdb entry for nid:egzju64md3tehva0l7v5zvnht eid:55eb4efab0a1c6dfe2537de0f5ec10271c1585dc0f2d84d3b0c464af56bdd297 into the sandbox:Search neighbor failed for IP ip_bad_manager, mac 02:42:0a:00:03:08, present in db:false"
22:09:05  level=warning msg="Entry was not in db: nid:egzju64md3tehva0l7v5zvnht eid:ce81fffa23efd77465afa10c7ebf1927bc0c048ecc35b90b735f348d7c83079b peerIP:10.0.3.8 peerMac:02:42:0a:00:03:08 isLocal:false vtep:ip_bad_manager"
22:09:05  level=warning msg="Peer delete operation failed" error="could not delete fdb entry for nid:egzju64md3tehva0l7v5zvnht eid:ce81fffa23efd77465afa10c7ebf1927bc0c048ecc35b90b735f348d7c83079b into the sandbox:Search neighbor failed for IP ip_bad_manager, mac 02:42:0a:00:03:08, present in db:false"
22:09:07  level=warning msg="Entry was not in db: nid:egzju64md3tehva0l7v5zvnht eid:5b46da5aad17ee8486eb1d87f6eb9035a804a20be96d4d4950731411842efe08 peerIP:10.0.3.15 peerMac:02:42:0a:00:03:0f isLocal:false vtep:ip_bad_manager"
22:09:07  level=warning msg="Peer delete operation failed" error="could not delete fdb entry for nid:egzju64md3tehva0l7v5zvnht eid:5b46da5aad17ee8486eb1d87f6eb9035a804a20be96d4d4950731411842efe08 into the sandbox:Search neighbor failed for IP ip_bad_manager, mac 02:42:0a:00:03:0f, present in db:false"
22:09:12  level=warning msg="Entry was not in db: nid:egzju64md3tehva0l7v5zvnht eid:4fbf418c02bd9dd03b320a7ec4cce40e5f07ee30e0e2c574bf31909954a25652 peerIP:10.0.3.17 peerMac:02:42:0a:00:03:11 isLocal:false vtep:ip_bad_manager"
22:09:12  level=warning msg="Peer delete operation failed" error="could not delete fdb entry for nid:egzju64md3tehva0l7v5zvnht eid:4fbf418c02bd9dd03b320a7ec4cce40e5f07ee30e0e2c574bf31909954a25652 into the sandbox:Search neighbor failed for IP ip_bad_manager, mac 02:42:0a:00:03:11, present in db:false"
22:09:44  level=info msg="NetworkDB stats balthasar(b6904450a99f) - netID:wa717ln1jiyh9uvlmtzi9fpr5 leaving:false netPeers:4 entries:12 Queue qLen:0 netMsg/s:0"

Swarm (and Kubernetes) uses the Raft consensus algorithm for cluster management and state replication, and Raft relies on a low-latency network: managers must exchange heartbeats within tight timeouts, so a high-latency or lossy WAN link causes missed heartbeats, repeated leader elections (visible in your log as the Raft term climbing from 612 to 617), and nodes being marked down.
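You can watch this happen from a healthy manager while the link flaps; a quick sketch, assuming a shell with watch available:

# Poll manager reachability every 5 seconds from a healthy manager
watch -n 5 "docker node ls --format '{{.Hostname}} {{.Status}} {{.ManagerStatus}}'"

Whenever the remote node misses its heartbeats, you should see its status flip to Down or Unreachable right before the services restart.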

In my experience, running nodes spread across availability zones of a cloud provider works reliably, while spreading nodes across regions does not.

You might be better off running the cluster in a single data centre and using the remote VPS as a standalone Docker host.
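If you go that route, the steps would look roughly like this (hypothetical node name; a sketch, not tested against your setup):

# On a New Jersey manager: demote the Los Angeles node if it is still a manager
docker node demote <la_node_name>
# On the Los Angeles node itself: leave the swarm and run standalone
docker swarm leave
# Back on a New Jersey manager: remove the stale node entry
docker node rm <la_node_name>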

This topic has been discussed a couple of times in the past. You might find posts with additional insights, but none of them will give you a way to make a cross-region setup work the way you are trying to.