Docker Community Forums

Share and learn in the Docker community.

Found that need to use ucp-swarm-node-certs on UCP controller to enable multi-host networking


(Clarence Ho) #1

I tried to follow the guide below to enable multi-host networking on my UCP cluster:
http://ucp-beta-docs.s3-website-us-west-1.amazonaws.com/networking/

As described in the document, I added the following startup options to the docker daemon for connecting to the KV store:
--cluster-advertise em2:12376 --cluster-store etcd://ucp.skywidesoft.com:12379 --cluster-store-opt kv.cacertfile=/var/lib/docker/discovery_certs/ca.pem --cluster-store-opt kv.certfile=/var/lib/docker/discovery_certs/cert.pem --cluster-store-opt kv.keyfile=/var/lib/docker/discovery_certs/key.pem

However, I found that only engine nodes work. On the UCP controller, it causes the following error:
Registering as "202.181.203.18:12376" in discovery failed: client: etcd cluster is unavailable or mis-configured.

After some research and studying the documentation about the certs that UCP creates, I found that on the UCP controller I need to change the cert location to use the cert and key files from the volume "ucp-swarm-node-certs". I changed the Docker engine startup options as follows:
--cluster-advertise em2:12376 --cluster-store etcd://ucp.skywidesoft.com:12379 --cluster-store-opt kv.cacertfile=/var/lib/docker/ucp_discovery_certs/ca.pem --cluster-store-opt kv.certfile=/var/lib/docker/ucp_discovery_certs/cert.pem --cluster-store-opt kv.keyfile=/var/lib/docker/ucp_discovery_certs/key.pem

Note: I manually created the folder /var/lib/docker/ucp_discovery_certs and copied the files from ucp-swarm-node-certs into it.

After doing that and restarting the Docker daemon, I could connect the Docker engine to the UCP KV store container on the UCP controller without errors, and could create an overlay network.
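For reference, the same working options can also be expressed in /etc/docker/daemon.json rather than as startup flags, if your Docker engine version supports it (a sketch only; the cert paths assume the manually copied ucp_discovery_certs folder described above):

```json
{
  "cluster-advertise": "em2:12376",
  "cluster-store": "etcd://ucp.skywidesoft.com:12379",
  "cluster-store-opts": {
    "kv.cacertfile": "/var/lib/docker/ucp_discovery_certs/ca.pem",
    "kv.certfile": "/var/lib/docker/ucp_discovery_certs/cert.pem",
    "kv.keyfile": "/var/lib/docker/ucp_discovery_certs/key.pem"
  }
}
```

Either way, the daemon needs a restart for the cluster-store settings to take effect.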

Any comments are welcome
Clarence


(Vivek Saraswat) #2

Hey Clarence, just another thing to try out: when using multi-host networking, you have to enter all of the etcd IP addresses (master and replicas). This ensures that if the master etcd fails, UCP/swarm knows to look up a replica's KV store.

For example: --cluster-store etcd://[etcd_IP1:port],[etcd_IP2:port],[etcd_IP3:port]
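Filled in with hypothetical endpoints (the hostnames and port below are placeholders, not real UCP addresses), that flag would look like:

```
--cluster-store etcd://etcd1.example.com:12379,etcd2.example.com:12379,etcd3.example.com:12379
```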

It looks like we need to update the docs to reflect this. Try the above and let me know if it works for you.


(Clarence Ho) #3

Hi Vivek,

I understand that I can enter multiple KV addresses for HA purposes. I am at a testing stage and currently would like to test multi-host networking within a simple two-node Docker UCP setup. I just want to keep things simple and use the KV container on the controller node for multi-host networking.

I believe it's a cert issue when connecting from the Docker daemon to the KV store on the controller node. I simply found that the discovery certs documented online for connecting the Docker daemon to the etcd container only work on engine nodes, not on the controller.

Cheers
Clarence


(Jojojojo1234) #4

@skywideclarence you made my day buddy! thanks a zillion!

I assume you used my suggestion from Can't create networking

The /var/lib/docker/discovery_certs/ path works fine with UCP 0.5.0,
BUT
it seems UCP 0.6.0 changed it to /var/lib/docker/ucp_discovery_certs/.

I consider this to be a bug in UCP 0.6.0 and I will go ahead and file it. Here we go: https://github.com/docker/ucp_lab/issues/6

I’ve tested your workaround and can verify it works. Cheers again!


(Vivek Saraswat) #5

Hey folks, thanks for pointing this out. I’ll see that it gets looked at.