
Unable to reconcile state of Kubernetes addons component

Thank you, 3.1.2 solved it for me too.

Hello Everyone,

I just got back from vacation / sickness (flu). I will work on this issue today and see if your recommendations work.

I tried to use 3.1.2 and got the same error.

I can pull images with no problem. I have all the UCP images. I even created an http-proxy.conf file.
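In case it helps, the file I mean is the standard systemd proxy drop-in for the Docker daemon; a minimal sketch (the proxy address and NO_PROXY list below are placeholders, not my real values):

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"

followed by sudo systemctl daemon-reload and sudo systemctl restart docker.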

Here are the logs:

ERRO[0653] Unable to successfully setup local node. Run "docker logs ucp-reconcile" for more details
FATA[0653] reconcile exited with non-zero status: 1
[root@ucp-test ~]# docker logs ucp-reconcile
{"level":"info","msg":"Configuring node as agent with the following SANs: [kubernetes.default.svc kubernetes.default.svc.cluster.local 127.0.0.1 localhost proxy.local 172.17.0.1 kubernetes kubernetes.default ucp-controller.kube-system.svc.cluster.local compose-api.kube-system.svc 10.0.0.1]","time":"2019-01-14T15:15:25Z"}
{"level":"info","msg":"Reconciling state of component Docker Proxy","time":"2019-01-14T15:15:25Z"}
{"level":"info","msg":"Reconciling state of component Certificates","time":"2019-01-14T15:15:26Z"}
{"level":"info","msg":"Reconciling state of component Concurrent [Client CA Cluster CA Analytics Kubelet Kubernetes Proxy legacymetrics Concurrent [ucp-agent-service ucp-agent-win-service ucp-agent-s390x-service] interlockservice [etcd Exclusive RethinkDB Concurrent [eNZi Secret Kubernetes API Server] Concurrent [Swarm-Classic Manager Concurrent [eNZi API x86_64 service eNZi API s390x service] Concurrent [eNZi Worker x86_64 service eNZi Worker s390x service] Kubernetes Scheduler Kubernetes Controller Manager]]]","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"Reconciling components [etcd Exclusive RethinkDB Concurrent [eNZi Secret Kubernetes API Server] Concurrent [Swarm-Classic Manager Concurrent [eNZi API x86_64 service eNZi API s390x service] Concurrent [eNZi Worker x86_64 service eNZi Worker s390x service] Kubernetes Scheduler Kubernetes Controller Manager]]","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"Reconciling state of component etcd","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of legacymetrics component","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of ucp-agent-s390x-service component. This component will enable UCP on s390x linux nodes if they are added to the cluster","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of interlockservice component. This component will enable the interlock load balancing solution on the UCP cluster.","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of ucp-agent-service component. This component will enable UCP on x86_64 linux nodes if they are added to the cluster","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of ucp-agent-win-service component. This component will enable UCP on x86_64 windows nodes if they are added to the cluster","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of Concurrent [ucp-agent-service ucp-agent-win-service ucp-agent-s390x-service] component","time":"2019-01-14T15:15:28Z"}
{"level":"warning","msg":"Error when accessing /var/lib/docker/ucp/ucp-kv/member/snap: lstat /var/lib/docker/ucp/ucp-kv/member/snap: no such file or directory","time":"2019-01-14T15:15:28Z"}
{"level":"warning","msg":"Error when accessing /var/lib/docker/ucp/ucp-kv/datav3/member/snap: lstat /var/lib/docker/ucp/ucp-kv/datav3/member/snap: no such file or directory","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of Analytics component","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"Starting up ucp-kube-proxy container","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"Starting up ucp-kubelet container","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of Kubernetes Proxy component","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of Kubelet component","time":"2019-01-14T15:15:29Z"}
{"level":"info","msg":"successfully reconciled state of Client CA component","time":"2019-01-14T15:15:29Z"}
{"level":"info","msg":"successfully reconciled state of Cluster CA component","time":"2019-01-14T15:15:29Z"}
{"level":"info","msg":"Reconciling state of component Exclusive RethinkDB","time":"2019-01-14T15:15:31Z"}
{"level":"info","msg":"Creating the UCP database","time":"2019-01-14T15:15:36Z"}
{"level":"info","msg":"Waiting for database ucp to exist","time":"2019-01-14T15:15:36Z"}
{"level":"info","msg":"Creating initial collections","time":"2019-01-14T15:15:37Z"}
{"level":"info","msg":"Reconciling state of component Concurrent [eNZi Secret Kubernetes API Server]","time":"2019-01-14T15:15:37Z"}
{"level":"info","msg":"successfully reconciled state of eNZi Secret component","time":"2019-01-14T15:15:37Z"}
{"level":"info","msg":"Created a new Kubernetes master config and stored in etcd","time":"2019-01-14T15:15:37Z"}
{"level":"info","msg":"Starting up ucp-kube-apiserver container","time":"2019-01-14T15:15:37Z"}
{"level":"info","msg":"successfully reconciled state of Kubernetes API Server component","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"Reconciling state of component Concurrent [Swarm-Classic Manager Concurrent [eNZi API x86_64 service eNZi API s390x service] Concurrent [eNZi Worker x86_64 service eNZi Worker s390x service] Kubernetes Scheduler Kubernetes Controller Manager]","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"Service ucp-auth-api-s390x is desired to be running but is not running","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"Starting up ucp-kube-controller-manager container","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"Starting up ucp-kube-scheduler container","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"Service ucp-auth-api is desired to be running but is not running","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"Service ucp-auth-worker-s390x is desired to be running but is not running","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"Service ucp-auth-worker is desired to be running but is not running","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"successfully reconciled state of eNZi API s390x service component. This component will enable eNZi API servers on s390x linux nodes if they are added to the cluster","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"successfully reconciled state of eNZi Worker s390x service component. This component will enable eNZi workers on s390x linux nodes if they are added to the cluster","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"successfully reconciled state of Kubernetes Scheduler component","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"successfully reconciled state of Kubernetes Controller Manager component","time":"2019-01-14T15:15:39Z"}
{"level":"info","msg":"successfully reconciled state of Swarm-Classic Manager component","time":"2019-01-14T15:15:39Z"}
{"level":"info","msg":"successfully reconciled state of eNZi Worker x86_64 service component. This component will enable eNZi workers on x86_64 linux nodes if they are added to the cluster","time":"2019-01-14T15:15:43Z"}
{"level":"info","msg":"successfully reconciled state of Concurrent [eNZi Worker x86_64 service eNZi Worker s390x service] component","time":"2019-01-14T15:15:43Z"}
{"level":"info","msg":"successfully reconciled state of eNZi API x86_64 service component. This component will enable eNZi API servers on x86_64 linux nodes if they are added to the cluster","time":"2019-01-14T15:15:45Z"}
{"level":"info","msg":"successfully reconciled state of Concurrent [eNZi API x86_64 service eNZi API s390x service] component","time":"2019-01-14T15:15:45Z"}
{"level":"info","msg":"successfully reconciled state of [etcd Exclusive RethinkDB Concurrent [eNZi Secret Kubernetes API Server] Concurrent [Swarm-Classic Manager Concurrent [eNZi API x86_64 service eNZi API s390x service] Concurrent [eNZi Worker x86_64 service eNZi Worker s390x service] Kubernetes Scheduler Kubernetes Controller Manager]] component","time":"2019-01-14T15:15:45Z"}
{"level":"info","msg":"Reconciling state of component UCP Controller","time":"2019-01-14T15:15:45Z"}
{"level":"info","msg":"Reconciling state of component Kubernetes CNI Plugin","time":"2019-01-14T15:15:54Z"}
{"level":"info","msg":"Deploying addon calico","time":"2019-01-14T15:15:54Z"}
{"level":"info","msg":"Waiting for kubernetes node ucp-test.novalocal to become ready","time":"2019-01-14T15:16:03Z"}
{"level":"info","msg":"Addon calico was deployed successfully","time":"2019-01-14T15:16:03Z"}
{"level":"info","msg":"Reconciling state of component Kubernetes addons","time":"2019-01-14T15:16:03Z"}
{"level":"info","msg":"Deploying addon kubedns","time":"2019-01-14T15:16:03Z"}
{"level":"info","msg":"Addon kubedns was deployed successfully","time":"2019-01-14T15:16:05Z"}
{"level":"info","msg":"Deploying addon ucp-controller","time":"2019-01-14T15:16:05Z"}
{"level":"info","msg":"Deploying addon ucp-metrics","time":"2019-01-14T15:16:06Z"}
{"level":"info","msg":"Deploying addon compose","time":"2019-01-14T15:16:07Z"}
{"level":"info","msg":"Checking installation state","time":"2019-01-14T15:16:07Z"}
{"level":"info","msg":"Install image with tag \"e9c8673f4fd3df10a90e1542aa9bfde8e300f582\" in namespace \"kube-system\"","time":"2019-01-14T15:16:07Z"}
{"level":"fatal","msg":"unable to reconcile state of Kubernetes addons component: error while deploying addon compose: context deadline exceeded","time":"2019-01-14T15:26:04Z"}


I have my suspicions that it is a timing issue. The wait time on the reconciliation job is too short. Any chance you can try it on a speedier VM?

I am having the same issue on Ubuntu 16.04 LTS with UCP versions 3.1.0, 3.1.1, and 3.1.2.

unable to reconcile state of Kubernetes addons component: error while deploying addon compose: context deadline exceeded
ucp-reconcile container exited with status code: 1
ERRO[0667] Unable to successfully setup local node. Run "docker logs ucp-reconcile" for more details 
FATA[0667] reconcile exited with non-zero status: 1

Clean all docker containers and images.
You can use this:

sudo docker swarm leave --force                     # leave the swarm (forces a manager to leave)
sudo docker stop $(sudo docker ps -aq)              # stop all containers
sudo docker rm $(sudo docker ps -aq) --force        # remove all containers
sudo docker rmi $(sudo docker images -aq) --force   # remove all images (the UCP images will be pulled again)
sudo docker network prune                           # remove unused networks (prompts to confirm)
sudo docker system prune --force                    # clean up whatever is left over

Use version 3.0.7 for UCP.
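For reference, the 3.0.7 install invocation is roughly the standard one from the UCP docs (the node IP below is a placeholder; adjust it for your environment):

sudo docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.0.7 install \
  --host-address <node-private-ip> \
  --interactive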

I was able to get it up and running. Finally.


Can you upgrade it to 3.1.2?

Once you have the UI, you can do it from there. And yeah you can upgrade.
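If you'd rather do it from the CLI instead of the UI, the upgrade is roughly (version tag shown as an example):

sudo docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.2 upgrade --interactive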

I know that. I mean, were you able to upgrade once you had 3.0.7 working?

Yes, I was able to upgrade, but only up to 3.1.1; 3.1.2 didn't connect on port 443.
The other problem is that even when I got it running, the node was always unhealthy and down.

I couldn't join another node to the swarm network.

worldwire, I used a similar workaround except I installed 3.1.0 and then upgraded. It worked fine after that. No issues joining the nodes. Is everyone here experiencing this issue attempting to use cloud integration?


I will try the same thing and will let you know how it works.

I had to download Docker UCP 3.0.7 and upgrade to 3.1.2; 3.1.2, 3.1.1, and 3.1.0 never worked, BUT my node has been down the whole time.
The message is:

Calico-node pod is unhealthy: calico-node pod is in phase Pending, unexpected calico-node pod condition Ready:False, calico-node pod container calico-node is not running with state {&ContainerStateWaiting{Reason:CreateContainerConfigError,Message:host IP unknown; known addresses: ,} nil nil}
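If you want to dig into why calico-node is stuck, a couple of standard kubectl checks help (this assumes you can reach the cluster, e.g. via a UCP client bundle; the pod name suffix is a placeholder):

kubectl -n kube-system get pods -o wide
kubectl -n kube-system describe pod calico-node-<suffix>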


This is a most frustrating problem because it's work-stopping, appears to have no common cause (for me, setting the cloud-provider flag caused this issue; running without it was fine on every version), and there doesn't seem to be a reliable workaround.
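To be concrete, the only difference between my failing and working runs was that one flag; a sketch of what I mean (I'm writing it as --cloud-provider here, check ucp install --help for the exact spelling, and the provider value and version tag are placeholders):

# failed for me:
sudo docker container run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp:3.1.2 install --cloud-provider <provider> --interactive
# the identical command without --cloud-provider was fine on every version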

Ok I think I figured out the problem…

If you are behind a proxy, use the node's private IP address for your UCP Node Address.

I was using the floating IP address and it caused problems with our proxy.

Just don't specify the host_ip_address in the command; the host IP address is discovered automatically. If you still have an issue with UCP v3.1.2, try UCP v3.1.3.
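In other words, if you do pass an address when installing behind a proxy, point it at the node's private IP rather than the floating IP, or leave the flag off entirely; a sketch (the IP and version tag are placeholders):

sudo docker container run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp:3.1.3 install --host-address <node-private-ip> --interactive
# or omit --host-address and let the installer discover the address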

I was trying to install version 3.1.4 on an AWS EC2 instance, but it doesn't work. I tried again with version 3.0.7 and it works fine. Has anybody found a solution for this yet? Thanks

Thanks. Your steps helped me as well.