Unable to reconcile state of Kubernetes addons component

I have installed Docker 18.09 on a new RHEL 7.6 image. I am trying to install UCP 3.1.0.

I get this error:

ERRO[0676] Unable to successfully setup local node. Run "docker logs ucp-reconcile" for more details
FATA[0676] reconcile exited with non-zero status: 1

When I run docker logs ucp-reconcile I get this:
{"level":"info","msg":"Deploying addon kubedns","time":"2018-12-12T17:30:59Z"}
{"level":"info","msg":"Addon kubedns was deployed successfully","time":"2018-12-12T17:31:01Z"}
{"level":"info","msg":"Deploying addon ucp-controller","time":"2018-12-12T17:31:01Z"}
{"level":"info","msg":"Deploying addon ucp-metrics","time":"2018-12-12T17:31:02Z"}
{"level":"info","msg":"Deploying addon compose","time":"2018-12-12T17:31:03Z"}
{"level":"info","msg":"Checking installation state","time":"2018-12-12T17:31:03Z"}
{"level":"info","msg":"Install image with tag \"f1506355e297cd139205db6253d60227306fd699\" in namespace \"kube-system\"","time":"2018-12-12T17:31:03Z"}
{"level":"fatal","msg":"unable to reconcile state of Kubernetes addons component: error while deploying addon compose: context deadline exceeded","time":"2018-12-12T17:40:59Z"}

I am not sure why, and I can’t find anything online about this error yet.


I was getting the same issue on our swarm while updating UCP.
For us, we weren’t able to pull from Docker Hub either.
Once we updated the Docker daemon’s environment variables so it could pull through our proxy, the pull problem went away. I’m not sure this is the correct solution, but being able to reach Docker Hub resolved the issue for us.
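In case it helps, the usual way to set those variables is a systemd drop-in for the Docker daemon; roughly something like this (the proxy address is a placeholder, substitute your own):

sudo mkdir -p /etc/systemd/system/docker.service.d

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"

# reload systemd and restart the daemon so it picks up the proxy settings
sudo systemctl daemon-reload
sudo systemctl restart docker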

Are you installing the latest image or specifying 3.1.1 as the version? If it’s the latter, try installing 3.1.0 and then upgrading from within UCP.

That is how the issue was resolved for me. I couldn’t get 3.1.1 to install successfully. I got the same error you did. Installing 3.1.0 explicitly and then upgrading worked fine.
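For reference, pinning the version just means pointing the installer at an explicit image tag; the command is roughly this (the host address is a placeholder for your node’s IP):

docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.0 install \
  --host-address <node-ip> \
  --interactive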

Hi, I’m facing the same issue. Did you manage to find a solution? Please advise.

I’m installing UCP 3.1.1 on RHEL 7.5 (on Azure VMs). I even tried 3.1.0, but still the same error.

Thanks & Regards,
Habib

Same issue as everyone else. 3.1.0 and 3.1.1 are both broken. Going back to 3.0.7 works, but it then broke with all kinds of Kubernetes errors when I tried to upgrade from 3.0.7 to 3.1.1.

I’ve got a case open with Support; if I get any answers I’ll post back.

3.1.2 has solved the problem for me.

Thank you, 3.1.2 solved it for me too.

Hello Everyone,

I just got back from vacation/sickness (the flu). I will work on this issue today and see if your recommendations work.

I tried 3.1.2 and got the same error.

I can pull images with no problem. I have all the UCP images. I even created an http-proxy.conf file.

Here are the logs:

ERRO[0653] Unable to successfully setup local node. Run "docker logs ucp-reconcile" for more details
FATA[0653] reconcile exited with non-zero status: 1
[root@ucp-test ~]# docker logs ucp-reconcile
{"level":"info","msg":"Configuring node as agent with the following SANs: [kubernetes.default.svc kubernetes.default.svc.cluster.local 127.0.0.1 localhost proxy.local 172.17.0.1 kubernetes kubernetes.default ucp-controller.kube-system.svc.cluster.local compose-api.kube-system.svc 10.0.0.1]","time":"2019-01-14T15:15:25Z"}
{"level":"info","msg":"Reconciling state of component Docker Proxy","time":"2019-01-14T15:15:25Z"}
{"level":"info","msg":"Reconciling state of component Certificates","time":"2019-01-14T15:15:26Z"}
{"level":"info","msg":"Reconciling state of component Concurrent [Client CA Cluster CA Analytics Kubelet Kubernetes Proxy legacymetrics Concurrent [ucp-agent-service ucp-agent-win-service ucp-agent-s390x-service] interlockservice [etcd Exclusive RethinkDB Concurrent [eNZi Secret Kubernetes API Server] Concurrent [Swarm-Classic Manager Concurrent [eNZi API x86_64 service eNZi API s390x service] Concurrent [eNZi Worker x86_64 service eNZi Worker s390x service] Kubernetes Scheduler Kubernetes Controller Manager]]]","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"Reconciling components [etcd Exclusive RethinkDB Concurrent [eNZi Secret Kubernetes API Server] Concurrent [Swarm-Classic Manager Concurrent [eNZi API x86_64 service eNZi API s390x service] Concurrent [eNZi Worker x86_64 service eNZi Worker s390x service] Kubernetes Scheduler Kubernetes Controller Manager]]","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"Reconciling state of component etcd","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of legacymetrics component","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of ucp-agent-s390x-service component. This component will enable UCP on s390x linux nodes if they are added to the cluster","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of interlockservice component. This component will enable the interlock load balancing solution on the UCP cluster.","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of ucp-agent-service component. This component will enable UCP on x86_64 linux nodes if they are added to the cluster","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of ucp-agent-win-service component. This component will enable UCP on x86_64 windows nodes if they are added to the cluster","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of Concurrent [ucp-agent-service ucp-agent-win-service ucp-agent-s390x-service] component","time":"2019-01-14T15:15:28Z"}
{"level":"warning","msg":"Error when accessing /var/lib/docker/ucp/ucp-kv/member/snap: lstat /var/lib/docker/ucp/ucp-kv/member/snap: no such file or directory","time":"2019-01-14T15:15:28Z"}
{"level":"warning","msg":"Error when accessing /var/lib/docker/ucp/ucp-kv/datav3/member/snap: lstat /var/lib/docker/ucp/ucp-kv/datav3/member/snap: no such file or directory","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of Analytics component","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"Starting up ucp-kube-proxy container","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"Starting up ucp-kubelet container","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of Kubernetes Proxy component","time":"2019-01-14T15:15:28Z"}
{"level":"info","msg":"successfully reconciled state of Kubelet component","time":"2019-01-14T15:15:29Z"}
{"level":"info","msg":"successfully reconciled state of Client CA component","time":"2019-01-14T15:15:29Z"}
{"level":"info","msg":"successfully reconciled state of Cluster CA component","time":"2019-01-14T15:15:29Z"}
{"level":"info","msg":"Reconciling state of component Exclusive RethinkDB","time":"2019-01-14T15:15:31Z"}
{"level":"info","msg":"Creating the UCP database","time":"2019-01-14T15:15:36Z"}
{"level":"info","msg":"Waiting for database ucp to exist","time":"2019-01-14T15:15:36Z"}
{"level":"info","msg":"Creating initial collections","time":"2019-01-14T15:15:37Z"}
{"level":"info","msg":"Reconciling state of component Concurrent [eNZi Secret Kubernetes API Server]","time":"2019-01-14T15:15:37Z"}
{"level":"info","msg":"successfully reconciled state of eNZi Secret component","time":"2019-01-14T15:15:37Z"}
{"level":"info","msg":"Created a new Kubernetes master config and stored in etcd","time":"2019-01-14T15:15:37Z"}
{"level":"info","msg":"Starting up ucp-kube-apiserver container","time":"2019-01-14T15:15:37Z"}
{"level":"info","msg":"successfully reconciled state of Kubernetes API Server component","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"Reconciling state of component Concurrent [Swarm-Classic Manager Concurrent [eNZi API x86_64 service eNZi API s390x service] Concurrent [eNZi Worker x86_64 service eNZi Worker s390x service] Kubernetes Scheduler Kubernetes Controller Manager]","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"Service ucp-auth-api-s390x is desired to be running but is not running","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"Starting up ucp-kube-controller-manager container","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"Starting up ucp-kube-scheduler container","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"Service ucp-auth-api is desired to be running but is not running","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"Service ucp-auth-worker-s390x is desired to be running but is not running","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"Service ucp-auth-worker is desired to be running but is not running","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"successfully reconciled state of eNZi API s390x service component. This component will enable eNZi API servers on s390x linux nodes if they are added to the cluster","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"successfully reconciled state of eNZi Worker s390x service component. This component will enable eNZi workers on s390x linux nodes if they are added to the cluster","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"successfully reconciled state of Kubernetes Scheduler component","time":"2019-01-14T15:15:38Z"}
{"level":"info","msg":"successfully reconciled state of Kubernetes Controller Manager component","time":"2019-01-14T15:15:39Z"}
{"level":"info","msg":"successfully reconciled state of Swarm-Classic Manager component","time":"2019-01-14T15:15:39Z"}
{"level":"info","msg":"successfully reconciled state of eNZi Worker x86_64 service component. This component will enable eNZi workers on x86_64 linux nodes if they are added to the cluster","time":"2019-01-14T15:15:43Z"}
{"level":"info","msg":"successfully reconciled state of Concurrent [eNZi Worker x86_64 service eNZi Worker s390x service] component","time":"2019-01-14T15:15:43Z"}
{"level":"info","msg":"successfully reconciled state of eNZi API x86_64 service component. This component will enable eNZi API servers on x86_64 linux nodes if they are added to the cluster","time":"2019-01-14T15:15:45Z"}
{"level":"info","msg":"successfully reconciled state of Concurrent [eNZi API x86_64 service eNZi API s390x service] component","time":"2019-01-14T15:15:45Z"}
{"level":"info","msg":"successfully reconciled state of [etcd Exclusive RethinkDB Concurrent [eNZi Secret Kubernetes API Server] Concurrent [Swarm-Classic Manager Concurrent [eNZi API x86_64 service eNZi API s390x service] Concurrent [eNZi Worker x86_64 service eNZi Worker s390x service] Kubernetes Scheduler Kubernetes Controller Manager]] component","time":"2019-01-14T15:15:45Z"}
{"level":"info","msg":"Reconciling state of component UCP Controller","time":"2019-01-14T15:15:45Z"}
{"level":"info","msg":"Reconciling state of component Kubernetes CNI Plugin","time":"2019-01-14T15:15:54Z"}
{"level":"info","msg":"Deploying addon calico","time":"2019-01-14T15:15:54Z"}
{"level":"info","msg":"Waiting for kubernetes node ucp-test.novalocal to become ready","time":"2019-01-14T15:16:03Z"}
{"level":"info","msg":"Addon calico was deployed successfully","time":"2019-01-14T15:16:03Z"}
{"level":"info","msg":"Reconciling state of component Kubernetes addons","time":"2019-01-14T15:16:03Z"}
{"level":"info","msg":"Deploying addon kubedns","time":"2019-01-14T15:16:03Z"}
{"level":"info","msg":"Addon kubedns was deployed successfully","time":"2019-01-14T15:16:05Z"}
{"level":"info","msg":"Deploying addon ucp-controller","time":"2019-01-14T15:16:05Z"}
{"level":"info","msg":"Deploying addon ucp-metrics","time":"2019-01-14T15:16:06Z"}
{"level":"info","msg":"Deploying addon compose","time":"2019-01-14T15:16:07Z"}
{"level":"info","msg":"Checking installation state","time":"2019-01-14T15:16:07Z"}
{"level":"info","msg":"Install image with tag \"e9c8673f4fd3df10a90e1542aa9bfde8e300f582\" in namespace \"kube-system\"","time":"2019-01-14T15:16:07Z"}
{"level":"fatal","msg":"unable to reconcile state of Kubernetes addons component: error while deploying addon compose: context deadline exceeded","time":"2019-01-14T15:26:04Z"}


I suspect it is a timing issue: the wait time on the reconciliation job is too short. Any chance you can try it on a faster VM?

I am having the same issue on Ubuntu 16.04 LTS with UCP versions 3.1.0, 3.1.1, and 3.1.2.

unable to reconcile state of Kubernetes addons component: error while deploying addon compose: context deadline exceeded
ucp-reconcile container exited with status code: 1
ERRO[0667] Unable to successfully setup local node. Run "docker logs ucp-reconcile" for more details 
FATA[0667] reconcile exited with non-zero status: 1

Clean up all Docker containers and images.
You can use this:

sudo docker swarm leave --force
sudo docker stop $(sudo docker ps -aq)
sudo docker rm $(sudo docker ps -aq) --force
sudo docker rmi $(sudo docker images -aq) --force
sudo docker network prune --force
sudo docker system prune --force

Use version 3.0.7 for UCP.

I was finally able to get it up.


Can you upgrade it to 3.1.2?

Once you have the UI up, you can do it from there. And yes, you can upgrade.
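If you’d rather not use the UI, the upgrade can also be run from the CLI in the same way as the install; roughly something like this (the version tag is just an example):

docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.2 upgrade \
  --interactive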

I know that. I mean were you able to upgrade once you had 3.0.7 working?

Yes, I was able to upgrade up to 3.1.1, but 3.1.2 didn’t connect on port 443.
The other problem is that even when I got it running, the node was always unhealthy and down.

I also couldn’t join another node to the swarm.

worldwire, I used a similar workaround, except I installed 3.1.0 and then upgraded. It worked fine after that, with no issues joining the nodes. Is everyone here who is experiencing this issue attempting to use cloud integration?


I will try the same thing and let you know how it works.