Node shows as available in kubectl when it's set to swarm only


(Sweharris) #1

I added a new worker node and switched it to mixed (swarm+k8s) mode. So far, so good. After a while, it showed up as Ready in kubectl get nodes.

From the UCP GUI I then changed the node to “swarm” mode. It now shows in the GUI as type Swarm. However, over 30 minutes later, kubectl get nodes still lists it as Ready.

The node is still running kube-related containers; I’m not sure whether that’s relevant or not:

% docker ps
CONTAINER ID        IMAGE                              COMMAND                  CREATED             STATUS                  PORTS                                                                       NAMES
1410bff53ffb        docker/ucp-calico-cni              "/install-cni.sh"        30 hours ago        Up 30 hours                                                                                         k8s_install-cni_calico-node-5v68r_kube-system_bdff804e-0113-11e8-ad6d-0242ac110011_0
fa6ca1da679b        docker/ucp-calico-node             "start_runit"            30 hours ago        Up 30 hours                                                                                         k8s_calico-node_calico-node-5v68r_kube-system_bdff804e-0113-11e8-ad6d-0242ac110011_0
5dd97ad36414        docker/ucp-pause:3.0.0-beta2       "/pause"                 30 hours ago        Up 30 hours                                                                                         k8s_POD_calico-node-5v68r_kube-system_bdff804e-0113-11e8-ad6d-0242ac110011_0
b4f5d09cff7d        docker/ucp-agent:3.0.0-beta2       "/bin/ucp-agent agent"   30 hours ago        Up 30 hours             2376/tcp                                                                    ucp-agent.dlimfr7yj6ahj68g0xm5ozajl.vqez8otr1dn5x5xpkdarqfj0j
ed39d9ee4a9c        docker/ucp-hyperkube:3.0.0-beta2   "kubelet --allow-p..."   30 hours ago        Up 30 hours                                                                                         ucp-kubelet
ed743735769b        docker/ucp-hyperkube:3.0.0-beta2   "kube-proxy --kube..."   30 hours ago        Up 30 hours                                                                                         ucp-kube-proxy
f4548f5321d2        docker/ucp-agent:3.0.0-beta2       "/bin/ucp-agent pr..."   30 hours ago        Up 30 hours (healthy)   0.0.0.0:6444->6444/tcp, 0.0.0.0:12378->12378/tcp, 0.0.0.0:12376->2376/tcp   ucp-proxy

The UCP node logs don’t show any activity related to this change.


(Alexmavr) #2

Hey there sweharris, this is indeed the expected behavior in Docker EE 2.0. All nodes appear as Ready in both the Swarm and the Kubernetes node inventories. However, when the orchestrator selection is toggled for a node, that node is no longer schedulable for the other orchestrator.

In the example you mentioned, the node may appear in both docker node ls and kubectl get nodes, but it is now missing the com.docker.ucp.orchestrator.kubernetes=true label. All Kubernetes workloads deployed in Docker EE automatically receive a nodeSelector against that label, so it is no longer possible to schedule Kubernetes workloads on that node.
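
If you want to see this from the CLI, here is a quick sketch (the <node-name> and <pod-name> placeholders are mine, and it assumes your kubectl is pointed at the UCP controller through a client bundle):

# Only nodes that still carry the orchestrator label are schedulable for Kubernetes:
% kubectl get nodes -l com.docker.ucp.orchestrator.kubernetes=true

# After the switch to swarm-only, the label should be gone from that node:
% kubectl get node <node-name> --show-labels

# Workloads deployed through UCP get the matching selector injected:
% kubectl get pod <pod-name> -o jsonpath='{.spec.nodeSelector}'

In other words, Ready in kubectl get nodes only tells you that the kubelet on the node is healthy; whether Kubernetes will actually place work there is governed by that label.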

I hope that clarifies things! Let us know if you have any further concerns.


(Sweharris) #3

That feels like a misfeature; when I use “kubectl” I expect to see only Kube resources…

Is there a documented “differences from upstream k8s” page somewhere? At the very least, this sort of behavior should be documented, to minimise surprises.