Kubernetes failed to start (Docker Desktop on Win 10)

Hi, I recently rebooted my Win 10 machine, where I have an up-to-date Docker Desktop (updated 3 days ago).
The Docker Engine starts and I can see it in a green state, but Kubernetes is always in a red state with the message

kubernetes failed to start

I disabled Kubernetes, applied and restarted, then re-enabled it and applied and restarted again, but no luck: Kubernetes fails to start. I ran the diagnostics and sent a report with ID

09777C54-E71E-4BED-A514-5F81A9DCC7BD/20231009122250

What can be the cause of this and where should I investigate? In the UI there is no error message, nor any indication of what might be going wrong with Kubernetes.

Thanks in advance

This is a community forum. We can’t see the reports.

Can you use any kubectl command like

kubectl get pod --all-namespaces

?
If you can, the API is running and Docker Desktop just gave up waiting for all the system containers. If the API is not working, you can try docker commands instead, but the Kubernetes containers are hidden by default: you need to enable them in the settings, on the same page where you enabled Kubernetes itself, using the “Show system containers (advanced)” option. Then you can check the container logs using docker logs.
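
For example (a minimal sketch; the exact k8s_... container names must come from your own docker ps output):

docker ps --filter "name=k8s_"
docker logs <one of the k8s_... names listed above>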

EDIT:

I turned on my Windows machine, tried Kubernetes again, and it failed the same way. I will try to solve it.

I entered the VM using the following command:

docker run --rm -it --privileged --pid host nicolaka/netshoot nsenter --all -t 1

and hit Reset Kubernetes while watching the output of ps aux in the terminal (in the VM). I saw a kubeadm init command start and then stop, so I copied the command out to run it manually, but first I had to find out where to run it. Since I had the process ID from the ps aux output, I used the same nsenter command I had used to get into the VM, only with that PID instead of 1 (I recap the whole route at the end of this post). That is how I realized it was actually the container that runs the Docker daemon. So I ran this command (in the VM)

ctr -n services.linuxkit task exec -t --exec-id test 02-docker sh

and executed the kubeadm command manually

kubeadm init --ignore-preflight-errors=all --config /etc/kubeadm/kubeadm.yaml

Eventually I got some error message like:

could not find officially supported version of etcd for Kubernetes 1.27.2, falling back to the nearest etcd version (3.5.7-0)

and

[kubelet-check] Initial timeout of 40s passed

So there is some kind of bug here. I don’t have more time today, but if you want, you can report this issue on GitHub referring to this forum topic.
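
To recap the whole route for anyone who wants to try it (the PID 1234 below is a hypothetical example; use whatever ps aux shows for kubeadm):

# 1. From the host, enter the VM (PID 1 is the VM's init process):
docker run --rm -it --privileged --pid host nicolaka/netshoot nsenter --all -t 1

# 2. Inside the VM, find the kubeadm process:
ps aux | grep kubeadm

# 3. Entering that process's namespaces (nsenter --all -t 1234) lands in the container
#    that runs the Docker daemon, which you can also reach directly:
ctr -n services.linuxkit task exec -t --exec-id test 02-docker sh

# 4. There you can run the kubeadm command manually and watch the errors:
kubeadm init --ignore-preflight-errors=all --config /etc/kubeadm/kubeadm.yaml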

I couldn’t start Kubernetes on macOS either; at least some Kubernetes containers could start, but not the API server.

docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED          STATUS          PORTS     NAMES
a745d2cb4251   97b0bebd519d                     "kube-controller-man…"   7 seconds ago    Up 6 seconds              k8s_kube-controller-manager_kube-controller-manager-docker-desktop_kube-system_861008677140df5bf14684241a098812_2
40d4eb129b1f   registry.k8s.io/kube-scheduler   "kube-scheduler --au…"   55 seconds ago   Up 55 seconds             k8s_kube-scheduler_kube-scheduler-docker-desktop_kube-system_42b55bbd22a41e1e397a84692d259b1e_0
5ace095a1553   registry.k8s.io/pause:3.9        "/pause"                 3 minutes ago    Up 3 minutes              k8s_POD_kube-scheduler-docker-desktop_kube-system_42b55bbd22a41e1e397a84692d259b1e_0
f1affc3df026   registry.k8s.io/pause:3.9        "/pause"                 3 minutes ago    Up 3 minutes              k8s_POD_kube-controller-manager-docker-desktop_kube-system_861008677140df5bf14684241a098812_0
0e0511e06147   registry.k8s.io/pause:3.9        "/pause"                 3 minutes ago    Up 3 minutes              k8s_POD_kube-apiserver-docker-desktop_kube-system_8b71cd624d40d0ffecf5822890467a47_0
69bfdbcd8160   registry.k8s.io/pause:3.9        "/pause"                 3 minutes ago    Up 3 minutes              k8s_POD_etcd-docker-desktop_kube-system_daab091f7b57c624d51aae7ab076cb00_0
docker logs k8s_kube-controller-manager_kube-controller-manager-docker-desktop_kube-system_861008677140df5bf14684241a098812_2
I1009 19:15:21.201935       1 serving.go:348] Generated self-signed cert in-memory
I1009 19:15:21.411977       1 controllermanager.go:178] Version: v1.25.9
I1009 19:15:21.411999       1 controllermanager.go:180] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1009 19:15:21.412657       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I1009 19:15:21.412702       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1009 19:15:21.412776       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/run/config/pki/front-proxy-ca.crt"
I1009 19:15:21.412776       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/run/config/pki/ca.crt"
F1009 19:15:39.276233       1 controllermanager.go:221] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get "https://192.168.65.4:6443/healthz": dial tcp 192.168.65.4:6443: connect: no route to host
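
The last line means the controller manager cannot even reach the API server's address. A quick sketch to test that route from a container (expect the same “no route to host” if networking is the culprit):

docker run --rm nicolaka/netshoot curl -sk --max-time 5 https://192.168.65.4:6443/healthz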

When I issue
kubectl get pod --all-namespaces
I get

E1009 22:19:47.039938 19384 memcache.go:265] couldn't get current server API group list: Get "https://kubernetes.docker.internal:6443/api?timeout=32s": EOF
E1009 22:19:57.171316 19384 memcache.go:265] couldn't get current server API group list: Get "https://kubernetes.docker.internal:6443/api?timeout=32s": EOF
E1009 22:20:07.280765 19384 memcache.go:265] couldn't get current server API group list: Get "https://kubernetes.docker.internal:6443/api?timeout=32s": EOF
E1009 22:20:17.412356 19384 memcache.go:265] couldn't get current server API group list: Get "https://kubernetes.docker.internal:6443/api?timeout=32s": EOF
E1009 22:20:27.560688 19384 memcache.go:265] couldn't get current server API group list: Get "https://kubernetes.docker.internal:6443/api?timeout=32s": EOF
Unable to connect to the server: EOF

I did turn on the hidden system containers.

I am not sure: once I issue
docker run --rm -it --privileged --pid host nicolaka/netshoot nsenter --all -t 1
how can I access the VM, and what is the relation between Kubernetes and this VM?

Also, I get the same output when I issue
docker ps
and I see the API server up for only 7 seconds.

You are already in the VM. You can learn more about Docker Desktop from my presentation, which you can find here in a blog post as well.

The difference is that the name of the container that runs the docker daemon has changed from docker to 02-docker.
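
If you want to verify that name, you can list the services containers inside the VM with containerd's own client, the same ctr and namespace used above:

ctr -n services.linuxkit containers list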

Ok, I will check this tomorrow morning as it is too late now in Paris. But what is strange, as you can notice in the last snapshot, is that I have 2 API server containers: one goes down after 7 seconds, the second is up and running.

e4d4d5e5f127   97801f839490                "kube-apiserver --ad…"   9 seconds ago   Up 7 seconds             k8s_kube-apiserver_kube-apiserver-docker-desktop_kube-system_8e9132c31407bb3ec5eabb4d9d72cbf3_274
ed90a6d4f14a   registry.k8s.io/pause:3.9   "/pause"                 8 hours ago     Up 8 hours               k8s_POD_kube-apiserver-docker-desktop_kube-system_8e9132c31407bb3ec5eabb4d9d72cbf3_39

and when I do docker logs k8s_POD_kube-controller-manager-docker-desktop_kube-system_6c75172049c399028f4c1d6e23f5dbc7_39

I get nothing, in spite of the fact that it is running.

That’s a pause container for the pod. It will not log anything; it just creates the kernel namespaces to which the other containers in the pod can connect. You could see more in the API server container in the first line.
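
To see why it keeps failing, read the log of the kube-apiserver container itself, using the name from your docker ps output above (the numeric suffix changes on every restart, so copy the current one):

docker logs k8s_kube-apiserver_kube-apiserver-docker-desktop_kube-system_8e9132c31407bb3ec5eabb4d9d72cbf3_274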

Note that I shared what I did only so you can learn how Kubernetes in Docker Desktop works, but as I wrote, it didn’t work for me either, so I doubt we will solve it by checking more logs. Kubernetes didn’t start on either Windows or macOS, so it is most likely a bug.

On the other hand, I didn’t see the API server started, or maybe I just didn’t notice and it was deleted by the time I checked the list of containers. I don’t know.

rimelek, I finally did “Reset Kubernetes Cluster” and presto, it is working now; I was able to issue kubectl commands against it. I think last week’s update did something wrong or deleted something.

Great. Thanks for the feedback. I tried resetting Kubernetes too, but it didn’t help me. I don’t use Kubernetes in Docker Desktop anyway, so that’s okay :slight_smile:

Yes, it is OK now, and I started looking at the video about fixing containers. But I am wondering: if it happens again, is there a procedure or instructions on how to unearth the issue?

These issues shouldn’t happen, and when they do, there is no instruction. If there were, the issue would probably not happen in the first place, since the cause would be obvious. The tricks I showed in the video are not for everyone, but for people who really want to understand Docker Desktop and the technology behind it, so they can solve some issues faster than people who are waiting for support on GitHub or from Docker Support. Nothing in the virtual machine of Docker Desktop was meant to be used by end users. It is just accessible to people who are interested.

EDIT:

I tried resetting again and now it worked. Maybe something was fixed remotely and Docker Desktop could download and configure the requirements.

This is my case: I am a DevOps expert, not an end user. I do a lot of POCs and help companies start their journeys with new technologies. I have used Docker since 2015 and have solved many problems with it, as well as with Kubernetes on Ubuntu. I recently started enabling Kubernetes in Docker Desktop for demos, rapid POCs, and teaching some dev team members.

Good day! Thank you very much for your hints. My Kubernetes worked right after I clicked on Reset Kubernetes Cluster. In my case, I had accidentally deleted some images used by Kubernetes, or maybe it crashed somehow after an update. Anyway, it is worth giving Reset a try to check whether Kubernetes works again.

Hi @rimelek, this started again a couple of days ago. Now, whatever I do (enable/disable Kubernetes, Reset Kubernetes), it always fails to start.

This time I could not reproduce the issue. Try to enable “Show system containers” and check the container logs if you find any Kubernetes container.
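
Include stopped containers too, since a container that failed to start may already have exited. For example:

docker ps -a --filter "name=k8s_" --filter "status=exited"
docker logs <name of an exited k8s_... container>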

Hello again rimelek, well, I did the following :slight_smile:
1 - I upgraded Docker from 4.25 to 4.26; it did not help.
2 - I used Purge/Clean data (which is a radical solution); once it finished, Docker and Kubernetes STARTED again, but I lost all my images and containers.

I am more than sure that the Kubernetes shipped with Docker Desktop is not stable, and we should have a procedure, or logs, that let us investigate why Kubernetes fails to start every now and then.

Thank you for sharing your workaround. I wouldn’t call it a solution, but at least it works again.

Make sure you always enable showing the system containers so you can check the container logs directly. That is how I once found out why Kubernetes couldn’t start.

You can also try to use kubectl if the API server is running at least.
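
For example (standard kubectl commands; the second one queries the API server's /healthz endpoint directly):

kubectl cluster-info
kubectl get --raw /healthz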

Unfortunately, people also report that the Docker Engine itself sometimes can’t start… I still don’t know why that happens, as it has never happened to me, but it could be related to the virtual machine in which Docker and Kubernetes run and its communication with the host, which is different on each platform. Let’s hope someone can find a cause soon…

Hi Guys,

I switched on the WSL 2 backend (it is disabled by default) and selected the Linux distribution.
It works fine now.
Win 10 and Docker 4.28.1.
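
In case it helps someone, the WSL state can also be checked and switched from PowerShell (standard wsl.exe commands; wsl --status needs a fairly recent WSL):

wsl --status
wsl --list --verbose
wsl --set-default-version 2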