How can I stop deleting and reinstalling Docker for Mac because Kubernetes won't start anymore?

I am getting REALLY tired of having to completely remove Docker Desktop for Mac every time Kubernetes gets stuck at “Starting” forever. I have investigated and searched on Google and have found three things:

  1. Nobody knows why this is happening.
  2. This has been happening for years now.
  3. The only way to resolve this is a complete wipe and reinstall.

So my question is: what is causing this, and how do I get Docker for Mac running properly again, including Kubernetes?

  1. That is probably true
  2. I don’t know about that, but possible
  3. I am sure it is not true, because I have had this problem multiple times recently. In some cases it happened multiple times in a day, when I was changing configurations and installing alternative software. The first time I gave up, but some days later I tried again and it worked.

This is what I do when Kubernetes can’t start:

  1. Reset Kubernetes.
  2. Disable Kubernetes.
  3. Enable Kubernetes.
  4. If the above doesn’t help, I disable Kubernetes again and stop Docker.
  5. I start Docker again.
  6. Go to the settings and enable Kubernetes. At this point it usually works.

To be honest, I am not sure whether it is Kubernetes that cannot start, or whether Docker Desktop just cannot tell me that it has already started. Last time I saw “Starting” for minutes, so I went to the terminal and listed the containers. Kubernetes was actually running. Then I went back to the Desktop and it showed me “Running”.
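If you want to do the same check from the terminal, this is roughly what I run. It is a sketch assuming you have enabled “Show system containers (advanced)” in the Kubernetes settings; without that option, docker ps hides these containers, and the k8s_ naming applies to the Docker Desktop versions I used:

# List the Kubernetes system containers (they are named k8s_...)
docker ps --filter name=k8s_

# Or ask the cluster directly whether the API server responds
kubectl get nodes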

I still don’t know what the reason for this issue is, but I don’t remember more than one case when I had to reinstall Docker Desktop on my Mac.

So I recommend enabling the option to show the Kubernetes system containers and checking the containers from the terminal. If you see that everything is running but my solution doesn’t work for you, you can also delete ~/.kube/config and try the steps again. It should recreate the config file, but I am not sure about that, so don’t start with this.
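If you do try the config file route, a safer variant is to move the file aside instead of deleting it, so you can restore it if it is not recreated. This assumes Docker Desktop rewrites the docker-desktop entries when Kubernetes is re-enabled, which I believe is the case but have not verified on every version:

# Move the kubeconfig aside so it can be restored if needed
mv ~/.kube/config ~/.kube/config.bak
# ...disable and re-enable Kubernetes in Docker Desktop...
# If no new config appears, restore the old one:
# mv ~/.kube/config.bak ~/.kube/config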

I hope the steps I described above will help you too.

I had a new experience with Kubernetes and Docker Desktop. It might help you too. Kubernetes could not start according to the Desktop, and the indicator even went red, showing that it had failed. So I ran

kubectl get node

The node was ready, so I ran

kubectl get all --all-namespaces

There I saw that some pods could not start:

NAMESPACE     NAME                                         READY   STATUS                   RESTARTS        AGE
kube-system   pod/coredns-6d4b75cb6d-nhxh4                 1/1     Running                  2 (6m25s ago)   51d
kube-system   pod/coredns-6d4b75cb6d-x2q5r                 1/1     Running                  2 (6m25s ago)   51d
kube-system   pod/etcd-docker-desktop                      1/1     Running                  2 (6m25s ago)   51d
kube-system   pod/kube-apiserver-docker-desktop            1/1     Running                  2 (6m25s ago)   51d
kube-system   pod/kube-controller-manager-docker-desktop   1/1     Running                  2 (6m25s ago)   51d
kube-system   pod/kube-proxy-kx498                         1/1     Running                  2 (6m25s ago)   51d
kube-system   pod/kube-scheduler-docker-desktop            1/1     Running                  2 (6m25s ago)   51d
kube-system   pod/storage-provisioner                      0/1     Error                    0               51d
kube-system   pod/vpnkit-controller                        0/1     ContainerStatusUnknown   1 (48d ago)     51d
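As a side note, if the list is long, you can show only the pods that are not in the Running phase (assuming your kubectl supports field selectors):

kubectl get pods --all-namespaces --field-selector status.phase!=Running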

Since no container was running in those pods, I couldn’t check the container logs, but I could check the description of those pods, including the events:

kubectl describe -n kube-system pod/storage-provisioner 

At the bottom of the output were the last events:

  Normal   Started         48d   kubelet  Started container storage-provisioner
  Warning  Evicted         48d   kubelet  The node was low on resource: ephemeral-storage.
  Normal   Killing         48d   kubelet  Stopping container storage-provisioner

As you can see, that was a long time ago, and there were no other events (only older ones). I could not start these pods, so I deleted them:

kubectl delete -n kube-system pod/storage-provisioner 
kubectl delete -n kube-system pod/vpnkit-controller

There was nothing like a deployment to recreate them, so I restarted Docker Desktop, and Kubernetes was green again.
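If you want to check in advance whether deleting a pod like this is safe, you can look at its owner references; empty output means there is no controller that would recreate the pod automatically. A sketch, using the same pod name as above:

kubectl get pod storage-provisioner -n kube-system -o jsonpath='{.metadata.ownerReferences}'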

I remember there was a time when I didn’t have any free space because of a vulnerability-checker Docker Desktop extension that used about 40 gigabytes, and the last error message indicates that those containers stopped because of the lack of space. I guess other errors could also cause some pods not to start. So next time, check whether Kubernetes itself works but one or two pods can’t start; then you can fix those pods and restart Docker Desktop.
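If you suspect a disk space problem like that, these standard Docker commands are probably the place to start. Note that docker system prune deletes stopped containers, unused networks and dangling images, so read its confirmation prompt carefully:

# Show how much space images, containers, volumes and build cache use
docker system df

# Reclaim space from unused data
docker system prune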


This works perfectly. Thank you for the answer; it saved a lot of my time. Before this answer, I had to reset/purge the full cluster every 3-4 days to get it working!

Thanks for this suggestion; this approach worked for me. I only deleted the docker-desktop entries (I left all the other entries alone, since Docker Desktop was the only cluster with an issue), then reset my Kubernetes cluster in Docker Desktop.

Aside: if you don’t want to manually edit your ~/.kube/config file, I think you can run the following commands:

kubectl config delete-context docker-desktop
kubectl config delete-cluster docker-desktop
kubectl config delete-user docker-desktop
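Afterwards you can confirm that the docker-desktop entries are gone before resetting the cluster:

kubectl config get-contexts

Resetting Kubernetes in Docker Desktop should then write fresh docker-desktop entries into ~/.kube/config.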

Good idea. Thank you for the commands.