Unable to run embedded Kubernetes on Docker Desktop Mac

Docker Desktop - 4.18.0 (104112)
Kubernetes: v1.25.4
MacOS 13.2.1 (22D68)

I enabled the Kubernetes option and tried to use it. According to the Docker Dashboard it starts successfully, but the “Kubernetes” menu is still inactive in Docker:
[Screenshot 2023-04-18 at 14.57.20: the Kubernetes menu item greyed out]

When I run kubectl cluster-info I get:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

This error usually points to a missing Kubernetes config file. My guess was confirmed by kubectl config view, which returned an empty result:

apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

I checked that the config file still resides in my user folder at “.kube/config”. Setting the appropriate environment variable makes things work until reboot: export KUBECONFIG=/Users/Rage/.kube/config. With that set, kubectl cluster-info returns the right info until the next reboot.
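For now, my workaround is to persist that export in the shell profile so every new session picks it up. A sketch, assuming zsh (the macOS default shell) and that the config path below matches your user folder:

```shell
# Sketch: persist KUBECONFIG across sessions/reboots by exporting it
# from the zsh profile instead of setting it manually each time.
# (~/.zshrc and the config path are assumptions; adjust to your setup.)
PROFILE="$HOME/.zshrc"
LINE='export KUBECONFIG="$HOME/.kube/config"'
# Append the export only if it is not already present.
grep -qxF "$LINE" "$PROFILE" 2>/dev/null || echo "$LINE" >> "$PROFILE"
```

This only papers over the symptom, of course; normally Docker Desktop writes ~/.kube/config itself and no override should be needed.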

So what do I need to repair in Docker Desktop to make Kubernetes fully work again out of the box?
I tried enabling/disabling it, “Reset Kubernetes cluster”, and even fully uninstalling and reinstalling Docker, all without success.
I don’t want to break things inside Docker by acting on my own and end up with updates no longer working.

That’s odd. That menu item is available for me even when Kubernetes is not enabled. My macOS version is 13.3. Newer than yours but that small difference shouldn’t matter.

Let’s say something happened during the installation and your Kubernetes is actually running. In that case you could copy the admin config out from Docker Desktop:

docker run --rm -it --privileged --pid host ubuntu:20.04 \
  nsenter --all -t 1 \
    -- cat /etc/kubernetes/admin.conf > ~/.kube/config-internal

If your original config file is empty as you wrote, you can just rename config-internal to config and use kubectl. If that doesn’t help, then it is possible that Kubernetes is not running properly and Docker Desktop could not finish generating the config file. In that case, go back to the Docker Desktop settings where you enabled Kubernetes and also enable “Show system containers (advanced)”. Then you can use the docker command to see the containers.
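To make “see the containers” concrete: once system containers are visible, the kube-system containers appear in plain docker ps output, and a quick health check is to look for the API server. A small sketch (the k8s_kube-apiserver… name follows the usual kubeadm static-pod naming and is an assumption here):

```shell
# check_apiserver: reads container names on stdin and succeeds only if
# a kube-apiserver container is present among them.
check_apiserver() {
  grep -q 'kube-apiserver'
}

# Typical use once system containers are visible (sketch):
#   docker ps --format '{{.Names}}' | check_apiserver && echo "API server container is up"
```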

One more thing I can think of is that you have some beta features enabled, like “Use containerd for pulling and storing images”. I don’t think that should be a problem, but you can check it and disable it if it is enabled.

I also have the “Use Rosetta for x86/amd64 emulation on Apple Silicon” beta feature enabled. I wouldn’t disable it for now, but if nothing helps, you can try enabling it to make your config more similar to mine.

In the General settings I also have “VirtioFS” enabled below “Choose file sharing implementation for your containers”. VirtioFS is usually faster and better than the alternatives, so you could try enabling that too. I doubt it would make a difference in this case though.

Since I could not reproduce the issue of the grayed-out “Kubernetes” menu item, it could also be a bug which you can report here:

Hi, Rimelek! Thank you for your attention.
I executed this command, but the outcome was suspicious. I did all this under su, for sure.

sh-3.2# docker run --rm -it --privileged --pid host ubuntu:20.04 \
>   nsenter --all -t 1 \
>     -- cat /etc/kubernetes/admin.conf > ~/.kube/config-internal
Unable to find image 'ubuntu:20.04' locally
20.04: Pulling from library/ubuntu
8659cf1709ef: Pulling fs layer
8659cf1709ef: Download complete
8659cf1709ef: Pull complete
Digest: sha256:db8bf6f4fb351aa7a26e27ba2686cf35a6a409f65603e59d4c203e58387dc6b3
Status: Downloaded newer image for ubuntu:20.04

sh-3.2# cd ~/.kube/
sh-3.2# ls
cache		config-internal

sh-3.2# mv config-internal config

sh-3.2# kubectl get nodes
Unable to connect to the server: dial tcp: lookup kubernetes.docker.internal on 8.8.8.8:53: no such host

So it still didn’t seem to work.
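For reference, that dial tcp: lookup error usually means the hosts entry Docker Desktop normally maintains for kubernetes.docker.internal is missing, so the lookup goes out to 8.8.8.8 instead. A quick hedged check (the entry format below is an assumption):

```shell
# check_hosts_entry: succeeds if the given hosts file contains the
# kubernetes.docker.internal name that Docker Desktop normally maintains.
check_hosts_entry() {
  grep -q 'kubernetes\.docker\.internal' "${1:-/etc/hosts}"
}

# Example (sketch):
#   check_hosts_entry || echo "entry missing, lookups will go to external DNS"
```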
But then I remembered that setting KUBECONFIG=/Users/Me/.kube/config had helped me before. When I previously added this environment variable, the whole Kubernetes instance inside Docker started to work fine. I was able to create deployments and services, install an ingress, etc.

So I just copied this file /Users/Me/.kube/config to ~/.kube/, restarted Docker and the Mac, and it seems to be fine now:

sh-3.2# kubectl get nodes
NAME             STATUS   ROLES           AGE   VERSION
docker-desktop   Ready    control-plane   2d    v1.25.4

So, thank you so much! I’ll continue to work with it over the next couple of days and will report back here.

Since I could not reproduce the issue of the grayed-out “Kubernetes” menu item, it could also be a bug which you can report here:

I did it partially in this issue: https://github.com/docker/for-mac/issues/6810. If you need me to separate this into its own ticket, I can do it.

I am just a community member; GitHub is not my area :slight_smile: If you reported the issue on GitHub, they can tell you whether you should create a new issue.

Wait, doesn’t ~ point to /Users/Me? Did you work as the root user? If you did, I understand why the kube config was empty in root’s home.

Yeah, as far as I remember, ~ should point to my current user’s dir, and it definitely points there until I start working with su.

I always have to use su to work with Docker Desktop, because otherwise I’m unable to run Docker/Kubernetes commands under my regular user, despite it having admin rights.

When I type, for example, docker info, I get zsh: command not found: docker. But under sudo it runs fine. I googled a lot searching for how to overcome this problem and even created a topic, which hasn’t been answered yet: Enable docker commands run without sudo

So if my guess is right, the problem is that when I run Kubernetes commands under su, I don’t get correct output from kubectl, because it searches for its config file inside root’s home dir and not in my home dir, where the right version is actually placed. So that’s why my temporary fix with KUBECONFIG=/Users/Me/.kube/config actually worked.
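This matches how kubectl resolves its config: the KUBECONFIG variable wins when set, otherwise it falls back to $HOME/.kube/config, and $HOME changes under su. A simplified sketch of that precedence (the real KUBECONFIG can even be a colon-separated list of files):

```shell
# resolve_kubeconfig: simplified model of kubectl's lookup order;
# use $KUBECONFIG when set, otherwise default to $HOME/.kube/config.
resolve_kubeconfig() {
  echo "${KUBECONFIG:-$HOME/.kube/config}"
}

# Under the regular user this prints /Users/Me/.kube/config;
# under su, HOME becomes root's home, so the default changes too.
resolve_kubeconfig
```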

But the new question is what I have to do now to make my setup work right… It turns out that I now need to keep both configurations (root’s and my user’s) identical to avoid side effects.

Then I think we should find out why the docker command is not found. I commented in the other topic.

su stands for “substitute user”; without arguments it switches you to root. When you run sudo su, you can use sudo --preserve-env su to keep environment variables like KUBECONFIG, but there is another difference. Every user can have a different default shell. On macOS, a non-root user’s default shell is zsh, but when you log in as root using sudo su you get /bin/sh, which is actually GNU Bash. Normally you shouldn’t use the root user, and Docker Desktop does not require it.
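The environment reset is easy to demonstrate without touching root at all: env -i starts a command with a clean environment, which is roughly what a fresh root login shell gets. A sketch:

```shell
# Exported variables are inherited by child processes...
KUBECONFIG="$HOME/.kube/config"
export KUBECONFIG
sh -c 'echo "inherited: $KUBECONFIG"'

# ...but a clean environment (similar to a fresh root login shell)
# loses them, so kubectl falls back to root's own ~/.kube/config.
env -i sh -c 'echo "after reset: ${KUBECONFIG:-<unset>}"'
```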


rimelek

Normally you shouldn’t use root user and Docker Desktop does not require it.

Got it, thanks. So first I need to resolve why I can’t issue docker commands under my regular user (which has admin rights) without sudo, and then get back to this topic if it is still relevant.

As I suspected, this issue was a consequence of another one:

Solved now. Thanks to Rimelek.