Develop containers on embedded Kubernetes

I have what seemed like a very simple need but have been unable to get to the end of the task. What I want is to be able to test custom code under Kubernetes, and to install this code using Helm. I wasted days asking various AIs, each with worse answers than the last, and I’ll attempt not to recap them here.

I am using Docker Desktop 4.53.0 and Kubernetes 1.34.1. I have tried both with Kind (my preferred configuration) and with kubeadm (good enough if it would only work). I am using containerd for images.

My initial problem is that the Docker registry is not shared with Kubernetes. In theory, switching from Kind to kubeadm would make it shared. It is not. In theory I should be able to install a registry that would fix this, but port 5000 is already in use. This seems very suspicious and may be the root of some problems.

I admit to trying lots of different things, so maybe there are some bad configs still lying around.

Anyhow, once I get a shared registry I would like to be able to install my service using Helm (this part is important) and have connectivity:

  1. From local environment e.g. IntelliJ to the service
  2. From the service to Postgres running as a docker container
  3. From one service to another - basically what Kubernetes normally does on a normal day
  4. Ultimately I’d like to be able to scale up the replica count to test some concurrency. But I only mention that because it seems to push us towards LoadBalancer instead of NodePort which is yet another failure point.

It all sounds terribly simple. Like THE basic use case for buying the product sort of simple. But I have literally run out of ideas, places to ask, and things to read. Every example either just doesn’t work at all - by far the most common - or requires some convoluted, fragile, and time-consuming process that makes it untenable for day-to-day operations.

I will gleefully accept any shame if someone can point me to a RTFM link! But my experience is everything written even a year ago is simply wrong. Regardless I am flat out of ideas.

Can you explain what you mean by that? A registry is where you push images, and you can pull them from the same registry on another server. All you need is network access to the registry and, optionally, a username and password when the registry is password protected.

You could use any port number. If you keep port 5000, you need to name images like myregistry:5000/myrepo:version. Port 5000 is just the usual default, but you don’t have to keep it.
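To make the naming convention concrete, here is a minimal sketch. The registry host "myregistry", the repository "myrepo", and the tag "1.0.0" are all hypothetical placeholders:

```shell
# The registry's host:port becomes a prefix of the image name.
REGISTRY=myregistry:5000
IMAGE="$REGISTRY/myrepo:1.0.0"

# You would then retag a local image with that prefix and push it:
#   docker tag myrepo:1.0.0 "$IMAGE"
#   docker push "$IMAGE"
echo "$IMAGE"
```

Anything pulling from another machine (including a Kubernetes node) would reference the image by that same prefixed name.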

It does, so we can most likely help here, but first we should understand the problem. I am still not sure I do.
My guess is that by “shared registry” you meant the local image store: when you pull an image with docker pull, or build the image locally, the image is not available in Kubernetes.

If I remember correctly, Docker Desktop uses the same image store for Docker and Kubernetes. But when you use “kind”, each Kubernetes node is a Docker container, so you can’t just use the same image store. Each node has its own store, not shared with the main Docker instance.

If that is the problem, then it is true that a local registry to which you push your images could help, as long as all Kubernetes nodes have access to it.

Another idea, which I never tried, could be loading local images manually into the container runtime of the Kubernetes nodes. It would only work if the node containers have the necessary binaries like docker or ctr, and you would first have to copy the exported tar into a node container. So installing a local registry would be easier, especially when you update images often.
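As a sketch of that idea: kind actually ships a helper subcommand that does the export/copy/import dance for you. The cluster name "kind", node name "kind-control-plane", and image "myservice:0.1.0" below are all hypothetical:

```shell
IMAGE=myservice:0.1.0
NODE=kind-control-plane

# kind's built-in helper (the convenient route):
#   kind load docker-image "$IMAGE" --name kind
#
# Roughly the manual equivalent, piping a tar into ctr inside the
# node container (requires ctr to exist there):
#   docker save "$IMAGE" | docker exec -i "$NODE" \
#     ctr --namespace k8s.io images import -
echo "load $IMAGE into $NODE"
```

Either way the image ends up in the node’s containerd store, so a pod with imagePullPolicy IfNotPresent can find it.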

If you are looking for one, currently I would recommend Harbor if you need a nice GUI as well https://goharbor.io/

But you are right, a simple registry without authentication would be more than enough locally. Just change the port. You would also need to use the same hostname from the host and from inside the Kubernetes cluster.
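For kind specifically, its documentation describes wiring a local registry into the cluster with a config roughly like this. The registry container name "kind-registry" and host port 5001 are assumptions; the registry container also has to be attached to the "kind" Docker network (e.g. docker network connect kind kind-registry) so the nodes can reach it:

```yaml
# Hypothetical kind cluster config: containerd inside each node treats
# "localhost:5001" as a mirror of the registry container.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5001"]
    endpoint = ["http://kind-registry:5000"]
```

That way "localhost:5001/..." image names resolve both on the host (where you push) and inside the cluster (where the nodes pull).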

I could continue, but I don’t want to assume something and write a long post about it, so please confirm whether I understood your issue or not.

Docker Desktop has a built-in load balancer, for localhost at least. It is an old post, but the related part is still true as far as I know.

AI will not care about what you really want. It will usually act like it knows the answer and generate text based on your text. I can imagine that the AI was confused about “shared registry” and didn’t try to understand the real problem. We do, so let’s see how much better we are at it :slight_smile:

UPDATE:

I just tested Docker Desktop with Kind and it found my locally tagged image even when I called it “localhost/test”. There is a “containerd-registry-mirror” in Docker Desktop, so I guess that was responsible for loading the image. Now I am even more curious whether I understood your question correctly.

Thanks so much for your reply. I will try to explain more fully.

First, to level set: I created a good docker compose file that builds my executable jar into an image and loads it into Docker. I can then run the image, and Docker makes a container and runs it. I set port forwarding on the container (8080:8080) and I can reach it fine from local Postman, for example. The program running in the Docker container reaches Postgres on the 172.17.0 Docker subnet without any problem. In short, everything works great in Docker. So that sets where we are now.

What I have read, which I suspect was right at some point in time, is that Docker will share its image registry with Kubernetes if you use kubeadm - port 5000 of course being the Docker registry. A few releases ago Docker did not have a default registry, so I had to start one manually:

 docker run -d -p 5000:5000 --restart=always --name registry registry:2

Now when I do that I get the error that port 5000 is in use. So one would expect Kubernetes could pull images from that, at least for kubeadm. But with pull policy either IfNotPresent or Never (with Never being what I understand as the preferred value), the helm upgrade just fails to find the image.

With something running on port 5000 I expect that to be a registry, although I do not see it in docker ps. I do not believe choosing another port to run what is probably another registry would be going in the right direction.

I am aware of the concept of exporting the tar file etc., but that is not a path I am willing to follow. It seems like a very inelegant kludge, and I am looking for something much more mainstream. Basically I want to use the same tools that work in “real” Kubernetes.

I also read a post talking about pushing images to Docker Hub. That would bypass the registry issue, but it is completely unacceptable to push these images out onto the internet. Although I must admit I could run a Nexus or Artifactory docker container. Maybe that is a path. A little weird. OK, a lot weird when all I want is to load an image into a pod. But it could work.

Everything you said about kind, nodes, and registries matches my understanding as well. I think down that path lies “roll your own installation”. I am not sure that is an entirely bad idea. Most of my problems appear to be because of Docker, not Kubernetes. Maybe Minikube would be better. I just have so very much invested in this solution, and I do admit the Docker Desktop GUI is pretty pleasant on the eyes. I just don’t know.

I am confused by your statement about Docker Desktop and a built-in load balancer. I was talking about the stretch goal of replicas inside Kubernetes, e.g. the Service type. Anyhow, at this point I’d call it a win if I could get any deployment to find any local image.

The error message meant that the port was not available on your Mac host, not that there was a container running in Docker Desktop. There is actually a ControlCenter process on macOS using port 5000.

You can run this command:

lsof -i -P | grep LISTEN | grep :5000

Which will show something like this:

ControlCe   717   ta   11u  IPv4 0xc844200071790468      0t0    TCP *:5000 (LISTEN)
ControlCe   717   ta   12u  IPv6 0x4029cc889dca37b4      0t0    TCP *:5000 (LISTEN)

So if you want to run a registry container, you need to choose another port on the host, or disable the AirPlay Receiver as described on Stack Overflow if you don’t need that feature. I just tested it; it worked.

Then ignore what I wrote. We can focus on the first issue and come back to the others later, maybe even in a new topic.

I am not aware of any default registry in general. Maybe I just missed a new feature or forgot about an old one, but it is usually not needed locally. And when I tested Kind in Docker Desktop on my Mac, my locally tagged images were used in Kubernetes. I actually didn’t expect that, but it totally makes sense, since Docker Desktop is for development, so local images should be available without a lot of extra work. It seems they are, but you mentioned imagePullPolicy set to “Never”. If Docker Desktop just uses a kind of proxy to access local images, it has to pull from that, but when the imagePullPolicy is “Never”, a new registry container will not help either.
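To make the pull-policy distinction explicit, here is an excerpt of a Deployment pod spec (image name and port are hypothetical):

```yaml
containers:
- name: myservice
  image: localhost:5001/myservice:0.1.0
  # IfNotPresent: pull from the registry only when the node's containerd
  # store doesn't already have the image.
  # Never: use only what is already in the node's store; a registry (or
  # a registry mirror/proxy) cannot help if the image isn't there.
  imagePullPolicy: IfNotPresent
```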

Even with Kubernetes as target environment, people usually use Docker during development.

Just to share what’s possible with docker itself:

You can initialize swarm mode and deploy swarm services with docker stack deploy -c <compose file>. It provides a virtual IP and load balancing per service (based on IPVS, which Kubernetes services also use to provide a virtual IP and load balancing), and distributes requests round-robin to the replicas.
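For illustration, a minimal stack file for that could look like this (the image name is hypothetical); after docker swarm init, you would deploy it with docker stack deploy -c stack.yml demo:

```yaml
services:
  web:
    image: myservice:0.1.0   # hypothetical local image
    ports:
      - "8080:8080"          # swarm's routing mesh publishes the port
    deploy:
      replicas: 3            # the service VIP round-robins across these
```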

If you choose not to use swarm mode and stick to docker compose deployments, you can still use replicas. But those don’t have a load balancer per service, so publishing ports will not work with replicas. You would need to introduce a reverse proxy (like Traefik, Caddy, nginx, HAProxy, …) that publishes the ports and forwards traffic to the target service.
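A sketch of that compose layout (image name hypothetical; the nginx configuration is omitted - it would proxy_pass to http://app:8080, and Docker’s internal DNS resolves "app" to the replica IPs):

```yaml
services:
  app:
    image: myservice:0.1.0
    deploy:
      replicas: 3   # no published ports here; the proxy fronts them
  proxy:
    image: nginx:alpine
    ports:
      - "8080:80"
```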

Of course, if you want to work on or test the deployment mechanics themselves, then there is no way around Kubernetes.

A few truisms of life in general and IT in particular seem appropriate here.

  1. It is not what you don’t know that will hurt you. It is what you think you know but are wrong about.
  2. Computers only deal in binary. A thing is either
    a. Letter Perfect
    b. Hopelessly Wrong

So! I never considered something else could be using port 5000. I certainly never thought about MacOS itself. So that is rule 1 in action.

Rule 2 can be restated as “EVERYTHING IS BROKEN” until “Nothing Is Broken”. I had done a lot of stuff getting the helm chart just so, and switching from Kind to kubeadm was certainly essential. But for all of that, I still hadn’t gotten the image anywhere Kubernetes could find it. So I really hadn’t moved forward at all.

Once I got the registry running - which, for future readers, was as easy as:

 docker run -d --restart=always -p [::1]:5001:5000 -p 127.0.0.1:5001:5000 --name registry -v ~/docker-registry:/var/lib/registry registry:2

The only other caveat is that the image reference needs to be fixed (yes, I need to make more helm variables):

replicaCount: 2

image:
  repository: localhost:5001/xxxxxx
  tag: "0.1.0"
  pullPolicy: IfNotPresent
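For future readers, one iteration of the resulting dev loop then looks roughly like this (the image name "myservice", tag, and chart path are hypothetical placeholders):

```shell
# Registry from above listening on localhost:5001.
IMAGE=localhost:5001/myservice:0.1.0

# Build, push to the local registry, and roll the release:
#   docker build -t "$IMAGE" .
#   docker push "$IMAGE"
#   helm upgrade --install myservice ./chart \
#     --set image.repository=localhost:5001/myservice \
#     --set image.tag=0.1.0
echo "$IMAGE"
```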

Using the Docker bridge 172 address for Postgres from within the microservices works fine. I was hoping it would but wasn’t sure, since I’d never gotten this far with Docker.

External connections, e.g. from Postman, get session affinity because Postman does not close the network connection (at least that’s what I read, and it seems true). However, adding the HTTP request header “Connection: close” will break the quasi-affinity, and you can get some decent round-robin testing should you need it.

Really I can’t think of anything else I need from it right now. I can install my service with a helm chart and everything has reasonable connectivity (I’m not setting up Eureka) to everything else.

I was SOOOO close. Which in IT looks a lot like Not Started. Thank you so much for helping me get across the finish line!

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.