Docker Compose Error - Local registry / Isolated Server

Hi All,

I’m looking for help as my docker engineer is unexpectedly unavailable due to bereavement. We have a couple of servers running overseas that, apart from SSH access, are isolated (so no access to online registries or Docker Hub). We have created a local registry on the primary server.

We are running a complex software solution with 18 containers. The build instructions for these containers are split across several Docker Compose files. When we try to run:

docker compose -f docker-compose-packx.yaml build --pull --force-rm

we get the following error:

=> ERROR [mainrelease_ale-db internal] load metadata for 10.0.101.251:5000/mariadb:latest 0.0s ------

[mainrelease_ale-db internal] load metadata for 10.0.101.251:5000/mariadb:latest:


failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to do request: Head "https://10.0.101.251:5000/v2/mariadb/manifests/latest": http: server gave HTTP response to HTTPS client

I get a similar error if I run any of the other docker compose files.

I’m looking for help to understand how to troubleshoot this error. We have allowed insecure registries in the Docker config, and have tried to enable HTTPS for the registry - daemon.json contains:

{
  "insecure-registries": ["10.0.101.251:5000"],
  "allow-nondistributable-artifacts": ["10.0.101.251:5000"],
  "registry-mirrors": [],
  "experimental": false,
  "debug": false,
  "tls": true,
  "tlscert": "/tmp/mainrelease/certs/myprivate251.crt",
  "tlskey": "/tmp/mainrelease/certs/myprivate251.key"
}

registry config.yml:

version: "3"
services:
  tmp_registry:
    image: registry:2
    ports:
      - 5000:5000
    environment:
      - REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/tmp/registry
    restart: unless-stopped
    volumes:
      - /tmp/registry:/tmp/ca4cd/registry

http:
  addr: :5000
  tls: true
  certificate: /tmp/mainrelease/certs/myprivate251.crt
  key: /tmp/mainrelease/certs/myprivatekey251.key

  • Servers are running RHEL 9.00
  • Client: Docker Engine - Community Version: 20.10.22

Any pointers greatly appreciated

It looks like you have the same issue as one I read about today in another forum, which is still open in my browser :slight_smile: and this is the issue that was mentioned there:

So the main issue is that your registry, without TLS certs, will work over HTTP, but then you need to set the protocol in the list of insecure registries, otherwise the requests will be sent over HTTPS. The server then expects HTTP requests and gets HTTPS.

Try this:

"insecure-registries": ["http://10.0.101.251:5000"],

without the TLS certs. Now let’s talk about the TLS.

The tlscert and tlskey in daemon.json are for the Docker daemon itself. Since you can have multiple registries, they couldn’t be for the registry. Even if the registry listens on HTTPS, if the certificate is self-signed and not signed by a trusted, known certificate authority, the registry will still be insecure.

If you set the certificates in the registry and turned TLS on, I am not sure why you still see the HTTP vs HTTPS error, but if the registry was listening on HTTP before, you can switch back until your Docker engineer can work again.

Many thanks for the response. I’ll try this to see if it has a positive impact.

Appreciated!

Unfortunately, the solution above does not fix the problem. Every 12th container, the error message is generated.

I’m stumped.

When using self-signed certificates, you might want to follow the documentations here: Test an insecure registry

Back in the day when I used to work with Docker Enterprise, I remember we had to add our self-signed certificates (actually created by UCP’s built-in CA) to each node like this:

  • Linux: Copy the domain.crt file to /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt on every Docker host. You do not need to restart Docker.

The subfolder of /etc/docker/certs.d must match the domain/IP and port used to access the registry.

Note: the self-signed certificate must use the domain name or IP in the SAN - otherwise the certificate will fail validation, even after the certificate is added in /etc/docker/certs.d. Are you sure your SAN covers the IP 10.0.101.251?
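A quick way to check what a certificate’s SAN covers is openssl’s -ext option. Here a throwaway cert is generated just for illustration (paths, CN and IP are placeholders); on your server, point the second command at the actual registry certificate instead:

```shell
# Generate a throwaway self-signed cert whose SAN covers the registry IP
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/san-test.key -out /tmp/san-test.crt \
  -subj "/CN=registry" -addext "subjectAltName=IP:10.0.101.251"

# Print the SAN section; the IP you use to reach the registry must appear here
openssl x509 -in /tmp/san-test.crt -noout -ext subjectAltName
```

If the IP (or hostname) you put in the image reference is not listed in that output, validation will fail no matter where the CA is installed.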

Is that the exact same error message? Did you restart the Docker daemon to load the new config?

IMPORTANT: I forgot to mention it before, but since there is no “live-restore” enabled in your config, restarting the Docker daemon will also restart or stop containers running without a restart policy. Without restarting the daemon, Docker will still see the old config.
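If stopping containers during a daemon restart is a concern, daemon.json also supports a live-restore flag that keeps containers running while the daemon restarts. A sketch of how it could sit alongside the insecure-registries entry (merge this with your actual daemon.json rather than replacing it):

```json
{
  "insecure-registries": ["http://10.0.101.251:5000"],
  "live-restore": true
}
```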

With the “http” protocol before the IP address, it shouldn’t give you the same error, as it wouldn’t try to send HTTPS requests.

PS.: Just to clarify, I was writing about using the http protocol as it was in the original file, and @meyay’s advice helps you to configure the HTTPS protocol. Both could work, but the exact error message and knowing whether you restarted Docker or not are still important.

Thanks for the support @meyay and @rimelek.

I’m not sure what I am doing wrong with the certificate setup so I started from scratch:
Create Certs Directory (/etc/certs)

mkdir /etc/certs
cd /etc/certs

Generate private key for local CA

openssl genrsa -des3 -out myCA.key 2048

Passphrase used:

equityai infracast move passphrase for ca key generation 2023

Generate a root certificate

openssl req -x509 -new -nodes -key myCA.key -sha256 -days 1825 -out myCA.pem

Add the root certificate to Server CA bundle

cp myCA.pem /etc/pki/ca-trust/source/anchors/myCA.crt

Update root certificates

update-ca-trust

Create Private key for registry

openssl genrsa -out dk.registry.key 2048

Create CSR

openssl req -new -key dk.registry.key -out dk.registry.csr

Create a config file for x509

authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = dk.registry

Create Certificate

openssl x509 -req -in dk.registry.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out dk.registry.crt -days 825 -sha256 -extfile dk.registry.ext
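For reference, the whole CA-plus-leaf flow above can be exercised non-interactively with throwaway files. This is only a sketch (directory, names, lifetimes and the SAN are illustrative), ending with a check that the leaf certificate actually chains to the local CA:

```shell
mkdir -p /tmp/ca-demo && cd /tmp/ca-demo

# throwaway local CA (non-interactive; -subj avoids the prompts)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/CN=myCA" -keyout myCA.key -out myCA.pem

# leaf key and CSR for the registry host name
openssl req -newkey rsa:2048 -nodes -subj "/CN=dk.registry" \
  -keyout dk.registry.key -out dk.registry.csr

# x509 extension file carrying the SAN clients will validate against
cat > dk.registry.ext <<'EOF'
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = dk.registry
EOF

# sign the leaf with the CA
openssl x509 -req -in dk.registry.csr -CA myCA.pem -CAkey myCA.key \
  -CAcreateserial -out dk.registry.crt -days 30 -sha256 -extfile dk.registry.ext

# the leaf should verify against the local CA
openssl verify -CAfile myCA.pem dk.registry.crt
```

If the final verify does not report OK, something went wrong in the signing step and the registry certificate will never be accepted by clients trusting the CA.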

Create config file for the registry:

# cd /etc/docker/registry

nano config.yml

version: "3"
services:
  tmp_registry:
    image: registry:2
    ports:
      - 5000:5000
    environment:
      - REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/tmp/ca4cd/registry
    restart: unless-stopped
    volumes:
      - /tmp/ca4cd/registry:/tmp/ca4cd/registry
http:
  addr: 10.0.101.251:5000
  tls: true
    certificate: /etc/certs/dk.registry.crt
    key: /etc/certs/dk.registry.key
    clientcas:
      - /etc/certs/myCA.pem

Copy Cert to docker Cert.d dir

cp dk.registry.crt /etc/docker/certs.d/

Start The Registry

docker run -d -p 5000:5000 --restart=always --name registry registry:2

Test with curl:
curl -v https://10.0.101.251:5000

*   Trying 10.0.101.251:5000...
* Connected to 10.26.188.251 (10.26.188.251) port 5000 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
*  CAfile: /etc/pki/tls/certs/ca-bundle.crt
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* (5454) (IN), , Unknown (72):
* error:0A00010B:SSL routines::wrong version number
* Closing connection 0
curl: (35) error:0A00010B:SSL routines::wrong version number

Will try loading images, tagging them and pushing to the registry later; however, I am concerned I’ve screwed up the certs due to the curl error.

Can you repost this as a Preformatted text block (look for the </> icon in the editor navigation)?
I assume the yaml content underneath the volume declaration is supposed to be a configuration file for the registry? Please split it out into a new Preformatted text block - furthermore, your volume container path does not match the paths you use in the config yaml.

Further observations:

  • you are using a very old algorithm to generate your certificates. I am not sure if they are supported with TLS 1.2 or TLS 1.3. You might want to use something more recent.
  • If you use a domain name as SAN, and access the service using its IP, then of course the certificate will not match the URL used to access it - I already addressed this in my last response.

Thanks for the feedback.

I’ll try using a new cert type and address the mistakes highlighted (container/IP address vs domain name) and see if this works.

Pete

So I have updated the certs with a newer algorithm, and updated the SAN to be “private251.node” (this being the host name of the server).

I have also updated the config.yml file for the registry to use “private251.node:5000” as the HTTP address, for the registry.

I’m still getting the same curl error.

# curl -v https://private251.node:5000
*   Trying 10.26.188.251:5000...
* Connected to private251.node (10.26.188.251) port 5000 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
*  CAfile: /etc/pki/tls/certs/ca-bundle.crt
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* (5454) (IN), , Unknown (72):
* error:0A00010B:SSL routines::wrong version number
* Closing connection 0

Should the registry be on port 443, or will TLS still work on 5000?
The Docker daemon.json still shows the IP address in the insecure-registries entry - should this be updated to private251.node as well?

Your current configuration is incorrect and incomplete.

This is not part of the compose yml. It needs to be in a separate file called config.yml:

http:
  addr: 10.0.101.251:5000
  tls:
    certificate: /etc/certs/dk.registry.crt
    key: /etc/certs/dk.registry.key
    clientcas:
      - /etc/certs/myCA.pem

Note 1: the tls element had a nonsense true value, which is not valid YAML, as the child elements are already the value of this element.

Note 2: the documentation shows this can also be achieved by using environment variables, so it is not really necessary to use your own config file.
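For example, using the registry’s documented pattern of a REGISTRY_ prefix plus the upper-cased config path, the same http settings could be set as environment variables on the compose service (paths as in the config above):

```yaml
environment:
  - REGISTRY_HTTP_ADDR=0.0.0.0:5000
  - REGISTRY_HTTP_TLS_CERTIFICATE=/etc/certs/dk.registry.crt
  - REGISTRY_HTTP_TLS_KEY=/etc/certs/dk.registry.key
```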

Then add this volume mapping in your compose file underneath the first mapping to map the config.yml to the location it is expected.

      - /whereever/config.yml:/etc/docker/registry/config.yml 

Then add this volume mapping in your compose file underneath the 2nd mapping you just created to map the certificates you created into the container path your config.yml specifies:

      - /whereever/certs:/etc/certs

The host side (left) of the volume mapping is just a placeholder. Of course, you need to replace them with the paths you actually used.
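Putting the pieces together, the volumes section of the compose service would then look roughly like this (host-side paths are placeholders, as above):

```yaml
services:
  tmp_registry:
    image: registry:2
    volumes:
      - /tmp/ca4cd/registry:/tmp/ca4cd/registry
      - /whereever/config.yml:/etc/docker/registry/config.yml
      - /whereever/certs:/etc/certs
```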

Also, you might want to dig into Configuring a registry.

Thanks @meyay

There is no compose yaml, only the config.yml stored in /etc/docker/registry/. I inherited this config.yml when I was asked to look at this while our Docker engineer is on leave. If I understand you correctly, the first half of the file (prior to the http statements) should be in a separate docker-compose.yaml file. Where should this be stored?

As we are airgapped, I am using docker load -i registry.tar to load the registry image into Docker before I can then run the docker run command.

Currently I wouldn’t store it anywhere. It is not clear why that part is in the config yaml; if it is there, it is likely that the config file was not used at all, or that something parsed it and created separate files to start the container.
Although the second time, even your compose part was completely invalid, and it doesn’t mount any config - it mounts only a tmp folder, which doesn’t look like a production registry. All of your quoted code was invalid so far. I didn’t notice it at first as you used a quote instead of a code block (the </> button next to the quote button), so the indentation was not kept.

Is this really how you start the registry? It just starts a registry without any config file… It is no surprise if it doesn’t use SSL…
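By the way, that “wrong version number” error from curl is the generic symptom of an HTTPS client talking to a plain-HTTP listener, which is easy to reproduce locally with a throwaway HTTP server standing in for the registry (port is illustrative):

```shell
# stand-in for a registry that only speaks plain HTTP
python3 -m http.server 18080 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1

# an HTTPS request against it fails with curl error 35 (SSL connect error)
curl -sSk https://127.0.0.1:18080 2>&1; echo "curl exit code: $?"

kill $srv
```

So the curl output alone tells you the listener on port 5000 is serving HTTP, regardless of what the certificates look like.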

You should really find out how that registry was designed to run. I don’t want to suggest anything that may break it. Some of your code indicates that you use Compose; some of it shows you use docker run. In the Compose file only a tmp folder is mounted, and it is unlikely that you would store images in tmp. So right now I am not even sure the files that you shared change anything.

Did that registry really work before?

Thanks for the feedback @rimelek.

You are quite correct this is NOT a production registry, it is in an airgapped dev environment. The root of this challenge is that the solution being run in docker was designed to run on servers with internet access, and now for security reasons has to be deployed in an airgapped environment.

Our company (a startup, 4 people) inherited the software and we are pretty much learning by doing. I haven’t done any serious Linux work for over 20 years, and Docker is brand new to me. Hence a lot of obvious mistakes. I have no idea why /tmp is being used, and this will have to change for the real deployment. But for dev work I think it is OK.

I am going to rebuild the config.yml now that I have read the documentation.

Thanks again for the feedback and help.

Let’s start from the beginning. The Docker documentation has multiple examples how you can start a registry. You can also find example with compose.

https://docs.docker.com/registry/deploying/#deploy-your-registry-using-a-compose-file

That shows the parameters, but you would need to read everything before that to be able to set the variables properly.

I’ll show you the easiest way with which you can still have persistent storage for the images. I will write it in a way that can be useful for anyone; that is why I will mention parts that you don’t need, like tagging the image.

Create a project folder somewhere. I will use $HOME/registry, but you can change it. I use it because this is a folder that I am sure you have access to.

project_dir="$HOME/registry"

Create and go to the project folder

mkdir -p "$project_dir"
cd "$project_dir"

Create a file called docker-compose.yml with the following content:

services:
  registry:
    restart: always
    image: registry:2
    ports:
      - 5000:5000
    volumes:
      - ./data:/var/lib/registry

Based on your command in your first post, I assume you have Docker Compose v2, so this file should be valid.

Run the following command to start the registry:

docker compose up -d

Test, if the registry is running.

docker compose ps

You should see something like this:

NAME                  IMAGE               COMMAND                  SERVICE             CREATED             STATUS              PORTS
registry-registry-1   registry:2          "/entrypoint.sh /etc…"   registry            8 seconds ago       Up 7 seconds        0.0.0.0:5000->5000/tcp, :::5000->5000/tcp

The result is just two lines, only the current forum theme shows it this way.

Assuming you have an existing image that you want to push to the registry, retag the image to use "localhost" (don’t skip this step).

docker image tag <YOUR_IMAGE> localhost:5000/test

Push the image to the registry:

docker push localhost:5000/test

Docker should accept localhost without configuring any insecure registry. At least this is what happens with Docker 23.0.1. I don’t remember older versions.

The next step is that you want to use the IP address of the server so you can access it from another machine in your airgapped environment. Using the server’s IP address requires configuring insecure registries in the daemon.json. Before you change it, you can copy it somewhere in case you want to restore it later.

Without the TLS part and other optional settings, use only the following config in /etc/docker/daemon.json:

{
  "insecure-registries": ["http://10.0.101.251:5000"]
}

http:// at the beginning is important. Restart the Docker daemon; the new configuration will not be applied without it.

sudo systemctl restart docker

Make sure you copy the content of the daemon.json correctly. Otherwise the Docker daemon may not start again.
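One cheap safety net before restarting is to run the file through a JSON parser. Shown here on a sample copy for illustration; on the server, validate /etc/docker/daemon.json itself:

```shell
# write a sample to a temp path for illustration
cat > /tmp/daemon-check.json <<'EOF'
{
  "insecure-registries": ["http://10.0.101.251:5000"]
}
EOF

# prints the parsed JSON if valid; exits non-zero with the error position if not
python3 -m json.tool /tmp/daemon-check.json
```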

Now retag your image to contain the IP address

docker tag <YOUR_IMAGE> 10.0.101.251:5000/test

Push the image

docker push 10.0.101.251:5000/test

You should not see any error message. I ran it locally in my virtual machine.

If this works, only then should you try to add SSL support and add allow-nondistributable-artifacts back to the config file. I intentionally left it out, because I am not sure whether that requires starting with http:// as well.

So as a first step, make sure the simplest registry works and you can improve it if you want to. If you can access the registry only from the airgapped environment, TLS is not so important. In production you would configure a valid certificate signed by a known CA, so that would not be insecure for the Docker client.

@rimelek Many thanks again for the comprehensive breakdown.

I’ll go through this step by step and let you know the outcome.

Pete

Well this seems to have worked.

All images are pushed to the local repository and all build scripts run.

I have a few containers that fail to stay running, but that is another fun project. @rimelek, @meyay many thanks for the help and assistance. Very much appreciated.

Glad that you got it working.

I feel @rimelek’s last post should be marked as the solution, as it’s the one that will help others having the same problem to tackle it.