HTTPS requests in a container don't work

Hi everyone,
I’ve found a few similar posts, but no answers that work for me. I cannot get HTTPS requests working inside locally running containers. My company uses ZScaler, but no one else there has this issue. It seems like Docker might not be using the ZScaler cert. Here’s what I’m trying:

docker run -it maven:3.9-amazoncorretto-20 bash 
bash-4.2# yum update -y
Loaded plugins: ovl, priorities
https://yum.corretto.aws/aarch64/repodata/repomd.xml: [Errno 14] curl#60 - "SSL certificate problem: unable to get local issuer certificate"
Trying other mirror.
https://cdn.amazonlinux.com/2/core/2.0/aarch64/3dd2cc02a0909d35ada8de88675e34b826187bbb822cdf42454cabe6bfc2d6a7/repodata/repomd.xml?instance_id=timeout&region=unknown: [Errno 14] curl#60 - "SSL certificate problem: unable to get local issuer certificate"

OS Version = Ventura 13.5
Docker Desktop Version = 4.22.1

The error that stands out is

“SSL certificate problem: unable to get local issuer certificate”

Anyone got any ideas? The requests work fine outside of Docker.

Is ZScaler performing TLS inspection?

If that is the case, the certificate of the CA that ZScaler uses to issue its ad-hoc certificates needs to be present in the container as well.

You can check yourself by running the command in the first line of the following output:

docker run -it --rm maven:3.9-amazoncorretto-20 curl --verbose "https://cdn.amazonlinux.com/2/core/2.0/aarch64/3dd2cc02a0909d35ada8de88675e34b826187bbb822cdf42454cabe6bfc2d6a7/repodata/repomd.xml?instance_id=timeout&region=unknown"
*   Trying 54.230.206.64:443...
* Connected to cdn.amazonlinux.com (54.230.206.64) port 443 (#0)
* ALPN: offers h2,http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
*  CAfile: /etc/pki/tls/certs/ca-bundle.crt
*  CApath: none
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=cdn.amazonlinux.com
*  start date: Feb 28 00:00:00 2023 GMT
*  expire date: Nov  3 23:59:59 2023 GMT
*  subjectAltName: host "cdn.amazonlinux.com" matched cert's "cdn.amazonlinux.com"
*  issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02
*  SSL certificate verify ok.
...

I expect the issuer to be different in your output.

Thanks for your help :slight_smile:

This is what I get:

docker run -it --rm maven:3.9-amazoncorretto-20 curl --verbose "https://cdn.amazonlinux.com/2/core/2.0/aarch64/3dd2cc02a0909d35ada8de88675e34b826187bbb822cdf42454cabe6bfc2d6a7/repodata/repomd.xml?instance_id=timeout&region=unknown"

*   Trying 18.172.153.95:443...
* Connected to cdn.amazonlinux.com (18.172.153.95) port 443 (#0)
* ALPN: offers h2,http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
*  CAfile: /etc/pki/tls/certs/ca-bundle.crt
*  CApath: none
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate

Finally figured it out. The ZScaler cert needed to be copied into /etc/pki/ca-trust/source/anchors/Zscaler Root CA.crt, and then update-ca-trust extract had to be run (this is for Amazon Linux based distros).
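In Dockerfile form, that is roughly the following (assuming the Zscaler root CA has been exported to a file next to the Dockerfile; zscaler-root-ca.crt is just an example file name):

# Amazon Linux based image: add the corporate root CA and rebuild the trust store
FROM maven:3.9-amazoncorretto-20

# example file name for the exported Zscaler root CA certificate (PEM format)
COPY zscaler-root-ca.crt /etc/pki/ca-trust/source/anchors/zscaler-root-ca.crt

# regenerate the system CA bundle so curl and yum trust the new anchor
RUN update-ca-trust extract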


Yep, that’s the common solution if TLS inspection is used.

I just realized I forgot to respond to your previous post. I am surprised that curl tries to take the information from the issuer certificate itself instead of using the information embedded in the x509 certificate.

Is there a way to tell Docker about this without modifying containers? I can’t add the certificate into the container if I can’t pull the container in the first place.

The documentation seems to indicate that there are environment variables named the way I would expect for this functionality, but setting them does not make Docker aware of the corporate CA, which is what would make all of this work.

Those are two different layers: of course it is possible to add the CA certificate used by TLS inspection to the Docker daemon. As TLS inspection breaks the security context to the registry, you will need to treat the registry like a secure private registry:

Note: myregistry:5000 is just an example FQDN for a private registry.

The ca.crt used by the TLS inspection must be placed in /etc/docker/certs.d/{fqdn of the registry}/ca.crt.
I don’t recall whether the Docker daemon needs to be restarted to actually use the certificate.
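Roughly like this on a Linux host running the Docker engine (myregistry:5000 is the example FQDN again, and zscaler-root-ca.crt an example file name for the exported CA):

# the directory name must match the registry host (and port, if any)
sudo mkdir -p /etc/docker/certs.d/myregistry:5000
sudo cp zscaler-root-ca.crt /etc/docker/certs.d/myregistry:5000/ca.crt
# restart the daemon if the certificate is not picked up right away
sudo systemctl restart docker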

On Docker Desktop for Mac the solution can be found in the FAQs: https://docs.docker.com/desktop/faqs/macfaqs/#how-do-i-add-tls-certificates
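From what I recall of that FAQ, the gist is to add the CA to the macOS keychain and then restart Docker Desktop, roughly like this (again with zscaler-root-ca.crt as the example file name):

# add the corporate root CA to the system keychain as a trusted root
sudo security add-trusted-cert -d -r trustRoot \
  -k /Library/Keychains/System.keychain zscaler-root-ca.crt
# then quit and restart Docker Desktop so the VM picks up the change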

I assume you mean the image, as it would be tedious to inject the certificates manually into each container.

Update: fixed a wrong wording

Alright, well now I’m even more confused. I’ll rephrase/restate: at my employer, Zscaler either performs a man-in-the-middle inspection of TLS traffic or simply proxies the traffic. In either case, the certificate that any HTTP client on my computer sees for any server is a Zscaler certificate signed by a certificate authority that my employer runs.

Is there a way to get Docker to trust this CA so that I can pull images and make HTTPS calls from within the containers, without creating custom container images which contain this certificate? When deployed, these containers don’t need to know about our company CA and don’t communicate through Zscaler, so the certificates aren’t needed in production, only while the containers are run from development machines.

I have no idea what Zscaler is or does :slight_smile: In the original discussion it was established that Zscaler does TLS inspection.

So if neither building images (which actually runs a container per image layer) nor running containers is affected by TLS inspection, then there would be no need to modify the image.

Well there is a need, because currently I get certificate validation errors when building containers, and those are caused by Zscaler and its TLS certificate. We need to test the images on our development machines in some cases, so we need to be able to configure Docker Desktop to trust the enterprise certificate authority.

I can work around this for now by turning the proxy on in Docker Desktop settings and leaving all of the fields on that settings page blank (I have no idea why this makes it work), but this workaround won’t always be possible. My firewall & proxy team are going to change how the proxy works at some level, so this workaround will stop working.

So, I do need to make the images work on development machines where Zscaler is a minor problem which will soon become a major problem.

@samjb addressed where to copy the ca.crt in a RHEL-based image and how to update the CA trust store here. For Debian, Ubuntu and Alpine based images the approach is similar, though the paths and the command to update the CA trust store may differ.
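For example, the rough Debian/Ubuntu and Alpine equivalents of that Dockerfile snippet look like this (zscaler-root-ca.crt is again just an example file name):

# Debian/Ubuntu based images: the file must end in .crt
COPY zscaler-root-ca.crt /usr/local/share/ca-certificates/zscaler-root-ca.crt
RUN update-ca-certificates

# Alpine based images: ca-certificates may need to be installed first
COPY zscaler-root-ca.crt /usr/local/share/ca-certificates/zscaler-root-ca.crt
RUN apk add --no-cache ca-certificates && update-ca-certificates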

I addressed how to make the docker engine and docker desktop trust the ca certificate when pulling images here

Is it safe to assume that everything is sorted out now?

I feel like my question hasn’t been answered and that I’m not speaking the same language as you. I don’t want to modify the container image, if I can avoid it. So, either it isn’t possible to do what I want without that, or I’m not explaining what I need clearly enough, or both.

My infrastructure is getting to a point where I will not be able to pull containers, anyway. I can’t modify a container image that I can’t pull from a registry, so modifying a container image alone will not solve my issue. That’s why I don’t want to rely on that solution. I can’t rely on that solution.

It’s fine. I’ll just keep trying things until it works, or we’ll simply have to stop using Docker entirely.

I have read all of the messages, and it seems to me that @meyay has indeed told you what you can do. There is no common solution that makes HTTPS requests work both from a container and from the host. A container is an isolated system, so it needs its own set of CA certificates, while the Docker daemon runs on the host, so you need to configure the host to make docker pull work. Docker Desktop complicates things, because the daemon runs in a virtual machine which you can’t change, so Docker Desktop needs to handle some files on the host and mount or copy them into the virtual machine. I didn’t know that this was possible with certificates; I never had to use Docker Desktop in an environment where certificates were changed.

Since you need to pull the images from a registry, if Zscaler changes its certificate you need to configure the daemon to trust the new one.

So again, one setting for the docker pull and another for containers.

You can still have development images whose only difference is that you add the certificates. Depending on who will deploy the containers and how, you can also bind mount the certificate bundle file (the one that contains all the certificates) into the containers, but you need to know where each distribution or application expects it (see the sketch below the quoted path). In the case of Ubuntu, the current documentation says

  • As a single file (PEM bundle) in /etc/ssl/certs/ca-certificates.crt

but some distributions just symlink one path to another.
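A rough sketch of the bind mount variant for an Ubuntu based image (my-ubuntu-based-image:dev and the bundle file name are placeholders; the bundle would be the default Ubuntu bundle with the corporate root CA appended):

# mount a combined PEM bundle over the path Ubuntu expects, read-only
docker run --rm \
  -v "$PWD/ca-certificates-with-zscaler.crt:/etc/ssl/certs/ca-certificates.crt:ro" \
  my-ubuntu-based-image:dev

Note that applications which bring their own trust store (Java’s cacerts, for example) won’t pick this up and need to be handled separately.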