Hi everyone,
I’ve found a few similar posts, but no answers that work for me. I cannot get HTTPS requests working inside locally running containers. My company uses Zscaler, but no one else has this issue. It seems like Docker might not be using the Zscaler certificate. Here’s what I’m trying:
docker run -it maven:3.9-amazoncorretto-20 bash
bash-4.2# yum update -y
Loaded plugins: ovl, priorities
https://yum.corretto.aws/aarch64/repodata/repomd.xml: [Errno 14] curl#60 - "SSL certificate problem: unable to get local issuer certificate"
Trying other mirror.
https://cdn.amazonlinux.com/2/core/2.0/aarch64/3dd2cc02a0909d35ada8de88675e34b826187bbb822cdf42454cabe6bfc2d6a7/repodata/repomd.xml?instance_id=timeout&region=unknown: [Errno 14] curl#60 - "SSL certificate problem: unable to get local issuer certificate"
OS Version = Ventura 13.5
Docker Desktop Version = 4.22.1
The error that stands out is
“SSL certificate problem: unable to get local issuer certificate”
Anyone got any ideas? The requests work fine outside of Docker.
Finally figured it out. The Zscaler cert needed to be copied to /etc/pki/ca-trust/source/anchors/Zscaler Root CA.crt, and then update-ca-trust extract needed to be executed (this is for Amazon Linux distros).
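For reference, a minimal Dockerfile sketch of that fix, assuming the Zscaler root CA has been exported as Zscaler_Root_CA.crt next to the Dockerfile (the filename is just an example; underscores avoid having to use the JSON form of COPY for a name with spaces):

FROM maven:3.9-amazoncorretto-20
# Add the corporate root CA to the system trust store (Amazon Linux / RHEL layout)
COPY Zscaler_Root_CA.crt /etc/pki/ca-trust/source/anchors/Zscaler_Root_CA.crt
# Rebuild the consolidated trust store so curl/yum pick up the new CA
RUN update-ca-trust extract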
Yep, that’s the common solution if TLS inspection is used.
I just realized I forgot to respond to your previous post. I am surprised that curl tries to take the information from the issuer certificate itself instead of using the information embedded in the x509 certificate.
Is there a way to tell Docker about this without modifying containers? I can’t add the certificate into the container if I can’t pull the container in the first place.
The documentation seems to indicate that there are environment variables named the way I would expect for this functionality, but setting them does not make Docker aware of the corporate CA, which is what would make all of this work.
Those are two different layers: of course it is possible to add the CA certificate used by TLS inspection to the Docker daemon. Since TLS inspection breaks the security context to the registry, you will need to treat it like a secure private registry:
Note: myregistry:5000 is just an example fqdn for a private registry.
The ca.crt used by the TLS inspection must be placed in /etc/docker/certs.d/{fqdn of the registry}/ca.crt.
I don’t recall whether the Docker daemon needs to be restarted to actually pick up the certificate.
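Concretely, on a Linux host it would look something like this (keeping myregistry:5000 as the placeholder registry from above, and assuming the inspection CA was exported as zscaler-root-ca.crt):

# create the per-registry certificate directory for the Docker daemon
sudo mkdir -p /etc/docker/certs.d/myregistry:5000
# the file must be named ca.crt inside that directory
sudo cp zscaler-root-ca.crt /etc/docker/certs.d/myregistry:5000/ca.crt
# a restart shouldn't be required, but doesn't hurt if in doubt
sudo systemctl restart docker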
Alright, well now I’m even more confused. I’ll rephrase/restate: At my employer, Zscaler either does a Man In The Middle inspection of TLS traffic or simply proxies the traffic. In either case, the certificate that any HTTP client on my computer sees for any server is a Zscaler certificate signed by a certificate authority that my employer runs.
Is there a way to get Docker to trust this CA so that I can pull images and make HTTPS calls from within the containers, without creating custom container images which contain this certificate? When deployed, these containers don’t need to know about our company CA and don’t communicate through Zscaler, so the certificates aren’t needed in production, only while the containers are being run from development machines.
I have no idea what Zscaler is or does. In the original discussion it was established that Zscaler does TLS inspection.
So if neither building images (which actually runs a container per image layer) nor running containers is affected by TLS inspection, then there would be no need to modify the image.
Well there is a need, because currently I get certificate validation errors when building containers, and those are caused by Zscaler and its TLS certificate. We need to test the images on our development machines in some cases, so we need to be able to configure Docker Desktop to trust the enterprise certificate authority.
I can work around this for now by turning the proxy on in Docker Desktop settings and leaving all of the fields on that settings page blank (I have no idea why this makes it work), but this workaround won’t always be possible. My firewall and proxy team is going to change how the proxy works at some level, after which this workaround will no longer function.
So, I do need to make the images work on development machines where Zscaler is a minor problem which will soon become a major problem.
@samjb addressed where to copy the ca.crt in a RHEL-based image and how to update the CA truststore here. For Debian, Ubuntu and Alpine based images the approach is similar, though the paths and the command to update the CA truststore differ.
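For example, inside Debian/Ubuntu-based and Alpine-based images it would look roughly like this (the certificate filename is again just an example; note that update-ca-certificates only picks up files with a .crt extension):

# Debian/Ubuntu-based images
cp Zscaler_Root_CA.crt /usr/local/share/ca-certificates/Zscaler_Root_CA.crt
update-ca-certificates

# Alpine-based images (the ca-certificates package provides the same command)
apk add --no-cache ca-certificates
cp Zscaler_Root_CA.crt /usr/local/share/ca-certificates/Zscaler_Root_CA.crt
update-ca-certificates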
I addressed how to make the Docker Engine and Docker Desktop trust the CA certificate when pulling images here.
Is it safe to assume that everything is sorted out now?
I feel like my question hasn’t been answered and that I’m not speaking the same language as you. I don’t want to modify the container image, if I can avoid it. So, either it isn’t possible to do what I want without that, or I’m not explaining what I need clearly enough, or both.
My infrastructure is getting to a point where I will not be able to pull containers, anyway. I can’t modify a container image that I can’t pull from a registry, so modifying a container image alone will not solve my issue. That’s why I don’t want to rely on that solution. I can’t rely on that solution.
I have read all the messages and it seems to me that @meyay indeed told you what you can do. There is no single solution that makes HTTPS requests work both from a container and from the host. A container is an isolated system, so it needs its own set of CA certificates, while the Docker daemon runs on the host, so you need to configure the host to make docker pull work. Docker Desktop complicates things, as the daemon runs in a virtual machine which you can’t change directly, so Docker Desktop needs to handle some files on the host and mount or copy them into the virtual machine. I don’t know whether that is possible with certificates; I never had to use Docker Desktop in an environment where certificates were changed.
Since you need to pull the images from a registry, if Zscaler changes its certificate you need to configure the daemon to trust the new one.
So again, one setting for the docker pull and another for containers.
You can still have development images whose only difference is that you add the certificates. Depending on who deploys the containers and how, you can also bind mount the certificate bundle file (the one that contains all the certificates) into the containers, but you need to know where the distribution or application expects it. In the case of Ubuntu, the current documentation says
As a single file (PEM bundle) in /etc/ssl/certs/ca-certificates.crt
but some distributions just symlink from one path to another.
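As a sketch of that bind mount approach, assuming both the host and the container expect the bundle at the Ubuntu path (my-dev-image is a hypothetical image name; adjust the container-side path for other distributions):

# mount the host's CA bundle read-only over the container's bundle
docker run --rm \
  -v /etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt:ro \
  my-dev-image:latest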
“”"
I can work around this for now by turning the proxy on in Docker Desktop settings and leaving all of the fields on that settings page blank (I have no idea why this makes it work) but this workaround won’t always be possible.
“”" Thank you, this has saved me after many hours of research.
I’ve had a Docker Desktop for Mac version from ~2022 which was running Dockerfile builds no problem. Upgraded to:
Server: Docker Desktop 4.34.2 (167172)
Engine:
Version: 27.2.0
API version: 1.47 (minimum version 1.24)
and suddenly started getting “tls: failed to verify certificate: x509” on pulls. Mind you, outside of Dockerfiles, regular docker pulls were fine. This breaks any software which runs docker builds underneath; in my case, Pulumi.
TL;DR: when devs upgrade Docker Desktop they expect things to just work. Dozens of forum threads explaining the intricacies don’t help when the previous version “just worked” and a one-second toggle in Docker Desktop to set up an empty proxy also fixes this. My company’s InfoSec team looked at this issue vis-à-vis Zscaler certificates and, after nine months, shrugged their shoulders and concluded it’s a Docker problem since it worked before. This is what devs face sometimes.