I have a hopefully not-so-uncommon issue, so someone can hopefully point me to a working solution (googling did not help so far…):
We host our application code with Atlassian Bitbucket. To build the application, we build a Docker image and then publish it to our Docker registry.
To build and push it, we use Bitbucket’s pipeline feature. So a Docker container is started that executes various docker commands to build the image and push it to our local Docker registry.
The problem is: the company uses “Z-Scaler” for security reasons, which means each and every HTTPS connection (and thus the docker login/push/… commands) is intercepted by Z-Scaler to inspect the traffic. This requires trusting Z-Scaler’s root CA certificate, which is not part of the common CA certificate collections in any Linux or Windows installation. So to make it work, we have to add the certificate to the system.
I added it to the Docker host system: put it into /usr/local/share/ca-certificates/ and ran “update-ca-certificates”, followed by a restart of the Docker service. This works very well on the Docker host: docker as well as wget and curl recognize the certificate, and HTTPS works.
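For reference, the host-side steps were essentially the following (the certificate file name is illustrative, and the restart command assumes systemd):

```shell
# Copy the Z-Scaler root CA into the local trust store (file name is illustrative).
sudo cp zscaler-root-ca.crt /usr/local/share/ca-certificates/
# Rebuild /etc/ssl/certs/ca-certificates.crt from the installed certificates.
sudo update-ca-certificates
# Restart the Docker daemon so it picks up the refreshed trust store.
sudo systemctl restart docker
```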
But I need the certificate within the running Docker container, so I added it there by downloading and applying the certificate at container runtime (so it is not part of the Docker image itself). This works very well for curl and wget. But the docker commands executed inside the Docker container do not recognize the certificate.
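Roughly, the runtime step inside the container looks like this (the download URL is a placeholder, and a Debian/Ubuntu base image with the ca-certificates package is assumed):

```shell
# Fetch the Z-Scaler root CA from an internal location (placeholder URL).
# -k: the proxy's CA is not trusted yet, so skip verification for this one fetch.
curl -fsSk https://intranet.example.com/zscaler-root-ca.crt \
  -o /usr/local/share/ca-certificates/zscaler-root-ca.crt
# Refresh the container's CA bundle; curl and wget pick this up immediately.
update-ca-certificates
```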
As far as I know, I need to restart the Docker service after adding a certificate. On the Docker host this is not an issue. But how can I make the certificate known to the docker commands inside the Docker container?
I would like to avoid a custom Docker image that already contains the certificate, for various reasons. But please tell me if that would solve my problem…
If you don’t want to do this at build time, you can call a script (Bash for Linux, PowerShell for Windows) when the container starts. Most likely this will be the entrypoint in your Dockerfile; inside the script, remember to hand off to the final entrypoint that keeps the container running. Within the script, you run regular Bash or PowerShell commands, just as you would on a regular Linux or Windows machine.
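A minimal sketch of such a wrapper for the Linux case (the script name and the /certs mount path are assumptions; `exec "$@"` hands control to the image’s original entrypoint/command so the container keeps running):

```shell
#!/bin/sh
# entrypoint-wrapper.sh -- hypothetical wrapper script; names are illustrative.
set -e

# Install the CA certificate that was mounted or downloaded into /certs.
cp /certs/zscaler-root-ca.crt /usr/local/share/ca-certificates/
update-ca-certificates

# Hand off to the real entrypoint so the container runs as before.
exec "$@"
```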
The Bitbucket pipeline is very similar to the ones on GitLab or GitHub. It starts an appropriate Docker image and passes commands to it at runtime to do whatever you need to build, deploy, …
In my case, I’m using this pipeline to build a Docker image with my application in it, and finally I push it to my Docker registry.
But: each and every HTTPS communication is routed through Z-Scaler, which hooks into my HTTPS communication with its own certificate chain. The issue I now have is that Z-Scaler’s CA certificate is not in any operating system’s default CA certificate collection, so the Docker container I’m running is not aware of it. I need to install Z-Scaler’s root CA.
I got this to work for Linux commands like wget or curl, as well as for Java.
But I’m not able to tell the docker binaries in this container that there is another certificate to consider when making HTTPS connections.
One can sum it up as: how can I tell Docker, in a docker-in-docker situation, that there is an additional certificate?
I’m not aware of a command-line option for this… All I found was: install the certificate as usual and restart the Docker service.
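One thing worth checking: for connections to a registry, the Docker daemon also reads per-registry CA certificates from /etc/docker/certs.d/&lt;registry-host&gt;:&lt;port&gt;/ca.crt, and that directory is consulted per connection, so no daemon restart is needed. A sketch, with a hypothetical registry host and an illustrative source file name:

```shell
# "registry.example.com:443" is a placeholder for your registry host and port.
mkdir -p /etc/docker/certs.d/registry.example.com:443
# ca.crt is the file name Docker expects; the source file name is illustrative.
cp zscaler-root-ca.crt /etc/docker/certs.d/registry.example.com:443/ca.crt
```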
Of course I asked the Atlassian guys what to do, but I have not yet received a proper answer. So I hoped that someone from the Docker specialists could shed some light on this docker-in-docker situation…
Have you considered bind-mounting a prepared cacerts file (or taking the one from the build node) that already includes your Zscaler CA certificate into the docker-in-docker container?
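For illustration, such a bind mount could look like this when starting a docker-in-docker container manually (the bundle path assumes a Debian-style layout; in a Bitbucket pipeline the equivalent mount would be configured differently):

```shell
# Mount the host's already-updated CA bundle read-only into the dind container.
# --privileged is required for the docker:dind image to run its own daemon.
docker run -d --privileged \
  -v /etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt:ro \
  docker:dind
```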
I just looked at the image description on Docker Hub and found this snippet: