Docker proxy variables for multiple servers

I have a fairly complicated Docker build system.
I use a CI/CD server to initiate builds on several target Docker servers, which sit on various subnets of my corporate network. The CI/CD machine runs the Docker client locally, which controls the Docker daemon on the target servers remotely.

My single CI/CD server contains the source and the Dockerfiles.
From this CI/CD server I set $DOCKER_HOST to point the Docker client at whichever target I want to deploy to.
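Roughly, the per-target switch looks like this (a minimal sketch; the host name, user, and local port are illustrative, not my real values):

```bash
#!/bin/sh
# Sketch of a per-target deploy wrapper on the CI/CD machine.
TARGET="$1"    # e.g. "build-host-a" -- illustrative name

# Each target daemon only listens on its own loopback (more on that below),
# so first forward a local port to it over SSH.
ssh -fN -L 12375:127.0.0.1:2375 "deploy@${TARGET}"

# Point the local Docker client at the tunnelled daemon; every
# docker / docker-compose command below now runs against $TARGET.
export DOCKER_HOST="tcp://127.0.0.1:12375"

docker info
```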

On each of the target machines, I enabled the Docker REST API by overriding the daemon’s startup command with: `ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375`
(Don’t worry, it’s protected with an SSH tunnel.)
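For completeness, that override lives in a systemd drop-in, roughly like this (the empty `ExecStart=` line is required to clear the packaged definition before redefining it):

```ini
# /etc/systemd/system/docker.service.d/override.conf (on each target)
[Service]
# Clear the unit's original ExecStart, then redefine it with the TCP socket.
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375
```

After editing it, `sudo systemctl daemon-reload && sudo systemctl restart docker` picks the change up.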

Until now, everything has worked flawlessly. I can use the same Dockerfiles, docker-compose.yml and shell scripts on my CI/CD machine to deploy to any one of these hosts. All I need to do is change the $DOCKER_HOST variable on the CI/CD machine to target the correct remote Docker server.

Now, however, one of my target machines requires a proxy to work (it is located on a different network).

Just like with $DOCKER_HOST, I would like to set the proxy settings inside a script on my CI/CD machine. But I don’t want to set them globally in the ~/.docker/config.json file of said CI/CD machine, because not all target machines deployed from it use this proxy. If you think about it, config.json is not something that “belongs” to the current CI/CD Docker client. It really belongs to the target machine.
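For context, the proxy settings in question sit in the client’s config.json roughly like this (the values are placeholders); as I understand it, the client then injects these variables into the containers it runs and builds:

```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://myproxy",
      "httpsProxy": "http://myproxy",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
```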

I am looking for a clean solution.
I am running Ubuntu 18.04.
Here is what I tried:

  • Setting ~/.docker/config.json on the target machine does not work, because the Docker client is running on the CI/CD machine.

  • Setting environment variables externally does not work (they are not carried over implicitly into the container).

  • Manually specifying the proxy for every docker command is out of the question, as I have ~40 different docker commands in my scripts.

  • Configuring the Docker daemon on the target machine in /etc/systemd/system/docker.service.d/override.conf with Environment="HTTP_PROXY=http://myproxy" does not work either (see the sketch after this list).
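For reference, that last attempt is a drop-in like the following (it can live in the same override.conf shown earlier; the proxy URL is as in my config). As far as I can tell, this only proxies the daemon’s own outbound traffic, such as image pulls; it does not put the variables into build containers, which would explain why it has no effect here:

```ini
# /etc/systemd/system/docker.service.d/override.conf (on the target machine)
[Service]
Environment="HTTP_PROXY=http://myproxy"
Environment="HTTPS_PROXY=http://myproxy"
```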

The only thing that seems to have an effect is editing ~/.docker/config.json on the target machine and running the Docker client locally there. However, that is not where the Dockerfiles normally reside; I would really like to use the remote API instead.

Have you ever considered using a Docker container as a build agent? In one of our older Jenkins deployment jobs we use the Docker agent to perform the deployments. You could work with variables in your job to determine the target Docker engine and whether a (forwarding) http_proxy should be used. Check out the sources in your container and let the build pipeline do its job…

Shifting the problem into a container will not magically solve it, but instead of messing with a build node, you will mess with a one-shot container. Clean slate on each execution 🙂
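A rough sketch of one way to wire that up (not necessarily how our Jenkins job does it; image tag, paths, and variable names are all placeholders). The idea is to mount a per-target client config, including its "proxies" section only where one is needed, into a throwaway CLI container; `--network host` lets the container reach an SSH tunnel bound to the host’s loopback:

```bash
# Hypothetical one-shot build agent on the CI/CD machine.
# configs/$TARGET/config.json would hold that target's client config,
# with a "proxies" section only for targets that need the proxy.
docker run --rm \
  --network host \
  -e DOCKER_HOST="tcp://127.0.0.1:12375" \
  -v "$PWD/configs/$TARGET":/root/.docker:ro \
  -v "$PWD":/workspace \
  -w /workspace \
  docker:cli \
  ./deploy.sh
```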

Does that make sense?

@meyay, this was going to be my next step if I couldn’t get anything else to work 🙂. I don’t like the thought, as this will create a very big build context: this Docker container’s build context would have to include the entire contents of the project directory.