Build container from private GitHub repo with local Dockerfile

Hello everyone,

The title speaks for itself: I have to create a Dockerfile for my company, and it must clone the private repo where all the server files are. Since I can't just clone the repo in the Dockerfile (I'd have to enter credentials), I discovered that you can build an image directly from a GitHub repo URL. But I don't want to do that (not immediately, at least), and I'd like to do it with a local Dockerfile instead. Is this possible?

Thanks for your responses!

This is possible. I would:

  1. create an SSH key pair,
  2. add the public key to GitHub,
  3. copy the private key to the container in your Dockerfile,
  4. use SSH to authenticate to GitHub and pull the repo.

This should let you pull both private and public repos via SSH.
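A minimal sketch of those four steps, assuming an Alpine base image and a hypothetical `your-org/server` repo (the key file name and paths are illustrative too):

```dockerfile
# steps 1-2 happen on the host / on GitHub:
#   ssh-keygen -t ed25519 -f deploy_key -N ""
#   then add deploy_key.pub as a deploy key on the GitHub repo

# steps 3-4 in the Dockerfile (deploy_key must sit in the build context):
FROM alpine:3.19
RUN apk add --no-cache git openssh-client
COPY deploy_key /root/.ssh/id_ed25519
RUN chmod 600 /root/.ssh/id_ed25519 \
 && ssh-keyscan github.com >> /root/.ssh/known_hosts \
 && git clone git@github.com:your-org/server.git /app
```

Keep in mind this bakes the private key into an image layer, which is exactly the risk discussed later in this thread.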

Thanks for your reply! I've heard of this solution, but I'd like to avoid it if possible, since it might be a little dangerous to leave the SSH key in the Dockerfile. Instead I'd like to try this solution:

Is this the official way to handle this situation?

I guess (1) makes the process relatively benign, as the key pair is only being used for this purpose, but (3) does make me feel uneasy.

+1 to Dan. I’m curious about how to do the same, but using docker-machine and a cloud service

So to do (3) for a local machine, one could put the key in the build context and then, in the Dockerfile:

COPY id_rsa /root/.ssh/id_rsa

(COPY can only read files inside the build context, and ~ isn't expanded in a Dockerfile, so the source has to be a context-relative path and the destination an absolute one.)

For docker-machine with cloud provider, I’m not sure how to accomplish “(3) copy the key to the container”

What we do is inject any sensitive material via environment variables. Since we have CI for our Docker images, those secrets are stored safely in the CI environment.

The build pipeline then passes those variables to Docker using build args. Inside the build, those secrets are either materialized into a file (echo $secret > file) that we later remove, or piped into the command if it supports it (echo $secret | …).

That also allows developers to use their own credentials when they build the image on their workstations: they use the same build args and specify their own creds.
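As a sketch of that build-arg flow (the arg name and the commands inside the RUN are made up; see the warning quoted below about build args showing up in docker history):

```dockerfile
FROM alpine:3.19
ARG REPO_PASSWORD
# materialize the secret, use it, and delete it inside a single RUN so the
# file never persists in a layer -- the ARG value itself is still visible
# through `docker history`, though
RUN echo "$REPO_PASSWORD" > /tmp/cred \
 && echo "fetch artifacts from the private repository here, using /tmp/cred" \
 && rm /tmp/cred
```

Built with something like `docker build --build-arg REPO_PASSWORD="$CI_SECRET" .`, where the CI environment (or the developer) supplies the variable.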


The build pipeline then passes those variables to Docker using args

Is that secure though @aleveille? From the Dockerfile reference that you linked to:

Warning: It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc. Build-time variable values are visible to any user of the image with the docker history command.

No, it's not. The process I describe is for "shared" secrets, if such a thing exists. What we can inject this way is typically read-only user credentials or something similar. This allows the image to pull artifacts from our private repository.

Since all our developers have read+write access to the repo, having a read-only user in the Dockerfile is of minimal risk in our case. Injecting the password at build time also allows for easy password rotation every now and then, so that if someone leaves the company, they can't use those shared credentials forever.

For more "secret" secrets, we're doing something else (which has its flaws), so we're looking into a secrets vault.


You don't want the SSH key to leave traces in the resulting image. If you ADD it during the build, or write a file with RUN echo […], it will linger in one of the layers, even if you do a RUN rm afterwards.

There are two ways to use a secret SSH key during the build and still get a clean image afterwards:

  • Squashing (running the build with the --squash flag) to create a single diff between the first and last layer, leaving out intermediate states.
  • Using multi-stage builds. You can add a private deploy key into an intermediate image, clone everything you need, and only pass the cloned repo directories to the final image.

You can read more here.
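A multi-stage sketch of the second option, assuming the same kind of deploy key in the build context and a hypothetical `your-org/server` repo:

```dockerfile
# stage 1: intermediate image that holds the deploy key and does the clone
FROM alpine:3.19 AS cloner
RUN apk add --no-cache git openssh-client
COPY deploy_key /root/.ssh/id_ed25519
RUN chmod 600 /root/.ssh/id_ed25519 \
 && ssh-keyscan github.com >> /root/.ssh/known_hosts \
 && git clone git@github.com:your-org/server.git /src

# stage 2: final image -- only the cloned files are copied over, so the key
# never appears in any layer of the image you ship
FROM alpine:3.19
COPY --from=cloner /src /app
```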

I wrote about how we solved it at Connected Cars, where we inject a GitHub token with docker build --build-arg to avoid it being committed to the container.
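For reference, a token-based clone along those lines might look like this (the token handling and repo URL here are my guesses, not necessarily what the linked post does):

```dockerfile
FROM alpine:3.19
RUN apk add --no-cache git
ARG GITHUB_TOKEN
# the token is supplied at build time and never written to its own file, but
# it is still visible via `docker history`, and the clone URL (token included)
# would be stored in /app/.git/config -- so drop the remote afterwards, or
# copy /app into a fresh stage with a multi-stage build
RUN git clone https://${GITHUB_TOKEN}@github.com/your-org/server.git /app \
 && git -C /app remote remove origin
```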
