Basic container concepts: How to structure a CI/CD pipeline agent

Hi,

I’m learning Docker by doing, so please forgive the basic questions.

If I were building a website, I understand that each “task” should be in its own container, e.g. NGINX, Apache, a database, and I could write a Compose file to “group” them into a single service.
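For example, a minimal Compose file along these lines is what I have in mind (image names and ports are just placeholders):

    services:
      web:
        image: nginx:latest
        ports:
          - "8080:80"
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example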

However, I have been tasked with creating a GitHub Actions runner, with Puppet’s PDK and client-tools, as a container, and I am struggling with some basic container concepts.

I am using my own private Nexus repo and have written my own Dockerfiles based on ones found around the Internet. I guess this, too, defeats the concept somewhat. I have done it to save waiting for images to download each time I use them, and as part of my learning.

Container 1

I have built a GitHub Actions runner image using this guide. It works, in that I see it registered in GitHub and I can use it in my pipeline; however, it doesn’t have the required tools on it, namely PDK. It has some environment variables that allow it to register with my GitHub account.
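Roughly, I start it like this (the variable names here are placeholders; the real ones come from the guide I followed):

    docker run -d \
      -e GITHUB_OWNER=my-org \
      -e GITHUB_REPOSITORY=my-repo \
      -e GITHUB_PAT=********** \
      my-nexus.example.com/github-runner:latest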

Container 2

I have created another image using Puppet’s PDK Dockerfile as a reference; however, I have replaced their FROM value with the “path” to my Container 1 (GitHub runner) image, and installed client-tools.
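The Dockerfile is something like this (the registry path is a placeholder, the package and repo names are from Puppet’s install docs as I understand them, and the steps assume an Ubuntu 20.04 base with wget available, so adjust the codename to suit):

    FROM my-nexus.example.com/github-runner:latest
    USER root
    # Puppet's tools repo provides PDK; the platform repo provides client-tools
    RUN wget https://apt.puppet.com/puppet-tools-release-focal.deb \
        && wget https://apt.puppet.com/puppet7-release-focal.deb \
        && dpkg -i puppet-tools-release-focal.deb puppet7-release-focal.deb \
        && apt-get update \
        && apt-get install -y pdk puppet-client-tools \
        && rm -rf /var/lib/apt/lists/*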

If I use the PDK example as-is, the entrypoint is set to the PDK executable, so when I try to start the container and look at the logs, I see it telling me how to use the pdk command. I get why it’s doing this, as I am not giving pdk any arguments. It feels as though this is designed to be used once per command, and looking at some of the examples, I need to run docker run -i -t puppetlabs/pdk:latest [some pdk command] each time I want to use it. This seems rather inefficient to me.
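For what it’s worth, I can get an interactive shell by overriding the entrypoint at run time (the image path is a placeholder):

    docker run -it --entrypoint /bin/bash my-nexus.example.com/pdk-runner:latest

but that feels like a workaround rather than the right design.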

Given the above, should I:

  1. Write my own Dockerfile that includes all the tools I need.

  2. Use Container 1 (GitHub runner) as my base and use the GitHub Docker action to run the PDK container on the fly? I think this is known as nesting containers.

  3. Build Container 2 from Container 1, but change the entrypoint to a shell, or perhaps re-use the same entrypoint as specified in Container 1 (see the sketch after this list)?

  4. Write a Dockerfile that utilises the multi-stage build feature?

  5. Use a different concept altogether?
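For option 3, I mean something like this at the end of Container 2’s Dockerfile (the entrypoint script path is a placeholder for whatever Container 1 actually uses):

    # Put back the runner's entrypoint instead of PDK's, so the
    # container starts the runner and pdk is simply on the PATH
    ENTRYPOINT ["/entrypoint.sh"]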

Container 1 has some environment variables. Should I declare these when running Container 2?
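In other words, do I still need to pass them at run time, like this (variable names are placeholders again)?

    docker run -d \
      -e GITHUB_OWNER=my-org \
      -e GITHUB_PAT=********** \
      my-nexus.example.com/pdk-runner:latest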

Any help or pointers would be greatly appreciated.

T. I. A.