Best way to build a parameterized container

I’m new to Docker and trying to build a reusable, parameterized container, and I was wondering what the best way to do this is. I know this question is a little philosophical, but please bear with me.

The container I’m trying to build is a PXE server that needs to take two arguments (a network IP address and a network interface name). For now, I’m aiming for a single container - so I’m not looking for a solution that uses a second container as a configuration provider (unless that solution is very simple).

Before continuing, let me define what I mean by “reusable files” below:

A file (Dockerfile or docker-compose.yml) is reusable if you can parameterize the resulting container(s) without changing the file’s contents.

While reading around on the internet and thinking about the problem, I came up with the following possibilities:

  1. Hardcode the parameters in the Dockerfile. This is the easiest solution, but the container/Dockerfile is not reusable (if the parameters differ in another environment).
  2. Use environment variables. This seems to be quite common.
  3. Use command line parameter passing.
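For illustration, option 2 might look like this on the Dockerfile side. The variable names `PXE_IP` and `PXE_IFACE` and the entrypoint path are names I made up, not an established convention:

```Dockerfile
FROM alpine

# Defaults that can be overridden at run time with `docker run -e`
ENV PXE_IP=192.168.1.1 \
    PXE_IFACE=eth0

COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```

The image would then be parameterized without touching the file: `docker run -e PXE_IP=10.0.0.1 -e PXE_IFACE=eth1 my-pxe-image`.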

For command line parameter passing, I would need to convert the parameters before handing them to the containerized application. I couldn’t find a way to do this in a Dockerfile alone, so I would need to write a wrapper bash script and use it as the entrypoint to do the conversion inside the container. However, this seems “complicated” to me and doesn’t sound like the Docker way of doing things. (Of course, I may be wrong about this.)

Also, the call to docker run for the container is quite… involved (i.e. it takes a lot of parameters for ports and networks). So I created a docker compose file. With this, the question shifts to:

How to parameterize a docker compose file?

The only solution I found was to hardcode the parameters in the docker compose file. I couldn’t find anything about passing parameters to docker-compose up.
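One mechanism that would fit my definition of reusable, if it is available in the Compose version in use, is `${VAR}` substitution: Compose replaces `${VAR}` in the file with values from the shell environment that runs `docker-compose up`. A sketch, with made-up service, image, and variable names:

```yaml
version: "2"
services:
  pxe:
    image: my-pxe-server          # made-up image name
    network_mode: host
    environment:
      PXE_IP: "${PXE_IP}"         # taken from the calling shell
      PXE_IFACE: "${PXE_IFACE}"
```

This would be invoked as `PXE_IP=10.0.0.1 PXE_IFACE=eth1 docker-compose up`, keeping the file itself generic.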

Based on my findings, my current conclusion is:

Dockerfiles should be reusable. To parameterize a container, create/copy a docker compose file. (Thus docker compose files can’t be reusable in this case.)

Would you agree? Or is there something I’m missing?

Reading through this, I think you’ve essentially answered your own questions.

Using an entrypoint script does seem to be the normal solution. If you look at, for instance, the official PostgreSQL container, its entrypoint script reads several environment variables to control first-time setup before starting the database for real.

You can pass through environment variables, which might be one option. Another option is that docker run commands are just commands, so you can write shell scripts that run containers.

What sort of solution were you hoping for?

Hmm. That’s pretty advanced for a first container. Most typical Docker containers provide services that run on top of TCP (and I’d guess a majority of those, on top of HTTP). You’ll be forced to confront Docker’s network setup pretty quickly, and the usual recommendation of docker run -p to expose a single listening TCP port on the host won’t work here.

I get the impression the “new” Docker networking setup is in a couple of ways a little “more normal” and I sort of suspect it would be possible to have an otherwise-empty PXE-booting container plus a server like what you describe for a test environment, but I’m not totally sure.

Thanks. I’ll have a look at this.

I like the challenge ;)

[quote=“dmaze, post:2, topic:21480”]
I get the impression the “new” Docker networking setup is in a couple of ways a little “more normal”
[/quote]

I’m not sure what you mean by this. I only started using Docker with 1.12, and with it I have already been able to get a PXE server container running.