DIND or not DIND? How to achieve this?

Hi all,

I’m currently busy “dockerizing” a Bash application I wrote years ago. My application contains a lot of Bash scripts; in short, the objective is to concatenate multiple Markdown files into one output file and convert that file to PDF/DOCX (using pandoc) or play it as a slideshow (using reveal.js).

Today, my existing application uses three Docker images:

  1. felixlohmeier/mermaid, so I can convert a Mermaid diagram into an image,
  2. pandoc/latex, so I can convert my final .md file to a PDF file, for instance,
  3. webpronl/reveal-md, to show the .md file as a slideshow.

This works fine.

Now, the dockerization. I’d like to be able to run a single docker run MyOwnApp ... command that starts my container and does all the magic. In one command.

From inside my MyOwnApp container, how can I call the Docker image that converts a Mermaid diagram into an image, and then continue running my own scripts?

I’ve read here and there that Docker in Docker (aka DinD) is probably not a good idea, and I’m fine with that, of course.

Does anyone have an idea how I can achieve this? Run my scripts, then run something like docker run --rm -v "$PWD":/data:z felixlohmeier/mermaid -s -p -w 600 -t mermaid.neutral.css myflowchart.mmd to generate an image, then continue my scripts and, once the output file is done, run the PDF conversion, e.g. docker run --rm --volume "$(pwd):/data" pandoc/latex README.md -o outfile.pdf.
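In other words, the whole build would ideally boil down to something like the following running inside MyOwnApp (a sketch reusing the commands above; myflowchart.mmd, README.md and outfile.pdf are just the example names from this post), the open question being how to make these nested docker run calls work from inside the container:

#!/bin/bash
set -e
# 1. Render the Mermaid diagram to an image
docker run --rm -v "$PWD":/data:z felixlohmeier/mermaid -s -p -w 600 -t mermaid.neutral.css myflowchart.mmd
# 2. ...my own concatenation scripts run here...
# 3. Convert the resulting Markdown file to PDF
docker run --rm --volume "$(pwd):/data" pandoc/latex README.md -o outfile.pdf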

Many thanks!

Docker is great when it comes to closed or distributed systems. However, this appears to be more of a pipeline approach, which I feel is not well supported by Docker.

I would recommend creating an image based on a Dockerfile that incorporates the relevant parts of the other images’ Dockerfiles.
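A rough sketch of what such a unified Dockerfile could look like (assumption: the pandoc/latex tag used here is Alpine-based; mermaid-cli additionally needs a Chromium/Puppeteer setup that is left out for brevity, and scripts/ plus build.sh are placeholders for your own Bash scripts):

FROM pandoc/latex:latest
# Add the Node-based tools on top of the pandoc image
RUN apk add --no-cache nodejs npm \
 && npm install -g @mermaid-js/mermaid-cli reveal-md
# Placeholder: copy your own Bash scripts into the image
COPY scripts/ /usr/local/bin/
ENTRYPOINT ["/usr/local/bin/build.sh"]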

Another option could be to create an image containing your orchestration scripts and run the container with a bind mount for docker.sock and the docker CLI binary (or, even better, download the generic static docker CLI binaries). If I remember correctly this is called Docker out of Docker (DooD), as the host’s Docker engine is used. Personally, I favor the unified image approach over the DooD approach.

Thanks for your reply, Meyay!

This was my idea too (i.e. implement each required step in my Dockerfile, adding Mermaid, Pandoc and reveal-md) so that my Dockerfile would be self-sufficient. Nevertheless, it wasn’t my “way-to-go” choice because I would need to extract from these three images everything I have to put into my own Dockerfile and, also, maintain it.

It would have been really convenient to just do a docker run to call up these images, without having to “integrate” them into mine. The size of my image would also have remained very small (just a few Bash scripts).

In the meantime, I fully understand that it’s not possible; my use case is too specific.

The point about having to maintain the image is valid.

Then you might take a look at Docker out of Docker, which allows you to execute docker commands in a container that drives the host’s Docker engine.

You can download the Docker version of your choice in your Dockerfile from the Index of linux/static/stable/x86_64/, extract the tar and remove everything extracted that is not needed to run the docker CLI client (I guess it’s just the docker binary) - of course all in a single RUN instruction to prevent unnecessary files from becoming part of an intermediate image layer.
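For illustration, such a RUN instruction could look roughly like this (the version number is only an example, and it assumes curl is available in the base image):

# Fetch the static archive, keep only the docker CLI binary, drop the rest
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-24.0.7.tgz -o /tmp/docker.tgz \
 && tar -xzf /tmp/docker.tgz -C /tmp \
 && mv /tmp/docker/docker /usr/local/bin/docker \
 && rm -rf /tmp/docker /tmp/docker.tgz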

When you run it as a container, make sure to bind-mount the socket with -v /var/run/docker.sock:/var/run/docker.sock.
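For example, running the orchestration image could then look like this (MyOwnApp and the /data mount are just the names used earlier in this thread):

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v "$PWD":/data MyOwnApp

Keep in mind that the nested docker run commands are executed by the host’s engine, so any -v path they pass must exist on the host, not inside the MyOwnApp container.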

Hello

More information from my side: I’ve decided not to reinvent the wheel, i.e. if a Docker image already exists for, in my case, the Mermaid conversion, the Pandoc conversion or playing a Markdown file as a slideshow, I don’t want to create one big Dockerfile with all that stuff (my scripts plus pieces from here and there so that the three needs are met in one big Dockerfile).

So, I’ve chosen to halt my script whenever a docker run command has to be fired by the user on the host machine. Here is an example: my script detects that Mermaid has to be run (e.g. the associated image is missing), writes a “Please run …” message on the console with the complete command to run (as illustrated below), and then invites the user to rerun my script.

docker run --rm -v $(pwd):/data:z -u $UID:$GID felixlohmeier/mermaid class.mmd --outputDir images
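A minimal sketch of that detection logic (the file names and the missing-file check are just examples to illustrate the idea):

# Halt and ask the user to run the Mermaid container when the
# generated diagram is missing (class.mmd / images/class.png are examples)
if [ ! -f images/class.png ]; then
    echo "Please run the following command, then restart this script:"
    echo 'docker run --rm -v $(pwd):/data:z -u $UID:$GID felixlohmeier/mermaid class.mmd --outputDir images'
    exit 1
fi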

And I do the same for pandoc and reveal-md, so, yeah, the user experience isn’t the best (“run only one command and it’s done”), but my users are just me and my teammates.

My own Docker image only contains my scripts, so it’s fully under my responsibility and I don’t need to maintain things outside my scope.

If somebody else has a better idea, I’ll be happy to read and learn.

It would be really nice to be able to call other Docker images from inside a Docker container, but this is probably a limitation of the technology (a Docker container can only access its own filesystem, not the host, except through mounted volumes).

Thanks for your time!

[EDIT] In fact, this is only OK in interactive mode, i.e. when the user is at their computer. In a CI pipeline, for instance, this approach won’t work (since it is fired on, e.g., a GitLab runner, not the user’s machine). [/EDIT]