A bit of a Docker newbie here: I am trying to work out a solid way of creating a very flexible C build environment, with a minimal image for each tool needed.
The obvious way is to have a one-shot (i.e. not a service) Alpine image where I install make, gcc, and everything else needed, pass the source in via a volume, and do my transformations (source to binary). Everything is very easy that way.
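As a concrete sketch of that one-shot approach (the image name and Alpine version here are just placeholders; the tool versions are whatever Alpine's repositories ship):

```dockerfile
# Hypothetical one-shot build image
FROM alpine:3.19
# build-base pulls in gcc, make, libc headers, etc.
RUN apk add --no-cache build-base
WORKDIR /src
ENTRYPOINT ["make"]
```

Built with something like `docker build -t c-build .`, it could then be run as `docker run --rm -v "$PWD":/src c-build`, leaving the binaries in the mounted source directory.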
However, now I am thinking that if I want to maintain X versions of make alongside Y versions of gcc, I would have to create X*Y Dockerfiles to cover all the combinations.
That led to the idea of maybe having an Alpine image for each version of make and an Alpine image for each version of gcc, and then combining the two somehow. This would cut the number of Dockerfiles down to X+Y.
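One way of "combining the two" that avoids the containers having to talk to each other at runtime is a multi-stage build that copies the tools out of the per-tool images. This is only a sketch: the `my-make`/`my-gcc` image names and install paths are hypothetical, and copying a compiler like this only works if all of its support files and shared libraries are copied along with it (or are already present in the base image):

```dockerfile
# Versions are selectable at build time, so one Dockerfile
# covers every make/gcc combination
ARG MAKE_VER=4.4
ARG GCC_VER=13
FROM my-make:${MAKE_VER} AS make-src
FROM my-gcc:${GCC_VER} AS gcc-src

FROM alpine:3.19
COPY --from=make-src /usr/bin/make /usr/bin/make
COPY --from=gcc-src /usr/local/gcc /usr/local/gcc
ENV PATH="/usr/local/gcc/bin:${PATH}"
```

A particular combination would then be picked with e.g. `docker build --build-arg MAKE_VER=4.3 --build-arg GCC_VER=12 .`, so only the X+Y per-tool Dockerfiles need maintaining.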
Another reason pushing me toward this idea is that people make it sound like there should be one process per container.
However, I have hit a dead end because I am not sure how to call gcc in the gcc container from the make container.
Below are some of my thoughts, which I haven't actually implemented; feel free to rate them good or bad.
Running sshd in the containers. But this is not a good design according to a few articles, and the containers would no longer be minimal.
Using a user-defined network. I understand that this lets the containers find each other by IP, but I don't understand how to actually invoke gcc after that.
Running a web server where port X runs the gcc process. But again this seems like overkill, and the first option seems more lightweight than this one.
Using Docker-in-Docker (I am not sure how I would achieve this).
Doing the very unsafe procedure of publishing the Docker socket into a container. I am not quite clear on its implications, but everyone seems to think this is a very bad idea.
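For context on that last option, this is roughly what the socket-mounting variant would look like. The image names are hypothetical, the make image would need the docker CLI installed, and the big caveat is loudly stated in the comment:

```shell
# DANGEROUS: mounting the Docker socket gives the container
# root-equivalent control of the host's Docker daemon.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD":/src \
  my-make:4.4
# A make recipe inside could then launch a sibling container, e.g.:
#   docker run --rm -v "$HOST_SRC":/src my-gcc:13 gcc -c /src/foo.c -o /src/foo.o
# Note the -v path must be a *host* path (the socket talks to the host
# daemon), so both containers have to mount the same host directory.
```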
Any thoughts would be appreciated.