Compiling C++ code in a container

Hi all,

I’m new to Docker and looking at deploying some C++ programs in containers. These are currently installed on a server, where they run as cron jobs performing various maintenance tasks on our systems.

Since the code is in a git repository, I’m guessing that building the Docker image will involve installing GCC and git, cloning the repo, compiling the code, and then installing it, all inside the container during the build process.
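
Something like this rough sketch is what I have in mind (the repo URL and program name here are just placeholders):

```
# Everything happens inside the image at build time
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y build-essential git
# Placeholder repo URL and binary name
RUN git clone https://example.com/our-maintenance-tools.git /src
RUN cd /src && make && make install
CMD ["/usr/local/bin/maintenance-task"]
```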

My two questions are: is this the best way to run a C++ program from source in a Docker container, and if so, is there any benefit to uninstalling GCC and deleting the source code after the program has been compiled and installed?

I’m not entirely familiar with how the caching system works, so I’m unclear whether uninstalling the compiler would result in a smaller image, or whether it would just add more layers and actually make it larger (i.e. you’d still download the filesystem layer with the compiler present, plus another overlay that sits on top of it and hides the compiler again).

I’m thinking the alternative is to have a container dedicated to building the program into a (for example) .deb file, then the “real” container can just install the .deb with apt-get/dpkg and not worry about compilers and git.

Which method would be the better option?

Yes, this. Depending on your needs, it may be enough to save the filesystem tree produced by the make install DESTDIR=... step, or even the single binary produced by the build. (Statically link it, and it can be a FROM scratch container on its own; this is a popular setup for Go-based containers.)
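
As a minimal sketch of the FROM scratch variant (the binary name is made up, and this assumes you’ve already copied the statically linked binary out of the build container):

```
# Runtime-only image: nothing in it but the one static binary
FROM scratch
COPY mytool /mytool
ENTRYPOINT ["/mytool"]
```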

The layering system does exactly what you’re afraid of here, and if you’re space-sensitive, the C++ build toolchain is a large space cost. You’d wind up with, say, a layer for your base distribution, a big layer with g++/libc6-dev/build-essential/… installed, a layer that adds your application source, and a layer that builds and installs your application; even if you added another layer on top of this that deleted the toolchain, all of the earlier layers would still be part of the image. (Try docker history.)
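
To sketch the trap (package names illustrative):

```
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y build-essential   # big toolchain layer
COPY . /src
RUN cd /src && make && make install                        # build/install layer
RUN apt-get purge -y build-essential && apt-get autoremove -y
# ^ this layer only masks the toolchain files; the big layer
#   above is still shipped as part of the image
```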

For advanced Docker points, you could build a container that builds your application, then take the result out and install it into a separate container that runs it. A good reason to do this (if you’re talking about .deb packages) is to have a straightforward way to build both Ubuntu 14.04 and 16.04 packages on a single system. (Even if that system isn’t running Ubuntu at all.)
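
Roughly, with made-up image and path names:

```
# Build an image whose only job is to produce the .deb,
# then copy the artifact out of a container created from it
docker build -t mytool-build -f Dockerfile.build .
docker create --name extract mytool-build
docker cp extract:/out/mytool_1.0_amd64.deb .
docker rm extract
# The runtime image's Dockerfile can now just COPY and dpkg -i the .deb
```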

That’s very informative and completely answers all my questions. Many thanks for your help!

I think this is what multi-stage builds are for: you compile and generate the app in the first stage, then in the second stage you use/execute the app, and only the second stage becomes part of the final image.
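
A minimal sketch, with placeholder names:

```
# Stage 1: compile the app
FROM gcc:6 AS build
COPY . /src
RUN make -C /src && make -C /src install DESTDIR=/out

# Stage 2: runtime; only this stage ends up in the final image
FROM ubuntu:16.04
COPY --from=build /out/ /
CMD ["/usr/local/bin/mytool"]
```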

Kevin McManus
www.launchworks.com


How is cross-building handled in Docker (x86/ARM)? Say, for C++ source targeting two different architectures?

One image can’t target multiple architectures; a Dockerfile has only a single FROM base image, so you’d have to build a separate image for each architecture.

As far as I know, the official gcc image supports all target architectures, so if you need to cross-compile, I assume you can run gcc with the same parameters as you normally would.

Good question. Need to look into this.

As long as you install the appropriate cross-compiler versions of GCC, there would be no problem cross-compiling inside a Docker container. It would work the same way as cross-compiling on the host.
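
For example, a sketch assuming a Debian/Ubuntu base and an ARM hard-float target:

```
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y g++-arm-linux-gnueabihf make
COPY . /src
# Invoke the cross compiler exactly as you would on a bare host
RUN mkdir -p /out && arm-linux-gnueabihf-g++ -static -o /out/mytool /src/main.cpp
```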

This would actually be easier than my original problem: my issue was how to clear out the compiler cruft when the time comes to run the resulting program, but if you’re cross-compiling then you won’t be running the resulting program inside Docker anyway, so that won’t be an issue.
