How to build a C image in a container

Hi, I am a newbie with Docker. I read the get-started docs at docs.docker.com and found the example on how to build, tag, push, and pull the image for the Python “Hello World” code. Now I have a C program and would like to do the same thing. How should I do it? Is there any doc on this?

Thanks.

I would highly recommend setting up your host system to be able to build the application, then using COPY in a Dockerfile to bring in the resulting binary. If you have shared-library dependencies, make sure to also install those in the image via your Linux distribution’s package manager, with RUN apt-get install ... or RUN yum install .... If the application is more than a single binary, hopefully its build system supports something like the make install DESTDIR=… convention, which will let you make a tar file of the installed application that you can then ADD.
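For example, a minimal sketch of that kind of Dockerfile (the base image, the libsqlite3-0 runtime library, and the hello binary are all placeholder names, not anything from your project):

```dockerfile
FROM ubuntu:16.04

# Install whatever shared-library dependencies the binary needs
# (libsqlite3-0 is just an example package).
RUN apt-get update \
 && apt-get install -y libsqlite3-0 \
 && rm -rf /var/lib/apt/lists/*

# Copy the binary you built on the host into the image
COPY hello /usr/local/bin/hello

CMD ["hello"]
```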

I would not recommend trying to add a C compiler to your Docker image, because (a) the full toolchain is likely to be much larger than your application and (b) you’ll also be distributing your source code with the application.

(The very new multi-stage builds would fit well here: you could have a first stage that installed a toolchain, COPYed in the source tree, and built it, and then a second stage that took only the binary out of it. But it requires a very new Docker, and in my experience the feature hasn’t been out long enough that an arbitrary externally-maintained Docker environment like a cloud-hosted CI system is especially likely to have it.)
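A sketch of what that could look like, assuming the official gcc image and a placeholder hello.c (multi-stage builds need Docker 17.05 or newer):

```dockerfile
# Stage 1: build with a full toolchain
FROM gcc:7 AS build
WORKDIR /src
COPY hello.c .
RUN gcc -o hello hello.c

# Stage 2: copy only the binary into a small runtime image
FROM debian:stretch
COPY --from=build /src/hello /usr/local/bin/hello
CMD ["hello"]
```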

Thanks so much for your quick response, David. As I said, I am new to Docker, so are there some documents or examples that I can follow, similar to those on docs.docker.com?
Thanks,
Han

I’m sorry, but your statements make many assumptions, and they presume one is not following best practices (all of which is WAY beyond the scope of his question).

[quote=“dmaze, post:2, topic:37668”]
I would not recommend trying to add a C compiler to your Docker image, because …the full toolchain is likely to be much larger than your application…you’ll also be distributing your source code with the application.[/quote]

He did not ask this. Yet you chose not to directly answer his question, and instead presented this oh-so-limited perspective. Sorry, but where I work (and among everyone I know), people prefer to set up build chains in a container, just what you so strongly warn against.

You should Google this topic and understand it, but basically I’ll give you a summary: add the toolchain to a Docker container. Then the docker run command MOUNTS (a bind mount) a directory on the host OS, say ~/code-out/. Have your builds put the compiled code THERE, and it’s never IN the container (see the sketch below).
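A rough sketch of that workflow; the image name my-build-env, the host directories ~/code and ~/code-out, and hello.c are all placeholders:

```dockerfile
# my-build-env: toolchain only, no application code baked in
FROM ubuntu:16.04
RUN apt-get update \
 && apt-get install -y build-essential \
 && rm -rf /var/lib/apt/lists/*
WORKDIR /src
```

```shell
docker build -t my-build-env .
# Bind-mount the source tree and an output directory from the host;
# the compiled binary ends up in ~/code-out, never inside the container.
docker run --rm \
  -v ~/code:/src \
  -v ~/code-out:/out \
  my-build-env \
  gcc -o /out/hello hello.c
```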

This is what the popular pbuilder system did, way before Docker was cool: recreate a “clean-room” build environment which is not influenced by the host OS.

Of course, one wouldn’t distribute their build container like you have assumed he would. You just wouldn’t.

He doesn’t specify his desired manner of distribution. Why rail against using containers because he MIGHT distribute a container and MIGHT include unwanted source code in it? The same far-fetched scenario exists with tarballs, RPMs, and debs.

If you’re assuming he wants to distribute his C code in a minimal container, with nothing else (no source code, no build environment), then he should have been told to do just that. A runtime container can accept the resulting binary, as in the sketch below. Yes, that means more than one container, but that’s a VERY common use case. See Docker Compose…
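For instance, a runtime-only Dockerfile might look like this (a sketch; it assumes the hello binary produced by the build container has been copied into this Dockerfile’s build context):

```dockerfile
# Runtime image: just the compiled binary, no toolchain, no source
FROM ubuntu:16.04
COPY hello /usr/local/bin/hello
CMD ["hello"]
```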

The guys I know who built their dev env ON their host OS are still stuck on Trusty (Ubuntu 14.04, released in 2014) because of that decision. Take a few extra minutes to set your build system up right, and you have something both portable and completely independent of the host OS. Exactly what Docker was made FOR. :slight_smile: