CI Best Practices: Building with docker build versus docker run

Hello all,

I’m currently doing some cleanup and refactoring of an open source project’s CI system (this one). Our current setup is probably a common one: we have Dockerfiles that create an image containing our build environment; we then run a container from that image, map in our source directory as a volume, and run our build commands.
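Concretely, the flow looks roughly like this (the image, path, and script names here are just placeholders):

docker build -t build-env -f ci/Dockerfile.build-env .
docker run --rm -v "$PWD":/src -w /src build-env ./ci/build.sh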

This generally works well, but it means that if you have a set of complicated build commands you’ll need a script to actually perform the build within the container. You’ll also likely need a script to build the image and run the container. That’s fine for CI, but if a user wants to reproduce the results locally they’ll have to understand how those scripts work before they can, for example, run a unit test to reproduce a CI failure.

I was wondering if it’s common practice to do the entire build at the image construction step, without even running a container. Instead of mapping in the source as a volume, you could ADD (or COPY) it into the image and run the build commands in the Dockerfile. You’d still have to start a container at the end to copy artifacts out, but the advantage is that the only thing a user would need to do to reproduce CI results is run a ‘docker build -f’.
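Concretely, what I have in mind is something like the following (the base image, paths, and targets are invented for illustration). The Dockerfile does the whole build:

FROM build-env
WORKDIR /src
COPY . .
RUN make && make test

and CI (or a developer) runs:

docker build -f Dockerfile.ci -t project-ci .
docker create --name extract project-ci
docker cp extract:/src/out ./out
docker rm extract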

One concern I’d have with this approach is that continually building images on the CI server would eventually fill the disk, exhaust the inodes, etc. (possibly even with periodic image/container cleanup). Is this a valid concern with modern Docker, or does it clean up well after itself with most storage drivers today?

I’ve built basically this exact system before, and it was…fine. Maybe not with that many Dockerfiles; if this image only ever lives in your build environment, it might be more convenient to have one really, really big image with all the tools in it. If you can get the build down to essentially

docker run --rm -v "$PWD":/build -w /build mybuildimage make

that’s reasonably comprehensible.
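For completeness, mybuildimage here is just an image with the whole toolchain baked in, something along the lines of this (the package list is obviously invented):

FROM ubuntu:22.04
RUN apt-get update && \
    apt-get install -y build-essential cmake git && \
    rm -rf /var/lib/apt/lists/*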

I think having the same toolchain readily available to developers is a plus, but making it consistently available is tricky. (“Build inside Docker” introduces a giant raft of permissions and editor integration issues; you could use something like Ansible to install build dependencies, and then, say, Packer to build your CI image based on that, but that’s adding another tool with another complicated invocation.)
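One common partial mitigation for the permissions side of that (not a complete fix) is to run the container as your host UID/GID, so that files written into the mounted source tree aren’t owned by root:

docker run --rm -u "$(id -u):$(id -g)" -v "$PWD":/build -w /build mybuildimage make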

IMHO having multiple stages of Dockerfiles is pretty reasonable: in stage 1 you have a full toolchain available and build your application; in stage 2 you only have a runtime environment and the container is pretty minimal. Recent versions of Docker (17.05 and later) even have a built-in feature for exactly this: multi-stage builds.
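A skeletal multi-stage Dockerfile, with image names and paths invented for illustration:

# stage 1: full toolchain, compile the application
FROM debian:bookworm AS builder
RUN apt-get update && apt-get install -y build-essential
WORKDIR /src
COPY . .
RUN make

# stage 2: minimal runtime image containing only the built artifact
FROM debian:bookworm-slim
COPY --from=builder /src/bin/myapp /usr/local/bin/myapp
CMD ["myapp"]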

…put the Dockerfile in the source tree, but yes, this is sensible and in the general spirit of Docker. (As common as it is, I generally think using docker run -v to inject your source code into a container is not a best practice.)

It’s just like building anything else on your CI server…yes, it will eventually consume all available space if you don’t clean up after it. Docker has it a little worse because its storage lives under /var/lib/docker, outside the workspace your CI tool normally cleans. I don’t actually know how well Jenkins manages this (its Docker plugin may clean up built images, but I’m not sure). Last time I set this up myself we needed a cron job that ran docker rm and docker rmi to get around it.
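These days a scheduled docker system prune does roughly what that cron job did; for example (the retention window is arbitrary):

# remove stopped containers, unused images, networks, and build cache older than 72 hours
docker system prune -af --filter "until=72h"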
