Building an application in a separate Docker container

I’m thinking out loud, so please tell me where my thinking doesn’t make sense:

We currently have two Dockerfiles per project. The first we use as a “build environment”: it has things like gcc, the *-dev libraries and so on, so the image is quite large.

The second is a stripped-down production/runtime image: basically just Alpine, a few dependencies, and the application artifact built in the first container COPYed in.

In our CI pipeline we:

  • docker build the build-env image, which contains the full build toolchain and the app’s source code.
  • docker cp the binary artifact out of a container created from that first image.
  • docker build a second, Alpine-based image and COPY the binary artifact in.
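In shell terms, that pipeline looks roughly like this (image, container and path names are just placeholders for whatever your project uses):

```sh
# 1. Build the fat "build environment" image with the toolchain and the source
docker build -t myapp-build -f Dockerfile.build .

# 2. Create a container from it and copy the built binary out to the host
docker create --name myapp-extract myapp-build
docker cp myapp-extract:/src/myapp ./myapp
docker rm myapp-extract

# 3. Build the slim Alpine-based runtime image that COPYs the binary in
docker build -t myapp -f Dockerfile.run .
```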

The second image is tiny and great for uploading to our prod servers.

Our problem is that this process doesn’t play well with how docker-compose works: Compose expects a single Dockerfile that takes the source code and produces a runnable image.

We’d like to minimise surface area for potential attackers, which is why we don’t have unnecessary things like gcc in our prod images. Is this an unnecessary worry? Does our workflow suck? Any thoughts / opinions would be very welcome!


This is exactly where I’m stuck as well. I have a Docker “build” image into which I copy my source code and install the development toolchain, so I can compile the source into a build artifact. Next I want to get that artifact into a second Docker image that just runs the application, without any of the development-tool cruft.

I thought volumes were the answer: I could define “build” and “serve” services in docker-compose.yaml and use a named volume to share the compiled code from the build container with the serve container. This doesn’t work, because updating the “build” container’s image doesn’t update the files in the named volume, so the “serve” container always sees the old code. I can hack around it by destroying the named volume and letting Docker re-create it, but I don’t see how that step would fit into a deployment pipeline.
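For reference, the kind of setup I was trying looks roughly like this (service, volume and path names are made up):

```yaml
version: "3"

services:
  build:
    build:
      context: .
      dockerfile: Dockerfile.build
    volumes:
      - artifacts:/output   # the build writes the compiled code here

  serve:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - build
    volumes:
      - artifacts:/app      # serve reads whatever is in the volume -- possibly stale

volumes:
  artifacts:
```

As far as I can tell, Docker only copies an image’s files into a named volume when the volume is first created and empty, which would explain why rebuilding the build image never refreshes it.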

I have a feeling we are both taking a fundamentally incorrect approach, because none of my googling is turning up anything helpful. Hopefully someone can point us in the right direction.

Life got a lot easier when I just decided to have the build happen inside my serve container. I suppose you could add another layer to serve to purge the build tools and clean up the image.
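If you go that route, one caveat: a later “cleanup” layer doesn’t actually shrink the image, because the earlier layers still contain the tools, so the install/build/purge has to happen in a single RUN instruction. A minimal sketch for a hypothetical C app on Alpine:

```dockerfile
FROM alpine

WORKDIR /src
COPY . .

# Install the toolchain, build, and remove the toolchain again, all in one
# RUN instruction so the tools never end up in a shipped layer.
RUN apk add --no-cache --virtual .build-deps gcc musl-dev make \
 && make \
 && cp myapp /usr/local/bin/myapp \
 && apk del .build-deps

CMD ["myapp"]
```

Even then, the source copied in by COPY still lives in an earlier layer, which is part of why the separate build image exists in the first place.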

I saw a suggestion somewhere that you could have your build container upload the build artifact to an external server during the build process (e.g. push it to S3) and then pull that artifact into the service that requires it. You’d have to handle the authorization yourself, though, and come up with some convention for the case where multiple builds are running simultaneously. This doesn’t seem like a reliable solution to me.

Hi,

I’m stuck in the same place as well.

In my case I have a React app made with create-react-app that gets built with Node.js and served with NGINX. The dependency install and the app build (npm run build) are done in a Dockerfile.build, and the resulting artifacts are fed to a Dockerfile that serves the app with NGINX.
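For concreteness, the two files look more or less like this (npm/lockfile details omitted; the build/ output directory is the create-react-app default):

```dockerfile
# Dockerfile.build -- install dependencies and produce the static bundle
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build        # output lands in /app/build
```

```dockerfile
# Dockerfile -- serve the bundle that was copied out of the build container
# (e.g. via docker cp) into ./build on the host
FROM nginx:alpine
COPY build /usr/share/nginx/html
```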

This really does not play well with docker compose because it expects a single Dockerfile.

Any help here?

+1, I have this problem too, hope there will be a solid solution!

Great news, everyone! PRs adding multi-stage builds have landed in Docker, and they solve this exact problem. Here’s a helpful blog post explaining how they work: http://blog.alexellis.io/mutli-stage-docker-builds/
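For anyone finding this thread later: multi-stage builds (Docker 17.05+) let you collapse the Dockerfile.build / Dockerfile pair into a single Dockerfile, and only the final stage ends up in the image you ship. A minimal sketch for the Node + NGINX case above (stage name and paths are illustrative):

```dockerfile
# Stage 1: build environment with the full toolchain
FROM node:alpine AS builder
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

# Stage 2: slim runtime image -- only the built artifacts are copied across
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
```

Because it’s a single Dockerfile, docker-compose can build it directly with an ordinary build: entry, which also solves the Compose problem mentioned earlier in the thread.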