Compile and copy to image, or copy source to container and compile on build?

OK, let’s assume we have a web app that can be ‘compiled’, such as minifying CSS and all that good stuff.

Is the general consensus to:

  1. Do the build on the developer’s machine (e.g. npm compile) and then have the Dockerfile copy /dist (the result of the compile) into the image, or…
  2. Copy the full app’s /src to the image and run ‘npm compile’ during the image build (both options are sketched below)
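
For concreteness, here are rough sketches of the two options, assuming a Node project whose package.json defines a compile script (so npm run compile produces /dist); the base images and paths are illustrative, not prescriptive.

```dockerfile
# Option 1: compile on the developer's machine first (npm run compile),
# then the image only receives the finished output; no source, no toolchain.
FROM nginx
COPY dist/ /usr/share/nginx/html/
```

```dockerfile
# Option 2: copy the full source into the image and compile during the build.
FROM node
WORKDIR /app
COPY package.json ./
RUN npm install
COPY src/ ./src/
RUN npm run compile   # assumes a "compile" script in package.json
CMD ["npm", "start"]
```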

If I’m thinking of going from GitHub -> running app as quickly as possible (not that this is always the goal), then I want to be able to do git clone xxxx && docker up, so that means option 2, I suppose.
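
Assuming ‘docker up’ here means docker-compose up, a minimal docker-compose.yml with a build: entry is enough for that clone-and-run flow (the service name and ports are made up):

```yaml
# docker-compose up will build the image from the local Dockerfile
# (option 2) if it doesn't exist yet, then start the container
web:
  build: .
  ports:
    - "8080:80"
```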

I guess what I’m struggling with is: does anyone have the notion of a ‘build’ container, a container whose sole job is to perform the build of the application and then put the result in a VOLUME for other containers to use? Most of the examples I see revolve around containerizing something that is ready to run rather than dealing with ‘source’.

For some components, that’s what I do. The result gets put in the current directory, which then becomes the input to a second docker build step. Then our actual runtime containers don’t need to have, for instance, a full C toolchain just to run Python libraries that happen to have C extensions.
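
A minimal sketch of that two-step flow, using the Python-with-C-extensions case as the example; the image names, Dockerfile.build, and the dist/ directory are invented for illustration:

```dockerfile
# Dockerfile.build: the 'build' container; the full python image already
# ships with a C toolchain, which the runtime image below won't need
FROM python:3
RUN pip install wheel
WORKDIR /src
COPY . .
# build wheels when the container *runs*, into a directory we volume-mount
# so the output lands on the host
CMD ["pip", "wheel", "-r", "requirements.txt", "-w", "dist/"]
```

```
docker build -f Dockerfile.build -t myapp-build .
docker run --rm -v "$PWD/dist:/src/dist" myapp-build   # wheels land in ./dist
docker build -t myapp .                                # the second build step
```

```dockerfile
# Dockerfile: the runtime image installs the prebuilt wheels; no compiler needed
FROM python:3-slim
COPY dist/ /tmp/dist/
RUN pip install /tmp/dist/*.whl
```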

I don’t feel like there’s a consensus. On the forum I see a lot of option #2, but that depends on being willing to distribute your application’s original source as part of your image. (I also see the variant where the image doesn’t actually contain the application, just a runtime of some sort, and relies heavily on docker run -v to add the application in at run time.)
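
That variant might look something like the following, with a stock nginx image serving whatever the host-side build produced (the paths are illustrative):

```
docker run -d -p 8080:80 -v "$PWD/dist:/usr/share/nginx/html:ro" nginx
```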

If you’re used to having a more involved build process, option #1 (with some help outside the Dockerfile) feels fairly natural.