Docker Community Forums


Why multi stage?


(Tioluiso) #1

Hello there

I have tried to search for this and haven’t found anything, so sorry if it’s a duplicate. Here it goes:

I am a .NET developer. New to the docker world.

When it comes to creating an image for my application, my approach would be to compile and run unit tests on the CI platform (outside any container), then create an image from the build output, push that image, then deploy the app and run some integration tests… You get it. That image wouldn’t use any SDK image, just the bare runtime.

Then I read that Docker supports multi-stage builds: a strategy where you first use an image with the full SDK to do the build and unit tests inside a container, then create another image based on the bare runtime, copying the binaries over from the previous stage.
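For anyone finding this thread later, a minimal sketch of what such a multi-stage Dockerfile might look like for a .NET app (the image tags, paths, and `MyApp.dll` name are placeholders, not from this thread):

```dockerfile
# Build stage: full SDK image, used only for restore, test, and publish
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet restore
RUN dotnet test          # unit tests run inside the build stage
RUN dotnet publish -c Release -o /app/publish

# Runtime stage: bare runtime image; only the published output is copied in
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Only the final stage ends up in the pushed image, so the SDK and intermediate build artifacts never reach production.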

I don’t get it. Is it supposed to be better than my approach? I believe the output would be the same, with the same size. And I believe my approach to be way easier.

Please tell me: what am I missing?

Thanks a huge lot


(Think) #2

In your approach you need to install the SDK on the CI platform. And if there are several .NET framework versions installed, it’s always a guess which one is actually used for the build and tests.
With multi-stage builds you don’t need to install anything on the CI platform, and the SDK image supports exactly one specific .NET version, so you can be sure it works with that version and then use exactly the same version for your production image.


(Tioluiso) #3

Wow. First of all, thanks for the fast response.

Yup. What you say is true. So by going multi-stage, I would kinda abstract myself from the CI environment… Which is nice.

On the other hand, the CI environment would need to get some other artifacts from the build container, like the build log, test results, code coverage… And all of that would be more complex, no?
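Getting artifacts like test results back out is doable, though. One common pattern is to build only up to the build stage and copy files out of it; a rough sketch (the stage name `build`, the `myapp-build` tag, and the `/src/TestResults` path are assumptions that depend on your Dockerfile):

```shell
# Build only up to the 'build' stage, where tests ran
docker build --target build -t myapp-build .

# Create a stopped container from that image and copy artifacts to the CI workspace
docker create --name extract myapp-build
docker cp extract:/src/TestResults ./TestResults
docker rm extract
```

With BuildKit enabled, `docker build --output` can also export files from a stage directly to the local filesystem, which avoids the temporary container.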


(David Maze) #4

:+1:

(A lot of Linux images don’t work this way: they do something like apt-get install build-essential that brings in a full C toolchain, which might be needed at build/install time, but definitely isn’t needed at runtime. This makes the images huge, as you might expect.)

How do you get a setup with a consistent SDK and library dependencies? There are a lot of ways to do this; I gather in the Windows world that having a dependency on a particular version of Visual Studio is a strong enough statement that you can count on the whole company having the same build environment and upgrading is a major endeavor. You could use a tool like Chef or Ansible to configure individual desktops and build systems; you could have a standard Vagrant VM image that bundled it all up.

Multi-stage Docker builds are another way to do the same thing you’re already doing, but fully in Docker space.

I think you’ve got the whole picture, and if what you’re doing now works, IMHO you’re following best practices for building lean images.