Something I am just not getting

Hi Everyone,
I have spent a few days trying to “really” understand docker.
I have done the tutorials and had a look through docker-hub etc.

I am pretty confident that I can create images, run containers, and create and use a Dockerfile.
But one thing I am not getting is the mandatory requirements of an image / container.
I am not sure if I am just not reading something correctly, or if it actually isn’t mentioned; if it’s the latter, there could be a documentation update to be had in all this.

Let me give you two parts - what I understand - what I am trying to do;
You have a host OS - let’s say Fedora 24.
Unlike VMware, which needs a full OS install for both the host and the guest, wasting resources, Docker doesn’t. In the tutorials, the diagrams do not show any OS being required “in” the container; they just show the container “on top” of the host OS.

So when creating an image;
Do I have to first load an OS as my base image?
Let’s assume that I want a container for my IDE. So I can preconfigure it - install the prerequisites etc - have it a nice tidy box and then share it around my team - so we all have the same tools / set up in the same way.

My container needs the Java 8 JDK, eclipse, Subversion and a few others.

I can manually load all of these, and subsequently can script them into a Dockerfile, too.
I can mount a “workspace” volume from the Host through the container etc.
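A minimal sketch of what that Dockerfile might look like (the base image, package names, and Eclipse location are placeholders I’m assuming, not a tested recipe):

```Dockerfile
# Hypothetical sketch: base image and package names are assumptions
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y \
        openjdk-8-jdk \
        subversion
# Eclipse is usually unpacked from a tarball; this path is illustrative
COPY eclipse/ /opt/eclipse/
# Mount point for the shared workspace volume
VOLUME /workspace
CMD ["/opt/eclipse/eclipse", "-data", "/workspace"]
```

(Running a GUI like Eclipse from a container also needs the host’s display passed in somehow, which is its own topic.)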

What I am just not getting is;
(again via the tutorials / all the examples I have found on the “interweb”)
There seems to always be a;
FROM “insert favourite Linux flavour here”

Why is this needed, if the container is using the host OS?
and if it is needed - (which it seems to be)…

Then I think it needs to be made a little clearer and explicitly explained in the documentation,
the self-paced tutorials, and the diagrams, too.

E.g. do the guest and host OSs need to match? (It doesn’t seem like they do.)
Does it make sense to use the “smallest / lightest” host or guest (or both) OS?

As always - thanks!

“OS” isn’t quite the right term here. Break that down into two parts. You have the Linux kernel, and whatever drivers and hardware support that provides; all of that is shared with the host and all other containers. Anything that has a filesystem path – starting with /lib/ and /bin/sh and working up from there – is bundled into each container.

Another complication that goes along with this is that “OS” tends to imply a stack of software on top of the kernel (an init system, for instance) that Docker containers usually don’t run.
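One way to see the kernel sharing directly: `uname -r` inside any container reports the host’s kernel version, because there is no separate guest kernel. A sketch (the exact output will vary by host):

```shell
# Prints the *host* kernel version, even from an Ubuntu image on a Fedora host
docker run --rm ubuntu:16.04 uname -r
# Compare with the same command run directly on the host
uname -r
```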

Well, no, you can create an image FROM scratch, so long as the thing you’re loading into the image is statically linked and you won’t miss having basics like, say, a shell. I’ve mostly seen this setup around Go applications where it is reasonably straightforward to create a statically linked single-application bundle.
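As a sketch of that setup (the binary name is hypothetical): build a statically linked Go binary on the host, then package just that one file:

```Dockerfile
# Hypothetical: assumes ./myapp was built on the host with
#   CGO_ENABLED=0 go build -o myapp .
# so it has no dynamic library dependencies.
FROM scratch
COPY myapp /myapp
ENTRYPOINT ["/myapp"]
```

The resulting image contains nothing but the binary: no shell, no package manager, no /lib.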

Containers never use anything from the host’s filesystem, unless it’s explicitly passed in (docker run -v). If you have a dynamically linked application, the image needs to include all of the libraries it depends on.
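For example (the host path here is illustrative), nothing from the host filesystem is visible inside the container unless you mount it:

```shell
# Only the explicitly mounted directory comes from the host;
# everything else the container sees comes from the image
docker run --rm -v /home/me/project:/project ubuntu:16.04 ls /project
```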

The flip side of this is, even though you’re running Fedora, my container that starts FROM ubuntu:16.04 and as its first step RUN apt-get install ... works just fine; it ignores that the host system uses RPM and yum.
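For instance, a sketch like this builds and runs identically whether the Docker host is Fedora, Ubuntu, or CentOS, because apt-get operates on the Ubuntu userland inside the image (curl is just an example package):

```Dockerfile
FROM ubuntu:16.04
# apt-get works against the image's own Ubuntu filesystem,
# regardless of the host's package manager
RUN apt-get update && apt-get install -y curl
```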


There are solutions to run Linux containers on a Mac (Docker for Mac and Docker Toolbox), but both paths involve an intermediate stripped-down Linux VM. Containers are usually blocked from making any changes to the globally shared kernel (including loading modules and changing sysctl settings).

“Probably” – but: because of the way Docker shares things, if you have a bunch of things that are all built on the same Ubuntu base image, the system will only have one copy of that image, and that’s pretty efficient. I see several containers built on Alpine as a lighter-weight base, and the occasional thing with no distribution at all.
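As a sketch of the lighter-weight option, the same install on Alpine (apk is Alpine’s package manager; the package is again just an example):

```Dockerfile
FROM alpine:3.4
# apk is Alpine's package manager; --no-cache avoids leaving
# the package index in the image, keeping it small
RUN apk add --no-cache curl
```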

For the host system distribution, I’d use whatever has the tools you need and you feel comfortable administering.

For my development work, I tend to use Ubuntu 16.04 with the tools I need installed on that host (and to the extent that there are files that affect IDEs, they get checked into source control). Actually I can frequently get away with working directly on a Mac (with, for reasons you can readily find on the forum, Docker Machine to run Docker in a VM).

For my deployment work, we usually use Ubuntu 16.04 on hosts but also support CentOS 7, with the same containers. The containers are generally but not universally also built on Ubuntu 16.04. (Even if it’s a container I built myself on my Mac, that I’m deploying onto a CentOS 7 system.)

Thanks David…
That is much clearer for me!