First of all, I am new to Docker. (installed it yesterday for the first time)
I have some questions about how Docker is used in practice. Let me first describe how software is currently developed and deployed in our company:
Software is developed on a development machine and then committed to a branch in SVN.
The developer then notifies an administrator that a new software package is available and that it should be tested.
The administrator then logs onto a test machine and types in “svn co svn.mycompany.com/project/branches/awesome-new-feature”.
Besides that, the administrator also creates cache directories, installs PHP modules, etc. There is much more to it, but you get the point.
My first questions are:
a) Am I, as a developer, responsible for creating a new Dockerfile for every (project, version) that is meant to be deployed? (This includes tagging it, e.g. “project1:my-awesome-feature”.)
b) Should the Dockerfile checkout the source code using the RUN command or should the source code be copied from the context?
c) We have around 50 web applications in our company. Is it best practice to include the apt-get install -y apache2 subversion instruction in every Dockerfile, or should I create a base image that is used by all web applications?
What you describe sounds like an incredibly manual process to me. Most of the steps you describe (check out source code, create cache directories, add PHP modules) I’d expect to be done in a script, or by your continuous-integration (build) system.
I’d expect the Dockerfile to script a lot of those steps: RUN mkdir /cache and so forth. If the deployment process is otherwise run completely by hand, whoever runs it would probably have to write that Dockerfile themselves.
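As a sketch of what that could look like (the base image, package names, and paths here are assumptions, not your actual setup):

```dockerfile
# Hypothetical Dockerfile for one of the PHP web applications;
# image, extension, and path names are illustrative only.
FROM php:8.2-apache

# Install the PHP modules the administrator used to add by hand
RUN docker-php-ext-install mysqli

# Create the cache directories that were previously created manually
RUN mkdir -p /var/cache/myapp && chown www-data:www-data /var/cache/myapp

# Copy the application source from the build context
COPY . /var/www/html
```

Every manual step the administrator performs today becomes one line in a file like this, and is then reproducible on any machine.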
That aside, the Dockerfile itself is (in my experience) usually treated as source code and checked in, in much the same way as your Makefile/setup.py/Gruntfile/… are. So you’d only have one Dockerfile per project. Whoever or whatever creates the actual Docker images (runs docker build -t ...) would have to provide appropriate image names and tags; and again I’d frequently expect this to be totally automated.
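For instance, a CI job might derive the tag from the branch name and kick off the build automatically. A minimal sketch, where the branch and image names are assumptions:

```shell
#!/bin/sh
# Hypothetical CI build step: the Dockerfile lives in the source tree,
# and the image tag is derived from the SVN branch being built.
BRANCH="branches/awesome-new-feature"   # normally supplied by the CI system
TAG="project1:$(basename "$BRANCH")"
echo "would build $TAG"
# docker build -t "$TAG" .              # the actual build
# docker push "$TAG"                    # publish to your registry
```

The point is that no human ever types the tag; it falls out of the branch being built.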
It should be copied from the context. The practical reason for this is that Docker’s build cache assumes that a RUN command from an identical starting point produces an identical result, so if you RUN svn co ..., the second time you build you will get the exact same image as the first time, even if there’s been a new commit in the meantime. An operational reason is that your application proper (presumably) doesn’t depend on Subversion, so if the source code is checked out externally, you don’t need to install Subversion into your image.
A practical way to accomplish this is to just put the Dockerfile in the root directory of your source tree.
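In Dockerfile terms, the contrast looks like this (a sketch; the target path is an assumption):

```dockerfile
# Preferred: the source was checked out before the build, the Dockerfile
# sits at the root of that checkout, and the checkout is the build context.
COPY . /var/www/html

# Avoid: this line would be satisfied from the build cache forever,
# and it forces Subversion (and credentials) into the image.
# RUN svn co svn.mycompany.com/project/branches/awesome-new-feature /var/www/html
```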
Creating a reusable base image is more effort, and in the simple case of installing a small package like Apache HTTPD, I wouldn’t necessarily bother. (My product includes some multiple-gigabyte components, and a shared base image is essential to keeping the overall image size under control.)
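If you do go the shared-base route, it is just a second Dockerfile that gets built once, pushed to a registry, and referenced by every application image. A sketch with made-up image names:

```dockerfile
# Hypothetical shared base image, built once and pushed to an internal
# registry as, say, registry.mycompany.com/webapp-base:1.
FROM debian:bookworm-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends apache2 php libapache2-mod-php && \
    rm -rf /var/lib/apt/lists/*
```

Each of the 50 application Dockerfiles then shrinks to something like FROM registry.mycompany.com/webapp-base:1 followed by COPY . /var/www/html, and an upgrade of Apache or PHP happens in one place.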