I’m new to Docker and containers and am trying to understand how best to integrate them into our environment.
We currently use Ansible for environment configuration and deployment. Deployment I think is clear - rather than uploading a build artefact onto a specific server and deploying by copying/moving/symlinking files as we do right now with Ansible, am I right in thinking we would instead produce a container image for that specific build, containing all of our source and dependencies, which could be deployed directly using something like Docker?
Configuration is where I’m struggling to see how things look in a Docker/container world. Our Ansible scripts have been built up over a few years and nicely describe how each environment is configured. Do all the configuration steps essentially get replaced with Dockerfiles? For example, we have an Ansible role that gets applied to all webservers in the application. It makes sure Apache is installed, certain configuration files exist, some utilities are present, etc. - would this be replaced by a single container image that did the same setup in a Dockerfile?
In the environment you describe, I’d set up my automated build system to produce Docker images, push them to a Docker registry (Docker’s, or quay.io, or Google’s or Amazon’s hosted offering, or run your own), and then use Ansible’s docker_container module to actually launch the images.
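A deploy task in that setup might look something like this (the container name, image, tag, and ports are all placeholders for whatever your build produces):

```yaml
# Hypothetical Ansible task: pull a specific build's image from your
# registry and (re)start the container running it.
- name: Deploy the web application container
  docker_container:
    name: mywebapp
    image: registry.example.com/mywebapp:1.2.3
    state: started
    restart_policy: always
    published_ports:
      - "80:8080"
```

Because the image tag identifies the exact build, rolling back is just re-running the task with the previous tag.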
One reasonable way to think about what Docker does is that it isolates a single built program and its dependencies, with its own filesystem. For the most part you shouldn’t need to copy individual files or install dependencies at deploy time; those should all be packed into the image.
The Docker documentation makes a big deal out of Docker’s Swarm offering, but it is totally optional. Using Ansible to deploy individual containers onto known hosts is a perfectly reasonable setup. (If you already have a CI system and a deployment process based on a configuration-management tool, and you later decide you need a standalone Docker cluster manager, I feel like Kubernetes has more uptake than Swarm or things like HashiCorp’s Nomad.)
If you need to make sure things like Apache modules are installed, you’d do that in the Dockerfile. In principle you can install other programs too, but the normal pattern is that a container runs a single process (usually a long-running server) and exits when that process does. Especially in a production environment it’s very unusual to exec into a running container and run other programs there. Also remember that containers routinely get deleted, and anything that wasn’t in the image at startup time will be lost.
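As a sketch, the "make sure Apache and its modules are installed" part of your Ansible role might translate into a Dockerfile like this (the base image, module list, and content path are examples, not requirements):

```dockerfile
# Hypothetical Apache image: install the server and enable the
# modules your role currently manages, then bake in the content.
FROM debian:bullseye-slim

RUN apt-get update \
 && apt-get install -y --no-install-recommends apache2 \
 && a2enmod rewrite ssl headers \
 && rm -rf /var/lib/apt/lists/*

# Static content baked into the image at build time
COPY ./site/ /var/www/html/

EXPOSE 80

# Run Apache in the foreground as the container's single process
CMD ["apachectl", "-D", "FOREGROUND"]
```

Note that Apache runs in the foreground: the container lives exactly as long as that one process.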
For configuration per se, if you can isolate the required configuration to a collection of files, then you can inject them into the container at run time using the docker run -v option (or the Ansible docker_container module’s volumes: option).
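For example, assuming your per-environment Apache config lives on the host (paths and image name here are placeholders):

```shell
# Sketch: bind-mount a host-side config file into the container
# read-only, so the same image can run in every environment.
docker run -d \
  -v /srv/config/myapp/apache.conf:/etc/apache2/sites-enabled/myapp.conf:ro \
  -p 80:80 \
  registry.example.com/mywebapp:1.2.3
```

The image stays identical across dev, staging, and production; only the mounted file differs.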
I feel like there are three important uses for docker run -v. One is to inject configuration. A second is to inject data: in the case of Apache you might have a server container that had all of your required modules built and a top-level configuration file that loaded them, but no virtual hosts, and then you would use docker run -v to push in a single virtual host configuration and the content you’re serving. A third is to read logs back out, mounting an initially empty directory over /var/log/apache2, though in practice you may find it better to plug Docker into a log-management pipeline (logstash is a very common first step).
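Putting those three uses together, a single docker run invocation might look like this (every host path and the image name are illustrative):

```shell
# Sketch of the three -v uses: the first mount injects configuration,
# the second injects the content being served, and the third mounts an
# initially empty directory over the log directory so logs land on the
# host.
docker run -d \
  -v /srv/config/vhost.conf:/etc/apache2/sites-enabled/vhost.conf:ro \
  -v /srv/sites/example.com:/var/www/example.com:ro \
  -v /srv/logs/example.com:/var/log/apache2 \
  registry.example.com/apache-base:latest
```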