How to pass files to containers in a Swarm cluster

Hi,

I am trying to build something with Docker, Swarm, and Consul: basically a dynamic, three-tier web service infrastructure (nginx reverse proxy, in-house app server, MongoDB).

The point of this is the ability to move app servers around, duplicate them, shut some down, and so on based on demand, which forces me to reconfigure nginx from time to time.

I am using Swarm to create a cluster and execute all my commands on the Swarm master, letting Docker and Swarm place my containers on physical hosts (with some constraints not relevant here).

I can script the nginx.conf generation on the master, but if the nginx container is running on a different physical host, at some arbitrary cloud IP address, what is the easiest way to copy files to it and force nginx there to reload? docker cp does not seem to do anything, since the Swarm master is a different physical host from the node running the nginx container; I just get "404 page not found" errors.

I have done all of this without Docker, but it gets cumbersome because I have to manage the IP addresses of the physical hosts myself. I am hoping Docker will get rid of that for me, but is this a false hope?

Hannu

There are a few approaches to this problem.

You mentioned that you are using docker cp to copy files into the container. Copying *into* a container's filesystem was only added in Docker 1.8.x; if you are running 1.7.x or older, docker cp can only copy files out, not in.
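If you are on 1.8 or newer, pointing your client at the Swarm manager should route the copy to whichever node actually runs the container. A rough sketch, assuming the container is named `nginx-proxy` and the manager listens on `swarm-master.example.com` (both hypothetical names):

```
# Copying INTO a container requires Docker 1.8+; check versions first.
docker version

# Point the client at the Swarm manager; classic Swarm forwards the
# request to the node that actually hosts the container.
export DOCKER_HOST=tcp://swarm-master.example.com:3376
docker cp ./nginx.conf nginx-proxy:/etc/nginx/nginx.conf

# Ask nginx inside that container to reload its configuration.
docker exec nginx-proxy nginx -s reload
```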

Another solution would be to abstract this configuration setup into a service discovery layer and have your nginx container handle the discovery for you. There are several mechanisms for service discovery in the Docker community; since you already run Consul, consul-template is a natural fit.
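For example, consul-template can watch Consul and rewrite nginx's upstream list whenever app servers come and go, so you never copy config files by hand. A minimal sketch, assuming your app containers register in Consul under the service name `app` (a hypothetical name):

```
# Template rendered by consul-template: one upstream entry per healthy
# instance of the "app" service registered in Consul.
cat > /etc/consul-template/nginx.ctmpl <<'EOF'
upstream app_servers {
{{ range service "app" }}
  server {{ .Address }}:{{ .Port }};
{{ end }}
}
EOF

# Watch Consul, rewrite the config, and reload nginx on every change.
consul-template \
  -consul consul.example.com:8500 \
  -template "/etc/consul-template/nginx.ctmpl:/etc/nginx/conf.d/upstreams.conf:nginx -s reload"
```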

One of the features currently being worked on to support the kind of workflow you are describing is the new networking model, due to come out of experimental in 1.9.x. It will allow multi-host networking and cross-host links in Swarm clusters.
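Roughly, the workflow would look like this (a sketch based on the experimental overlay driver; the network and image names are made up):

```
# Create an overlay network visible to every node in the Swarm
# (the overlay driver needs a key-value store such as your Consul).
docker network create --driver overlay app-net

# Containers on any host can join the network and reach each other
# by name, so nginx can proxy to app servers wherever Swarm puts them.
docker run -d --name app1 --net app-net my-app-image
docker run -d --name proxy --net app-net -p 80:80 nginx
```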

The other new feature that might help is volume backend drivers: you could put the data on a network filesystem and specify that filesystem as the volume at runtime. As long as that location is accessible from every Docker host in your cluster, you wouldn't have to copy things to each host individually.
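For instance, a sketch assuming an NFS export mounted at `/mnt/shared` on every host (the path and the plugin are assumptions, not recommendations):

```
# Bind-mount config from a network filesystem that every host mounts
# at the same path; one write updates the files every container sees
# (nginx still needs a reload to pick the changes up).
docker run -d --name nginx-proxy \
  -v /mnt/shared/nginx:/etc/nginx/conf.d:ro \
  -p 80:80 nginx

# Or let a volume plugin (e.g. Flocker) resolve a named volume on
# whichever host ends up running the container (1.8+ --volume-driver).
docker run -d --name nginx-proxy \
  --volume-driver=flocker \
  -v nginx-config:/etc/nginx/conf.d \
  -p 80:80 nginx
```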
