I am trying to build something with Docker, Swarm, and Consul: basically a dynamic, three-tier web service infrastructure (nginx reverse proxy, in-house app server, MongoDB).
The point of this is to be able to move app servers around, duplicate them, shut some down, etc., based on demand, which forces me to reconfigure nginx from time to time.
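For concreteness, the part of nginx.conf I need to regenerate is essentially the upstream list. A rough sketch (addresses, ports, and names here are illustrative, not my actual config):

```nginx
upstream app_servers {
    # One entry per running app container; this list changes
    # whenever containers are started, moved, or stopped.
    server 10.0.1.12:8080;
    server 10.0.1.17:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
```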
I am using Swarm to create a cluster and I run all my commands against the Swarm master, letting Docker and Swarm place my containers on physical hosts (with some constraints that are not relevant here).
I can script nginx.conf generation on the master, but if the nginx container is running on a different physical host, at some arbitrary cloud IP address, what is the easiest way to copy files into it and force the nginx inside to reload? `docker cp` does not seem to do anything, presumably because the Swarm master is a different physical host from the node running the nginx container; I just get 404 page not found errors.
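To clarify what I am attempting, the commands I run against the Swarm master look roughly like this (the container name, paths, and master address are illustrative):

```shell
# Point the Docker client at the Swarm master's endpoint; Swarm is
# supposed to proxy these commands to whichever node hosts the container.
export DOCKER_HOST=tcp://swarm-master:4000   # illustrative address

# Copy the regenerated config into the nginx container...
docker cp nginx.conf nginx_proxy:/etc/nginx/nginx.conf

# ...and ask nginx to reload its config without dropping connections.
docker exec nginx_proxy nginx -s reload
```

It is the `docker cp` step that appears to silently do nothing when the container lives on a remote node.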
I have done all of this without Docker, but it gets cumbersome because I have to manage the IP addresses of the physical hosts myself. I am hoping Docker will let me get rid of that, but is this a false hope?