Running a container/service on every node (but only 1 per node)

Given an unknown number of nodes, I’d like to be able to run 1 container on every node. This would include automatically creating a new container on any nodes that join the swarm, and not trying to “move” a container to a different node if one node should fail. I don’t see how that can be easily done with affinity/constraints. Is this something that is supported?

My use case is that I want to perform a variety of different tasks on each host in the swarm cluster. Any ideas how that can be accomplished?

Yes, use “global” mode (that’s 1.12 Beta). Here’s an example:
docker service create --name vote --mode global -p 8080:80 instavote/vote
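Once the service is up, you can sanity-check the placement. A minimal sketch, reusing the `vote` service name from the example above: with `--mode global` you should see exactly one task per node, and nodes that join the swarm later pick up a task automatically.

```shell
# List the tasks for the global service; expect one task per node.
docker service ps vote

# Compare against the current node list to confirm the 1:1 mapping.
docker node ls
```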

Cool, I see that now here:

Do constraints/affinity also work with global mode? It seems constraints would; affinity doesn't make sense though.

What I'm experiencing in a similar situation is that --mode global starts one container per node, and it does honor --constraint 'node.hostname != dontrunonnodename'. Note, though, that when you create the service it looks like it wants to run on all nodes, including the != node; but once the service comes up, no container is started on the != node. You'll see this if you run docker ps on the constrained node: the task shows up as Allocated, but never Running.

Looks kinda like this:
masternode# docker service create --with-registry-auth --name worker --mode global --constraint 'node.hostname != masternode' --network overlay-net worker

masternode# docker service ps worker
ID                         NAME       IMAGE   NODE         DESIRED STATE  CURRENT STATE             ERROR
ekmn0qpnp6yt9yejc3sv77f7e  worker     worker  workernode1  Running        Running 11 minutes ago
4fgtppcmygdl83in0q19sk8gg  \_ worker  worker  workernode2  Running        Running 11 minutes ago
d6ilju6iiq4la64k6973n38oc  \_ worker  worker  masternode   Running        Allocated 12 minutes ago

masternode# docker ps
CONTAINER ID  IMAGE          COMMAND    CREATED         STATUS         PORTS  NAMES
1d1d3ddae487  master:latest  "dostuff"  16 minutes ago  Up 16 minutes         master
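If you want to double-check which constraints a running service actually carries, `docker service inspect` with a Go-template format can pull them out of the service spec. A small sketch, reusing the `worker` service name from the example above:

```shell
# Print the placement constraints recorded in the service spec;
# for the example above this should include 'node.hostname != masternode'.
docker service inspect --format '{{ .Spec.TaskTemplate.Placement.Constraints }}' worker
```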