How to create a Docker service for Apache containers

I have been attempting to run a Docker service for a basic Apache container. The issue is that, due to the unusual networking of Docker services, I have no idea how to view my containers in a browser. Here is the command I am using to create my service:

docker service create --replicas 6 --name apa-sim -p 30000:80 cohenaj194/apache-simple
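A quick way to check where the replicas actually landed (standard commands, assuming Docker 1.12 or later):

docker service ls           # shows the apa-sim service and its replica count
docker service ps apa-sim   # lists each task and the node it is running on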

My containers do not get ports assigned to them on the host machine and are not visible from a browser. However, as I have found, this is the default behavior of Docker services: they are all attached to some other address, and Docker automatically load balances between the instances that exist on any one machine. That's cool, but I really just want to know how to view my containers in a browser.

Note that docker service inspect --pretty apa-sim gives the following output concerning the ports:

Ports:
 Name = 
 Protocol = tcp
 TargetPort = 80
 PublishedPort = 30000

The only way I can view my containers in a browser is if they run with -p 80:80, and then only one container can be viewed at a time; it does not load balance between the containers.

Hi, did you find anything about this? Or at least some source of information? I feel there is a lack of information about Docker services in general (especially v1.12).

Your service gets a port assigned to it (30000) on each of the nodes in your swarm. If you’re just running locally then you will only have a single manager (localhost). The routing mesh in Swarm means that if you hit any node in the swarm on port 30000 then the request will automatically be routed (and balanced) between any containers running the relevant service (on any node). This is taken care of for you behind the scenes.
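For example (the node address is a placeholder; any node in the swarm will do):

curl http://<node-ip>:30000   # routed to one of the replicas by the mesh
curl http://<node-ip>:30000   # repeated requests are balanced across replicas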

I have just tested this, and with a single (localhost) manager I can hit localhost:30000 and see my web server.

Please can you give more details about your setup (all the Docker-related commands you are running as part of this experiment)?

Thanks!

The solution was found here: https://feiskyer.github.io/2016/06/24/Play-with-docker-v1-12/

I had to create an overlay network first:

docker network create -d overlay mynet
docker service create --name nginx --replicas 5 -p 30000:80/tcp --network mynet nginx
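To sanity-check this setup (standard commands, using the names above):

docker network ls --filter driver=overlay   # mynet should be listed
docker service ps nginx                     # one line per task, with its node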

However, I have found that with this setup the containers are only visible occasionally through port 30000, and only through random nodes instead of all the nodes. Also, it doesn't seem to work at all unless there are at least two containers on each node.

Here is a GitHub issue on the problem: https://github.com/docker/docker/issues/24531

I have also noticed the strange behavior that, for some reason, my containers are visible through port 80 of the host, even though the service is not published on port 80.

Interesting, I don’t have to do any of that to get it working. Are you all local or distributed? How many nodes?

Also are you scaling down? Is that github issue actually related to what you’re experiencing?

Yes, the GitHub issue is related, as the service partially works when scaled to 2 or more containers per node and completely fails with fewer than 2 containers per node. For reference: I am working in a private OpenStack cloud, on Ubuntu 14.04, across 6 distributed machines: 3 manager nodes that are drained and 3 client nodes that are active.

HOSTNAME  MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS
client2   Accepted    Ready   Active
consul0   Accepted    Ready   Drain         Leader
consul1   Accepted    Ready   Drain         Reachable
client0   Accepted    Ready   Active
client1   Accepted    Ready   Active
consul2   Accepted    Ready   Drain         Reachable
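For reference, the managers were drained with the usual command (node names as in the listing above):

docker node update --availability drain consul0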

Also, when I can access my containers, it is usually only through one node, and that node is not load balancing across the containers. When curling localhost:30000 on the node that does work, only the contents of one container are visible, and only on every other curl attempt.
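A quick loop to probe the balancing behavior (a plain shell sketch):

for i in $(seq 1 6); do curl -s localhost:30000; echo; done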

OK, I resolved the issue; now everything is working. It was caused by a version mismatch: my managers were running Docker 1.12.0-rc3 and my clients were running 1.12.0-rc4. Updating my managers to 1.12.0-rc4 fixed the issue.
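For anyone hitting the same thing, each node's server version can be checked with:

docker version --format '{{.Server.Version}}'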

So the moral of the story is that mismatched Docker versions (even adjacent release candidates) are incompatible with each other within a swarm.


Sorry for reviving this post…

I've created 3 VMs using docker-machine:

docker-machine create -d virtualbox manager1
docker-machine create -d virtualbox worker1
docker-machine create -d virtualbox worker2

These are their IPs:

docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
manager   -        virtualbox   Running   tcp://192.168.99.102:2376           v1.12.6
worker1   -        virtualbox   Running   tcp://192.168.99.100:2376           v1.13.0-rc5
worker2   -        virtualbox   Running   tcp://192.168.99.101:2376           v1.13.0-rc5

Then docker-machine ssh manager1

and:

docker swarm init --advertise-addr 192.168.99.102:2377

then worker1 and worker2 joined the swarm.
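For completeness, each worker joins with the token printed by swarm init (the token below is a placeholder):

docker-machine ssh worker1
docker swarm join --token <WORKER-TOKEN> 192.168.99.102:2377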

Now I've created an overlay network:

docker network create -d overlay skynet

and deployed a service in global mode (1 task per node):

docker service create --name http --network skynet --mode global -p 8200:80 katacoda/docker-http-server

And there is effectively 1 container (task) per node.
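This can be confirmed with:

docker service ps http   # global mode: one task should be listed per node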

Now, I'd like to access my virtual hosts directly, or at least browse directly to a specific container of my service, because I'd like to build a load balancer for my service with nginx. To do that, in my nginx conf file I'd like to point to a specific service container (i.e. I now have 3 nodes (1 manager and 2 workers) in global mode, so I have 3 tasks running, and I'd like to choose one of these 3 containers). How can I do that?

I can point to my swarm nodes simply by browsing to VM_IP:SERVICE_PORT, e.g.:

192.168.99.102:8200

but there is still internal load balancing. I was thinking that, if I pointed to a specific swarm node, I would reach the container on that specific node, but so far that is not the case.
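One approach that might do what you want (assuming Docker 1.13+, which supports host-mode publishing) is to bypass the routing mesh so each node's port maps only to its local task:

docker service create --name http --network skynet --mode global \
  --publish mode=host,target=80,published=8200 \
  katacoda/docker-http-server

With mode=host there is no ingress load balancing: 192.168.99.100:8200 always hits the task on worker1, so an nginx upstream block can list the node IPs directly.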