Docker Community Forums

Share and learn in the Docker community.

Docker 1.12.0-rc3 swarm mode - doubts about networking and communication between containers

beta
docker

(Dstolf) #1

Hi everyone. I’m trying to learn Docker and decided to start on 1.12.0-rc3, so I can leverage the new swarm mode features.

Ok, so I have the following Docker swarm configuration:

[dstolf@VINDOC701 compose]$ docker node ls
ID HOSTNAME MEMBERSHIP STATUS AVAILABILITY MANAGER STATUS
01h8iq4kxdlibdw9mdx394jd0 * VINDOC701.dev1.local Accepted Ready Active Leader
510vgjvdvq9kk5ufpzb44p7iu VINDOC703.dev1.local Accepted Ready Active
7oq0krop57wfebcgsr8t8bk4t VINDOC702.dev1.local Accepted Ready Active

I have two Docker images I’ve built: one for httpd with mod_cluster, and another with WildFly 10. When I run them on my local machine, they work fine: querying mod_cluster_manager shows all the WildFly containers registered, and HTTP access to the node-info app is load-balanced:

dstolf@cherno-alpha:~/docker/wildfly$ docker run -d -p 8080:80 centos/httpd/modcluster
1b2de0f2c0d9867b39246d3cd22efb56676214099d494d6c4106a1a5b511961a

dstolf@cherno-alpha:~/docker/wildfly$ docker run -d jboss/wildfly/app
d60c191a7b72b17690310c6d41e0fb791d20372e54aaf0591f406d659511bb1f

dstolf@cherno-alpha:~/docker/wildfly$ docker run -d jboss/wildfly/app
5bb68830d6bc33210e384989cd8f1233fdaac54ef240c61fd11f2551d6052b4d

dstolf@cherno-alpha:~/docker/wildfly$ docker run -d jboss/wildfly/app
3f234b5174a03d1ff4c2b95d11e4f57a0b44ce479bbefc70e2a3bced03eb7b2f

dstolf@cherno-alpha:~/docker/wildfly$ curl http://localhost:8080/mod_cluster_manager
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html><head>
<title>Mod_cluster Status</title>
</head><body>
<h1>mod_cluster/1.3.1.Final</h1><a href="/mod_cluster_manager?nonce=bbe3e1d0-4053-47f1-9b4d-ded29e73874b&refresh=10">Auto Refresh</a> <a href="/mod_cluster_manager?nonce=bbe3e1d0-4053-47f1-9b4d-ded29e73874b&Cmd=DUMP&Range=ALL">show DUMP output</a> <a href="/mod_cluster_manager?nonce=bbe3e1d0-4053-47f1-9b4d-ded29e73874b&Cmd=INFO&Range=ALL">show INFO output</a>
<h1> Node server-172.17.0.3 (ajp://172.17.0.3:8009): </h1>
....

But when I go to my test servers and run the same images in Docker swarm mode, the WildFly containers don’t register with httpd/mod_cluster:

docker service create --replicas 1  -p 3000:80    --mount target=/data,source=/data,type=bind,writable=true --name load_balancer centos/httpd/modcluster
docker service create --replicas 3  --mount target=/data,source=/data,type=bind,writable=true  --name java_app       jboss/wildfly/app
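One thing worth noting about the commands above (a sketch, not something I have verified against 1.12.0-rc3): neither service is attached to a user-defined network, so they only share the ingress network. In swarm mode, services can discover each other by name over a shared overlay network. Something like the following should at least give the two services a common network and DNS-based discovery (`app_net` is an arbitrary name I made up):

```shell
# Create an overlay network visible to the swarm (run on a manager node)
docker network create --driver overlay app_net

# Attach both services to it; tasks can then reach each other
# by service name through the swarm's built-in DNS
docker service create --replicas 1 -p 3000:80 --network app_net \
    --mount target=/data,source=/data,type=bind,writable=true \
    --name load_balancer centos/httpd/modcluster

docker service create --replicas 3 --network app_net \
    --mount target=/data,source=/data,type=bind,writable=true \
    --name java_app jboss/wildfly/app
```

Even on a shared overlay network, though, mod_cluster's auto-discovery still depends on UDP multicast, so this alone may not make the nodes register.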

Is it something I did wrong when provisioning the containers, or some misconfiguration in WildFly? Can anybody help? It seems there’s no communication between the containers.

#httpd/modcluster :

Dockerfile

FROM centos/httpd
USER root

# Disable mod_proxy_balancer module and enable mod_cluster
RUN sed -i 's|LoadModule proxy_balancer_module|# LoadModule proxy_balancer_module|' /etc/httpd/conf.modules.d/00-proxy.conf 

# Add launch.sh 
ADD launch.sh /
ADD template_modcluster /
RUN chmod +x /launch.sh

# add mod_cluster 
ADD mod_advertise.so /etc/httpd/modules/
ADD mod_manager.so /etc/httpd/modules/
ADD mod_proxy_cluster.so /etc/httpd/modules/
ADD mod_cluster_slotmem.so /etc/httpd/modules/

# Add mod_cluster.conf 
ADD mod_cluster.conf /etc/httpd/conf.d/mod_cluster.conf
      
# Do the required modifications and launch Apache after boot
ENTRYPOINT /launch.sh

launch.sh:

#!/bin/bash

# Get the Hostname
HOSTNAME=$(hostname)
# Adjust the IP addresses in the mod_cluster.conf file
IPADDR=$(hostname -i | awk '{print $2}')
sed -i "s|[0-9\.\*]*:80|$IPADDR:80|g" /etc/httpd/conf.d/mod_cluster.conf

cat template_modcluster >> /etc/httpd/conf.d/mod_cluster.conf

# Log run
echo "######## Start httpd modcluster ########" >> /data/httpd/httpd_${HOSTNAME}.log
hostname >> /data/httpd/httpd_${HOSTNAME}.log
echo $IPADDR >> /data/httpd/httpd_${HOSTNAME}.log
date >> /data/httpd/httpd_${HOSTNAME}.log

# touch error_log and access_log
touch /var/log/httpd/error_log
touch /var/log/httpd/access_log

# tail file and redirect to /data/httpd/
nohup tail -f /var/log/httpd/error_log >> /data/httpd/error_log_${HOSTNAME}.log &
nohup tail -f /var/log/httpd/access_log >> /data/httpd/access_log_${HOSTNAME}.log &
nohup tail -f /var/log/messages >>  /data/httpd/messages_${HOSTNAME}.log &

# dump final mod_cluster.conf 
echo "######## mod_cluster.conf   ########" >> /data/httpd/httpd_${HOSTNAME}.log
cat /etc/httpd/conf.d/mod_cluster.conf >> /data/httpd/httpd_${HOSTNAME}.log

# run httpd on foreground
httpd -D FOREGROUND >> /data/httpd/httpd_${HOSTNAME}.log
echo "######## End httpd modcluster   ########" >> /data/httpd/httpd_${HOSTNAME}.log 

template_modcluster:

<VirtualHost *:80>

  EnableMCPMReceive true
  ServerAdvertise On
  ServerName loadbalancer

  <Location />
    Require all granted
  </Location>

  <Location /mod_cluster_manager>
    SetHandler mod_cluster-manager
    Require all granted
  </Location>

</VirtualHost>

#jboss/wildfly

FROM jboss/wildfly
USER root
# ADD app to deployment folder
ADD node-info.war /opt/jboss/wildfly/standalone/deployments/


# ADD launch script standalone ha mode
ADD launch_standalone_ha.sh /

# Change launch script permission
RUN chown jboss:jboss /launch_standalone_ha.sh

# Change app files permission
RUN chown jboss:jboss /opt/jboss/wildfly/standalone/deployments/node-info.war

# Change launch scripts permissions
RUN chmod +x /launch_standalone_ha.sh

# add wildfly admin user
USER jboss
RUN /opt/jboss/wildfly/bin/add-user.sh admin Admin@123 --silent

# Run WildFly after the container boots
ENTRYPOINT /launch_standalone_ha.sh

launch_standalone_ha.sh

#!/bin/bash

# $IPADDR was referenced below but never set; derive it from the hostname
IPADDR=$(hostname -i | awk '{print $1}')

hostname -I > /data/wildfly/standalone_ha_`hostname`.log

echo "/opt/jboss/wildfly/bin/standalone.sh -c standalone-ha.xml -Djboss.bind.address=$IPADDR -Djboss.bind.address.management=$IPADDR -Djboss.node.name=server-$IPADDR" >> /data/wildfly/standalone_ha_`hostname`.log
/opt/jboss/wildfly/bin/standalone.sh -c standalone-ha.xml -Djboss.bind.address=$IPADDR -Djboss.bind.address.management=$IPADDR -Djboss.node.name=server-$IPADDR >> /data/wildfly/standalone_ha_`hostname`.log

## tried to launch wildfly on all network interfaces, but it also didn't work
#echo "/opt/jboss/wildfly/bin/standalone.sh -c standalone-ha.xml -Djboss.bind.address=0.0.0.0 -Djboss.bind.address.management=0.0.0.0 -Djboss.node.name=server-$IPADDR" >> /data/wildfly/standalone_ha_`hostname`.log

(Dstolf) #2

PS: All the images are stored in my own registry; I suppressed that from the Dockerfiles.

Here’s the results from inspecting the services:

[dstolf@VINDOC701 wildfly]$ docker service inspect java_app
[
    {
        "ID": "17qemi1ky7ktu369ldrlrdbek",
        "Version": {
            "Index": 189442
        },
        "CreatedAt": "2016-07-11T21:45:46.86307084Z",
        "UpdatedAt": "2016-07-11T21:45:46.863971609Z",
        "Spec": {
            "Name": "java_app",
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "jboss/wildfly/app",
                    "Mounts": [
                        {
                            "Type": "bind",
                            "Source": "/data",
                            "Target": "/data",
                            "Writable": true
                        }
                    ]
                },
                "Resources": {
                    "Limits": {},
                    "Reservations": {}
                },
                "RestartPolicy": {
                    "Condition": "any",
                    "MaxAttempts": 0
                },
                "Placement": {}
            },
            "Mode": {
                "Replicated": {
                    "Replicas": 3
                }
            },
            "UpdateConfig": {},
            "EndpointSpec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 8080,
                        "PublishedPort": 8080
                    }
                ]
            }
        },
        "Endpoint": {
            "Spec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 8080,
                        "PublishedPort": 8080
                    }
                ]
            },
            "Ports": [
                {
                    "Protocol": "tcp",
                    "TargetPort": 8080,
                    "PublishedPort": 8080
                }
            ],
            "VirtualIPs": [
                {
                    "NetworkID": "b5eio6jvl0et457w8g66owe91",
                    "Addr": "10.255.0.8/16"
                }
            ]
        }
    }
]


[dstolf@VINDOC701 wildfly]$ docker service inspect load_balancer
[
    {
        "ID": "9kunvpnf5gmctm6prnt9efutc",
        "Version": {
            "Index": 189434
        },
        "CreatedAt": "2016-07-11T21:45:37.445856076Z",
        "UpdatedAt": "2016-07-11T21:45:37.446911386Z",
        "Spec": {
            "Name": "load_balancer",
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "centos/httpd/modcluster",
                    "Mounts": [
                        {
                            "Type": "bind",
                            "Source": "/data",
                            "Target": "/data",
                            "Writable": true
                        }
                    ]
                },
                "Resources": {
                    "Limits": {},
                    "Reservations": {}
                },
                "RestartPolicy": {
                    "Condition": "any",
                    "MaxAttempts": 0
                },
                "Placement": {}
            },
            "Mode": {
                "Replicated": {
                    "Replicas": 1
                }
            },
            "UpdateConfig": {},
            "EndpointSpec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 80,
                        "PublishedPort": 3000
                    }
                ]
            }
        },
        "Endpoint": {
            "Spec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 80,
                        "PublishedPort": 3000
                    }
                ]
            },
            "Ports": [
                {
                    "Protocol": "tcp",
                    "TargetPort": 80,
                    "PublishedPort": 3000
                }
            ],
            "VirtualIPs": [
                {
                    "NetworkID": "b5eio6jvl0et457w8g66owe91",
                    "Addr": "10.255.0.6/16"
                }
            ]
        }
    }
]

(Dstolf) #4

Looks like it’s because the overlay network doesn’t support multicast.
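That would fit: mod_cluster’s `ServerAdvertise` and WildFly’s default JGroups `udp` stack both rely on UDP multicast, which the overlay driver doesn’t carry. A common workaround is to drop auto-discovery and configure the proxy statically: set `ServerAdvertise Off` in mod_cluster.conf, and point WildFly at the balancer through an outbound socket binding. A rough, untested sketch via jboss-cli, assuming both services share an overlay network so the service name `load_balancer` resolves through swarm DNS (the binding name `proxy1` is arbitrary):

```
# Inside jboss-cli.sh --connect:

# Outbound socket binding pointing at the balancer service
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=proxy1:add(host=load_balancer, port=80)

# Tell mod_cluster to register with that proxy instead of waiting for adverts
/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=proxies, value=[proxy1])

:reload
```

For the session-replication side, the JGroups channel would likewise need a non-multicast stack (e.g. `tcp` with TCPPING or another static discovery protocol) instead of the default `udp` stack.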