
Docker interlock not detecting events apart from controller node

ucp

(Hmaeck) #1

Hello,

I’ve got a UCP setup and I’ve set up interlock and nginx using the docker-compose-v1.1.yml file. The problem is that interlock only detects events that happen on my controller node. If I launch a container on another node, nothing shows up in the docker-compose logs, but when I launch a container on the controller host, I can see the event in the docker-compose logs. I find this very strange and I’ve got no idea what could be wrong. I already had some trouble setting up interlock ( https://forums.docker.com/t/trouble-setting-up-loadbalancer-with-ucp ) and now this :frowning:

Is this caused by the fact that I don’t have one controller per node? That would be strange, because if I understand correctly all the containers are created through the controller. (I launched the app with docker-compose from the CLI, using the client bundle.)
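Roughly, this is how I launch it (the bundle directory name here is just an example):

cd ucp-bundle-admin                # example client-bundle directory
source env.sh                      # sets DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH
cd ../didata-offiste-vote-app
docker-compose up -d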

Here is an overview of my setup (I’ve got only 1 controller and 3 nodes; all nodes are running boot2docker):

[root@localhost didata-offiste-vote-app]# docker info
Containers: 22
 Running: 18
 Paused: 0
 Stopped: 4
Images: 42
Server Version: swarm/1.1.3
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 3
 esxdockerengine1: 192.168.123.14:12376
  └ Status: Healthy
  └ Containers: 8
  └ Reserved CPUs: 0 / 8
  └ Reserved Memory: 0 B / 64.42 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.1.19-boot2docker, location=on_premise_BE, operatingsystem=Boot2Docker 1.10.3 (TCL 6.4.1); master : 625117e - Thu Mar 10 22:09:02 UTC 2016, provider=vmwarevsphere, storagedriver=aufs, target=apps, type=controllers
  └ Error: (none)
  └ UpdatedAt: 2016-04-22T12:26:09Z
 esxdockerengine2: 192.168.123.15:12376
  └ Status: Healthy
  └ Containers: 7
  └ Reserved CPUs: 0 / 8
  └ Reserved Memory: 0 B / 64.42 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.1.19-boot2docker, location=on_premise_BE, operatingsystem=Boot2Docker 1.10.3 (TCL 6.4.1); master : 625117e - Thu Mar 10 22:09:02 UTC 2016, provider=vmwarevsphere, storagedriver=aufs, target=apps, type=secondary
  └ Error: (none)
  └ UpdatedAt: 2016-04-22T12:26:32Z
 esxdockerengine3: 192.168.123.39:12376
  └ Status: Healthy
  └ Containers: 7
  └ Reserved CPUs: 0 / 8
  └ Reserved Memory: 0 B / 64.42 GiB
  └ Labels: executiondriver=, kernelversion=4.1.19-boot2docker, location=on_premise_BE, operatingsystem=Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016, provider=vmwarevsphere, storagedriver=aufs, target=loadbalancer, type=loadbalancing
  └ Error: (none)
  └ UpdatedAt: 2016-04-22T12:25:50Z
Cluster Managers: 1
 192.168.123.14: Healthy
  └ Orca Controller: https://192.168.123.14:443
  └ Swarm Manager: tcp://192.168.123.14:3376
  └ KV: etcd://192.168.123.14:12379
Plugins:
 Volume:
 Network:
Kernel Version: 4.1.19-boot2docker
Operating System: linux
Architecture: amd64
CPUs: 24
Total Memory: 193.3 GiB
Name: ucp-controller-esxdockerengine1

(Nicolaka) #2

When you run docker events against the swarm manager or the UCP controller and you spin up/down containers on UCP nodes, do you see events?
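i.e. roughly something like this (addresses taken from your docker info output; exact TLS options depend on your client bundle):

# events from a single engine (node-local only)
docker events

# events via the swarm manager / UCP controller (should cover all nodes)
docker -H tcp://192.168.123.14:3376 events
docker -H tcp://192.168.123.14:443 events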


(Hmaeck) #4

I don’t have access to the cluster right now (weekend :wink: ). I’ll try it on Monday, thanks for helping me :slight_smile:

I’m here with an update:

On the machine from which I installed all my nodes (with docker-machine and the UCP client bundle) I ran docker events.
I launched a container (via the UI) on the controller node and saw the event appear in my console. Then I launched another container on another node (NOT the controller node and NOT the loadbalancing node) and also saw that event appear in the console. This was the output:

# docker events
2016-04-25T09:18:26.523205176Z network connect 42a41122d5b0e79fb93d7d7818c395a7ff330181538727e4119741b00900bc88 (node.addr=192.168.123.14:12376, node.id=PGT6:WFVV:7QN5:BD4J:HXRW:K7GO:W33M:5TLT:ODKO:ZMRC:YYQB:UOLY, node.ip=192.168.123.14, node.name=esxdockerengine1, type=overlay, container=138bc70eab418f76626f5c8013c76b62e78b57b4f29d0414874c8b5dfef82417, name=PacificOverlay)
2016-04-25T09:18:26.536494605Z container start 138bc70eab418f76626f5c8013c76b62e78b57b4f29d0414874c8b5dfef82417 (com.docker.ucp.access.owner=admin, image=dtr.local/user/ubuntucustom, node.id=PGT6:WFVV:7QN5:BD4J:HXRW:K7GO:W33M:5TLT:ODKO:ZMRC:YYQB:UOLY, node.ip=192.168.123.14, com.docker.swarm.constraints=["type!=secondary"], com.docker.swarm.id=b5cbc006bdb493b724c4809390337fb2d2b086ed2c28a36d7474833bf1632aeb, name=Dolphin, node.addr=192.168.123.14:12376, node.name=esxdockerengine1)
2016-04-25T09:18:27.174478390Z container restart 138bc70eab418f76626f5c8013c76b62e78b57b4f29d0414874c8b5dfef82417 (com.docker.swarm.id=b5cbc006bdb493b724c4809390337fb2d2b086ed2c28a36d7474833bf1632aeb, com.docker.ucp.access.owner=admin, name=Dolphin, node.id=PGT6:WFVV:7QN5:BD4J:HXRW:K7GO:W33M:5TLT:ODKO:ZMRC:YYQB:UOLY, node.ip=192.168.123.14, com.docker.swarm.constraints=["type!=secondary"], image=dtr.local/user/ubuntucustom, node.addr=192.168.123.14:12376, node.name=esxdockerengine1)
2016-04-25T09:18:56.280844715Z network connect 42a41122d5b0e79fb93d7d7818c395a7ff330181538727e4119741b00900bc88 (container=0ccda03dc379e47b924ee6d27e11e37b50b1cc85aa5e647ac01d0f12979eeb55, name=PacificOverlay, node.addr=192.168.123.15:12376, node.id=ZVDS:XMHH:H5WE:YQMR:C2ZX:FR4F:UYAY:IMCU:4UXS:CDJJ:HZLV:3ISC, node.ip=192.168.123.15, node.name=esxdockerengine2, type=overlay)
2016-04-25T09:18:56.281872688Z container start 0ccda03dc379e47b924ee6d27e11e37b50b1cc85aa5e647ac01d0f12979eeb55 (node.ip=192.168.123.15, node.name=esxdockerengine2, com.docker.swarm.id=cc16d2bf425d1f3852890f67360d66cc34204bcfe8aa320f2379c7939b997787, com.docker.ucp.access.owner=admin, image=dtr.local/user/ubuntucustom, name=Orca, node.addr=192.168.123.15:12376, node.id=ZVDS:XMHH:H5WE:YQMR:C2ZX:FR4F:UYAY:IMCU:4UXS:CDJJ:HZLV:3ISC)
2016-04-25T09:18:57.178933933Z container restart 0ccda03dc379e47b924ee6d27e11e37b50b1cc85aa5e647ac01d0f12979eeb55 (name=Orca, node.addr=192.168.123.15:12376, node.id=ZVDS:XMHH:H5WE:YQMR:C2ZX:FR4F:UYAY:IMCU:4UXS:CDJJ:HZLV:3ISC, node.ip=192.168.123.15, node.name=esxdockerengine2, com.docker.swarm.id=cc16d2bf425d1f3852890f67360d66cc34204bcfe8aa320f2379c7939b997787, com.docker.ucp.access.owner=admin, image=dtr.local/user/ubuntucustom)

So docker events is registering the updates, but I don’t see the updates in the docker-compose logs when something happens on a node other than the UCP/controller node.

NOTE:
When I ssh into the UCP controller node and run docker events, I only see the events taking place on the controller node. So when I launch a container on the controller node I see it, but I won’t see it when I launch a container on the loadbalancer node. The loadbalancer node shows the same behavior, so I guess that’s normal.

NOTE2:
Also, when I launch the sites I want to loadbalance on the UCP controller node and they get detected by interlock, I’m not able to access them.
I added vote.mycompany.local to /etc/hosts, but all I get when I open a browser to that address is the nginx homepage :frowning:
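To rule out DNS/browser issues, a direct check against the loadbalancer node with the Host header set might look like this (the loadbalancer node IP is taken from the docker info above; just a sketch of how I’d test it):

# hit the loadbalancer directly with the expected Host header
curl -H "Host: vote.mycompany.local" http://192.168.123.39/

Either way, the interlock logs do show the containers being picked up: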

interlock_1 | time="2016-04-26T07:57:05Z" level=debug msg="inspecting container: id=5de20f771e01e62c50a1bc3f33b201015d490ac6e1c0795474b1bdf40e10a523" ext=lb
interlock_1 | time="2016-04-26T07:57:05Z" level=debug msg="checking container labels: id=5de20f771e01e62c50a1bc3f33b201015d490ac6e1c0795474b1bdf40e10a523" ext=lb
interlock_1 | time="2016-04-26T07:57:05Z" level=debug msg="checking container ports: id=5de20f771e01e62c50a1bc3f33b201015d490ac6e1c0795474b1bdf40e10a523" ext=lb
interlock_1 | time="2016-04-26T07:57:05Z" level=debug msg="container is monitored; triggering reload: id=5de20f771e01e62c50a1bc3f33b201015d490ac6e1c0795474b1bdf40e10a523" ext=lb
interlock_1 | time="2016-04-26T07:57:05Z" level=debug msg="triggering reload" ext=lb
interlock_1 | time="2016-04-26T07:57:05Z" level=debug msg="event received: status=start id=5de20f771e01e62c50a1bc3f33b201015d490ac6e1c0795474b1bdf40e10a523 type=container action=start"
interlock_1 | time="2016-04-26T07:57:05Z" level=debug msg="notifying extension: lb"
interlock_1 | time="2016-04-26T07:57:05Z" level=debug msg="inspecting container: id=5de20f771e01e62c50a1bc3f33b201015d490ac6e1c0795474b1bdf40e10a523" ext=lb
interlock_1 | time="2016-04-26T07:57:05Z" level=debug msg="checking container labels: id=5de20f771e01e62c50a1bc3f33b201015d490ac6e1c0795474b1bdf40e10a523" ext=lb
interlock_1 | time="2016-04-26T07:57:05Z" level=debug msg="checking container ports: id=5de20f771e01e62c50a1bc3f33b201015d490ac6e1c0795474b1bdf40e10a523" ext=lb
interlock_1 | time="2016-04-26T07:57:05Z" level=debug msg="container is monitored; triggering reload: id=5de20f771e01e62c50a1bc3f33b201015d490ac6e1c0795474b1bdf40e10a523" ext=lb
interlock_1 | time="2016-04-26T07:57:05Z" level=debug msg="triggering reload" ext=lb
interlock_1 | time="2016-04-26T07:57:06Z" level=debug msg="reaping key: reload"
interlock_1 | time="2016-04-26T07:57:06Z" level=debug msg="triggering reload from cache" ext=lb
interlock_1 | time="2016-04-26T07:57:06Z" level=debug msg="checking to reload" ext=lb
interlock_1 | time="2016-04-26T07:57:06Z" level=debug msg="updating load balancers" ext=lb
interlock_1 | time="2016-04-26T07:57:06Z" level=debug msg="generating proxy config" ext=lb
interlock_1 | time="2016-04-26T07:57:06Z" level=debug msg="websocket endpoints: []" ext=nginx
interlock_1 | time="2016-04-26T07:57:06Z" level=debug msg="alias domains: []" ext=nginx
interlock_1 | time="2016-04-26T07:57:06Z" level=info msg="results.mycompany.local: upstream=0.0.0.0:32769" ext=nginx
interlock_1 | time="2016-04-26T07:57:06Z" level=debug msg="websocket endpoints: []" ext=nginx
interlock_1 | time="2016-04-26T07:57:06Z" level=debug msg="alias domains: []" ext=nginx
interlock_1 | time="2016-04-26T07:57:06Z" level=info msg="vote.mycompany.local: upstream=0.0.0.0:32768" ext=nginx
interlock_1 | time="2016-04-26T07:57:06Z" level=debug msg="proxy config path: /etc/nginx/nginx.conf" ext=lb
interlock_1 | time="2016-04-26T07:57:06Z" level=debug msg="saving proxy config" ext=lb
interlock_1 | time="2016-04-26T07:57:06Z" level=debug msg="signaling reload" ext=lb
interlock_1 | time="2016-04-26T07:57:06Z" level=debug msg="triggering proxy network cleanup" ext=lb
interlock_1 | time="2016-04-26T07:57:06Z" level=info msg="reload duration: 63.30ms" ext=lb
interlock_1 | time="2016-04-26T07:57:06Z" level=debug msg="checking to remove proxy containers from networks" ext=lb

The nginx.conf file on my loadbalancer looks like this; I can’t find the entries for results.mycompany.local
and vote.mycompany.local :frowning: (what I would expect to see is sketched after the config below)

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
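
For comparison, what I would expect interlock to add for a detected site is roughly something like the following (illustrative nginx syntax only, using the upstream from the log above; the exact layout is up to interlock’s own template):

# sketch of an interlock-generated entry for vote.mycompany.local (illustrative only)
upstream vote.mycompany.local {
    server 0.0.0.0:32768;
}

server {
    listen 80;
    server_name vote.mycompany.local;

    location / {
        proxy_pass http://vote.mycompany.local;
    }
}

Nothing like this appears anywhere in the config on the loadbalancer.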


(Evan) #5

FYI, this issue is being continued on the interlock repo here: https://github.com/ehazlett/interlock/issues/127


(Windwolf) #6

Hi, I have a question: interlock can’t create the nginx config for the nginx container. Please give me a hand, thank you.

Here is interlock’s config:

config.toml
root@swarm1:~# ls
config.toml
root@swarm1:~# cat config.toml
ListenAddr = ":8080"
DockerURL = "tcp://192.168.56.32:3376"

[[Extensions]]
Name = "nginx"
ConfigPath = "/etc/conf/nginx.conf"
PidPath = "/etc/conf/nginx.pid"
MaxConn = 1024
Port = 80

interlock

docker run \
    -P \
    -d \
    -ti \
    -v nginx:/etc/conf \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(pwd)/config.toml:/etc/config.toml \
    --name interlock \
    ehazlett/interlock:1.1.0 \
    -D run -c /etc/config.toml

nginx

docker run -ti -d \
    -p 80:80 \
    --label interlock.ext.name=nginx \
    --link=interlock:interlock \
    -v nginx:/etc/conf \
    --name nginx \
    nginx nginx -g "daemon off;" -c /etc/conf/nginx.conf