Logging drivers with docker swarm mode 1.12

I set up an ELK (Elasticsearch, Logstash & Kibana) stack for centralized logging on a Docker swarm mode cluster like this:

Create an overlay network:

docker network create -d overlay logging-network

Run Elasticsearch:

docker service create --network logging-network --mount type=bind,source=/some/nfs/share/data,target=/usr/share/elasticsearch/data --name elasticsearch elasticsearch
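As a quick sanity check (not part of the setup itself), you can verify that the task started and see which node it landed on:

docker service ps elasticsearch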

Run Logstash:

docker service create --name=logstash --network=logging-network -p 30000:12201/udp --mount type=bind,source=/some/nfs/share/logstash/logstash.conf,target=/config-dir/logstash.conf logstash logstash -f /config-dir/logstash.conf
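A minimal logstash.conf for this setup could look something like the sketch below (the actual file isn't shown here; it just assumes a GELF input on UDP 12201 and an Elasticsearch output pointing at the elasticsearch service on the overlay network):

# sketch only: accept GELF messages on UDP 12201
input {
  gelf {
    port => 12201
  }
}

# forward everything to the elasticsearch service over the overlay network
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}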

Testing the logging mechanism like this:

docker service create --name nginx_demo --log-driver=gelf --log-opt gelf-address=udp://HOST_OR_IP_OF_SOME_CLUSTER_NODE:30000 -p 31000:80 nginx

… works as expected, as all nodes forward traffic on port 30000 to the right node. But trying to log through my own ‘logging-network’ directly like this:

docker service create --name nginx_demo --network logging-network --log-driver=gelf --log-opt gelf-address=udp://logstash:12201 -p 31000:80 nginx

… doesn’t work. Is this somehow possible?


Hm, for the gelf-address I think you'd want to stick with udp://HOST_OR_IP_OF_SOME_CLUSTER_NODE:30000, i.e. the published port: as far as I know, the gelf log driver is handled by the Docker daemon on the host rather than inside the container, so it isn't attached to your logging-network overlay and can't resolve the logstash service name.

A similar example (Fluentd -> Elasticsearch -> Kibana) that uses Fluentd instead of Logstash and works fine on Docker swarm might be useful for your reference. It also uses a docker-compose file, which is an easier way of doing it.
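For anyone wanting to try that route, a minimal docker-compose sketch could look roughly like the following (the image names, ports and ELASTICSEARCH_URL setting are assumptions; note that the stock fluent/fluentd image would still need the fluent-plugin-elasticsearch plugin added via a small custom image):

version: "2"
services:
  fluentd:
    # replace with a custom image that adds fluent-plugin-elasticsearch
    image: fluent/fluentd
    ports:
      - "24224:24224"
      - "24224:24224/udp"
  elasticsearch:
    image: elasticsearch
  kibana:
    image: kibana
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - "5601:5601"

Services would then log to it with --log-driver=fluentd --log-opt fluentd-address=HOST_OR_IP:24224.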