How to forward Docker container logs to ELK easily?

I would like to know the easiest way to forward my Docker container logs to an ELK server. The solutions I have found searching the internet haven't worked at all so far.

Basically I have a Docker image that I run using docker-compose. The container does not log anything locally (it is composed of different services, but none of them is Logstash or similar), yet I can see its output through `docker logs -tf containerName` or `docker-compose logs`. Since I bring the containers up with Compose, I cannot make use (or at least I don't know how) of the `--log-driver` option of `docker run`.

So I was wondering if someone could enlighten me a bit on how to forward that logging to, say, an ELK container that I could download.

Thanks in advance,

Regards

Run a Logstash instance somewhere, set up to listen on the GELF port and output to Elasticsearch. Set your log driver to `gelf` with the destination pointing at that instance. Note that `docker logs` will no longer work.
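
For example, a Logstash pipeline along these lines will do it. The `gelf` input listens on UDP 12201 by default; the Elasticsearch host below is a placeholder:

```
input {
    gelf {
        port => 12201   # the default GELF UDP port, shown explicitly
    }
}

output {
    elasticsearch {
        hosts => ["localhost:9200"]   # placeholder; point at your Elasticsearch node
    }
}
```

And since the question brings up Compose: you don't need the `--log-driver` flag there, because the same driver can be set per service with the `logging` key. A minimal sketch, with placeholder service, image, and host names:

```
version: "2"
services:
  app:
    image: my-app-image                                    # placeholder
    logging:
      driver: gelf
      options:
        gelf-address: "udp://logstash.example.com:12201"   # placeholder host
        tag: "app"
```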

On CentOS 7, when I try to use the gelf log-driver like so:

```
docker run -d --name turd --log-driver=gelf --log-opt gelf-address=udp://foo.example.com:12201 --log-opt tag="test" alpine /bin/sh -c "while true; do echo My Message \$RANDOM; sleep 5; done;"
```

running `docker logs turd` just returns:

"logs" command is supported only for "json-file" and "journald" logging drivers (got: gelf)

`docker inspect turd` returns:

```
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "gelf",
                "Config": {
                    "gelf-address": "udp://foo.example.com:12201",
                    "tag": "test"
                }
            },
```
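
To confirm which log driver a container actually ended up with, you can also query just that field through the same `HostConfig.LogConfig` path shown above:

```
docker inspect --format '{{.HostConfig.LogConfig.Type}}' turd
```

For this container it prints `gelf`, so the driver itself is configured correctly.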

I have Logstash configured to listen:

```
input {
    tcp {
        port => 10500
    }
    gelf {
        # no port configured, so this uses the GELF default, UDP 12201
    }
}

output {
    elasticsearch {
        hosts => ["foo.example.com:9200"]
        user  => 'elastic'
        password => 'changeme'
    }
}
```
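
As a quick connectivity smoke test between the Docker host and Logstash, you can hand-deliver an uncompressed GELF message over UDP with netcat (assuming a netcat build that supports `-u`; GELF also allows compressed payloads, but plain JSON is easiest to type):

```
echo '{"version":"1.1","host":"smoke-test","short_message":"hello gelf"}' | nc -u -w1 foo.example.com 12201
```

If that message reaches Elasticsearch but your container logs don't, the problem is on the Docker side rather than in the Logstash pipeline.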

`docker info` returns:

```
[root]# docker info
Containers: 12
 Running: 12
 Paused: 0
 Stopped: 0
Images: 34
Server Version: 1.13.0
Storage Driver: overlay
 Backing Filesystem: xfs
 Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: active
 NodeID: 8ld3vo1wypixghhzxx7x53fd2
 Is Manager: true
 ClusterID: c74e0jm5efzrpt45tagk9bjmz
 Managers: 1
 Nodes: 4
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 10.205.45.208
 Manager Addresses:
  10.205.45.208:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 2f7393a47307a16f8cee44a37b262e8b81021e3e
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-514.2.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.64 GiB
Name: cocreate-centos7
ID: KJVX:RIKV:EDJY:PGKQ:I7BR:GYF3:HQCD:X6DF:ULIL:IOJK:XPNL:LD24
Docker Root Dir: /docker/var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
```

I'm stuck.  Is this a configuration issue or a CentOS/Docker issue?
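
One way to narrow it down is to check whether anything is reaching Elasticsearch at all, reusing the credentials from the Logstash output section above:

```
curl -u elastic:changeme 'http://foo.example.com:9200/_cat/indices?v'
```

If the GELF events are arriving, a `logstash-*` index should appear there and its document count should grow as the container writes messages.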

@nemonik The logs should be getting sent to the GELF endpoint. Like @jhmartin said in his answer, `docker logs` won't work anymore, since it depends on the json-file log driver. Check the GELF receiver (Logstash) to see if the messages are showing up. If they're not, it's likely a configuration issue; I also recall there being a bug around DNS resolution in the gelf driver in recent versions.

Thanks. I get what you’re saying.