I have a small home server where I run about 8 containers. I don’t use docker-compose; I just start every container with the docker run command. So far so good; I can easily fetch (or tail) the logs from inside a container by doing this:
docker logs -f <container-name>
But now I want to go one step further: I want all the containers’ logs to be available on the host system, and I can’t figure out how to do this. For example, take this docker run command:
docker run --name rogier alpine ping 127.0.0.1. The typical output of this container is this:
64 bytes from 127.0.0.1: seq=0 ttl=64 time=0.053 ms
64 bytes from 127.0.0.1: seq=1 ttl=64 time=0.133 ms
64 bytes from 127.0.0.1: seq=2 ttl=64 time=0.132 ms
Ideally, what I want now is a file on the host (/var/log/containers/rogier) where all output from inside the container is stored. Is something like this possible?
With 8 containers, why not use docker-compose?
If all your containers are designed well and log to stdout, you could simply redirect that output to log files.
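For example, a minimal sketch of that redirect (the container name and target path are just placeholders; adjust to taste):

```shell
# Follow a container's stdout/stderr and append it to a file on the host.
# "rogier" and /var/log/containers are example names from the question.
mkdir -p /var/log/containers
docker logs -f rogier >> /var/log/containers/rogier.log 2>&1 &
```

Note that this leaves one background docker logs process per container, which you would have to restart whenever the container (or the host) restarts.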
Well, I don’t use compose because everything already works fine and, for some reason, I don’t like it. As for redirecting stdout: this is indeed possible, but it feels a bit “stupid” to me. Isn’t there a better solution?
I’m reading about log drivers; now what if I do this:
docker run --rm=true --log-driver=syslog --log-opt syslog-address=tcp://127.0.0.1:5000 --name rogier alpine ping 127.0.0.1
Will this do the same? As in: sending all of the container’s logs (which usually go to stdout) to the remote syslog server (running on the host on :5000)?
You could write the logs to a file located in a mounted volume. Make sure those files don’t grow too large; rotate them, for example every 100MB. If you have a high rate of log output, this might not be a good solution, especially with 8 containers running. You should clarify more about the nature of these logs and what you want to do with them.
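If you go the file-in-a-volume route, rotation can be handled on the host with logrotate. A sketch, assuming the files live under /var/log/containers:

```
/var/log/containers/*.log {
    size 100M
    rotate 5
    compress
    missingok
    notifempty
    copytruncate
}
```

copytruncate rotates the file in place, so the process inside the container can keep writing to the same open file descriptor; the trade-off is that a few lines written during rotation can be lost.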
Actually, you can set the default logging driver in the daemon.json:
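A sketch (the values are examples; on most Linux installs the file lives at /etc/docker/daemon.json):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
```

With the json-file driver, each container’s output already ends up on the host, under /var/lib/docker/containers/<container-id>/<container-id>-json.log, and the max-size/max-file options give you rotation for free. You need to restart the Docker daemon for the change to take effect, and it only applies to containers created after the restart.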