
Limit docker logging to avoid costs

Hi,
I am very new to Docker, so my question might appear silly.
We are running Docker on OpenShift and consuming the container logs in Splunk via Fluentd.
Things work well, but recently the Splunk team has raised concerns about the volume of logs being ingested per day, which has a cost impact on our purchased per-day Splunk license.

Now, I have seen multiple threads on this, as well as the documentation here - https://docs.docker.com/config/containers/logging/configure/ - which describes the configuration settings below:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "labels": "production_status",
    "env": "os,customer"
  }
}
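(For context, my understanding is that this block goes in the daemon's /etc/docker/daemon.json and applies to newly created containers; the same limits can also be set per container at run time. A minimal sketch, with myapp:latest as a placeholder image name:)

docker run -d \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  myapp:latest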

I have 3 questions on how to cut down unnecessary / low-information-value logs -
1- If we reduce the max size and max files, what actually happens? For example, the config above is equivalent to 3 rotated files of 10 MB each; what happens when a container logs more than that, say 1 GB? Does logging stop after 30 MB?
2- How does the trimming happen? Will applying the max-file/max-size limits remove any useful information?
3- Is it possible to leave the Docker logs as they are and limit log ingestion in Splunk? Can someone give some pointers on strings / patterns I can search for in the available Splunk logs, so I can suggest which feeds the Splunk team should NOT ingest from the container logging? Primarily, we are interested in the application-level errors/warnings/UIDs etc. and not so much in the OS/network logs.

Once again, I apologize if the post seems very trivial; I am very new to Docker and am hazy on this.

These answers are based on assumptions; I don't know definitively:

  1. It should act just like logrotate. This only affects what you get when you run "docker logs".
  2. If you exceed the max size and file count, I imagine it will just wipe out the oldest file and start filling a new one.
  3. I'm guessing you have a Splunk forwarder. There are two things to consider here. First, if the container is functioning properly, there should be almost no output in docker logs; you should probably fix the issues that are spamming the logs. Second, what are you forwarding? Most likely docker logs isn't what you want to forward - you probably want the logs of whatever app you have containerized. One way to do this is to create a volume mount and then point the Splunk forwarder at the mount (see the sketch after this list).
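A minimal sketch of that last suggestion, assuming the app writes its own log files to a directory inside the container and a Splunk universal forwarder runs on the host (the paths, image name, index, and sourcetype are placeholders, not from the original post):

docker run -d \
  -v /var/log/myapp:/app/logs \
  myapp:latest

# inputs.conf on the host's Splunk forwarder
[monitor:///var/log/myapp]
index = myapp
sourcetype = myapp_logs

This way the json-file driver limits only govern what "docker logs" keeps locally, while Splunk ingests exactly the application log files you choose to monitor.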

Hi dcaldarola,
Thanks for the reply. So if this works like a normal logrotate, then I won't be saving any costs - the forwarder will send the logs to Splunk on an almost real-time basis.
You are correct about the application logs. It is not an issue of spamming; the OpenShift hub is big in size, and we are getting logs like this (text whited out for confidentiality purposes):

[screenshot of the log entries omitted]

If you look at the message field here, there is a message, but it really means nothing. We would like to stop that kind of logging.
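One way to act on that (an assumption about the setup: event-level filtering has to happen on a Splunk heavy forwarder or indexer, since universal forwarders do not parse events) is to route events matching a low-value pattern to Splunk's nullQueue via props.conf and transforms.conf. Events sent to the nullQueue are discarded before indexing, so they do not count against the daily license. The sourcetype and regex below are placeholders; the real pattern would come from the message field you want to drop:

# props.conf -- "openshift:container" stands in for your actual sourcetype
[openshift:container]
TRANSFORMS-drop_noise = drop_noise

# transforms.conf -- the regex is a placeholder for the noisy message text
[drop_noise]
REGEX = pattern-matching-the-low-value-messages
DEST_KEY = queue
FORMAT = nullQueue

Alternatively, since the logs pass through Fluentd first, a Fluentd grep/exclude filter upstream would drop the same events before they are ever forwarded.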