Good Day All,
Since enabling the fluentd logging driver, we have noticed that log events are truncated at 16385 characters. Our container logs JSON-formatted strings to stdout, and we use the fluentd logging driver to forward each event to a fluentd daemon running on the Docker host.
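For reference, our setup looks roughly like this (the fluentd address, tag, and image name are illustrative, not our exact values):

```shell
# Run the container with the fluentd logging driver, pointing at a fluentd
# daemon on the Docker host (24224 is fluentd's default forward port):
docker run --rm \
  --log-driver=fluentd \
  --log-opt fluentd-address=127.0.0.1:24224 \
  --log-opt tag="docker.{{.Name}}" \
  myapp:latest
```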
I ran a tcpdump on the Docker host and observed that the message leaving the container and arriving in fluentd is a valid JSON-formatted log event, but the nested "log" field in the JSON object is truncated, with the rest of the original payload from the container arriving in subsequent log events. This leads me to believe the logging driver is doing the truncating, but I haven't found anything definitive on this.
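In case anyone wants to reproduce the capture, this is roughly the tcpdump invocation I used (assuming fluentd is listening on its default forward port, 24224):

```shell
# Capture fluentd forward traffic on the Docker host and print payloads as ASCII
sudo tcpdump -i any -A -s 0 'tcp port 24224'
```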
I have also checked the MTUs on the container and the container host and don't see any limit imposed there.
This is really bad for our use case, and I am having trouble finding where this size limit is imposed. 16385 is such a conspicuous number that I suspect we're exhausting some buffer between the container and the logging driver.
I am no Go expert, but digging through the following source files for the logging driver, I don't see anything obvious that would be chunking the events into multiple log events:
docker-ce/components/engine/daemon/logger/logger.go
docker-ce/components/engine/daemon/logger/fluentd/fluentd.go
Does anyone have any advice, or has anyone seen this problem? My next steps are to try a few different logging drivers and see whether this problem manifests with others or is specific to fluentd.