Log retrieval problem with local log driver - how to troubleshoot?

I’ve got an application running under Docker (more info below) on an aarch64 Ubuntu 20.04 system, and recently some issues with the logs have come up.

I can retrieve logs from the container start (currently a few days ago, 2024-08-29) up until 2024-08-31 using docker logs -t <container-id>, but at a point on that date the output stops with this error:

2024-08-31T11:28:56.390605163-07:00 DEBUG:root:PIN states: 
2024-08-31T11:28:56.390655787-07:00  Input PIN                                 Value
Error grabbing logs: error unmarshalling log entry (size=5386): proto: LogEntry: illegal tag 0 (wire type 6)

This container is still running, and I can obtain recent logs using the -n <lines> flag:

# docker logs -n 100000 -t 26159da5d173c63ec670f6e2ccf3b0ab95e7f6ac5942ab405797d9d8fdb27a60 |head 
2024-09-03T04:24:26.567111463-07:00 DEBUG:urllib3.connectionpool:https://api.olyns.com:443 "POST /v2/api/collector/welcome HTTP/1.1" 200 257

…but trying to retrieve using the --since and --until flags (which usually works) gives me a similar error:

# docker logs --since 5m -t 26159da5d173c63ec670f6e2ccf3b0ab95e7f6ac5942ab405797d9d8fdb27a60 
Error grabbing logs: log message is too large (108229732 > 1000000)

In the container directory (/var/lib/docker/containers/<id>/local-logs) there are files:

total 53040
-rw-r----- 1 root root 21403176 Sep  3 13:40 container.log
-rw-r----- 1 root root 10987441 Sep  2 17:46 container.log.1.gz
-rw-r----- 1 root root 10828104 Sep  1 19:01 container.log.2.gz
-rw-r----- 1 root root 11081548 Aug 31 23:08 container.log.3.gz

… and they seem to have data, but they’re in a binary format (so presumably recoverable, but not convenient to read directly).
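For what it’s worth, the first four bytes of container.log look like a big-endian length word. From skimming the moby source I believe the local driver length-prefixes each record, though that framing is a guess on my part. A trivial check:

```python
import struct

def first_frame_size(path):
    """Return the leading 4-byte big-endian size word of the first record.

    Assumes the local driver's length-prefix framing (unverified guess);
    on a healthy log file this should be a plausibly small number.
    """
    with open(path, "rb") as f:
        (size,) = struct.unpack(">I", f.read(4))
    return size
```

On my file this returns a value in the low hundreds, which is consistent with one log line per record.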

Info about this system:

$ uname -a
Linux 27A00024 5.10.120-rt70-tegra #1 SMP PREEMPT RT Tue Sep 19 19:50:25 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux

There is plenty of disk space (24 GB available on /, which is where /var lives), and the Docker version is:
Docker version 24.0.7, build 24.0.7-0ubuntu2~20.04.1

In docker-compose.yml, this container is set up with the local logging driver:

    logging:
      driver: "local"
      options:
        max-size: "100m"
        max-file: "5"

…Any ideas?

So it looks like the Docker tools can’t decode something in the logs. There’s definitely a “bad spot”: the tools can decode everything up to it and everything after it, just not across it.

Now I have a local-driver-format log file with the bad spot in it - are there any forensic tools for decoding it? I need to find a particular timestamp, which is going to be hard unless I can decode the format.
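In case it helps (or in case someone can spot a flaw), here is a rough decoder sketch I put together from reading the moby source. My understanding - an assumption, not verified against every Docker version - is that the local driver writes each record as a protobuf-encoded LogEntry framed by a 4-byte big-endian size word before and after the payload; the repeated trailing size word lets us validate a frame and resync past the bad spot one byte at a time. That would also explain the "log message is too large (108229732 > 1000000)" error above: garbage bytes misread as a frame length. The field numbers (2 = time_nano, 3 = line) come from moby’s entry.proto.

```python
import gzip
import struct
import sys
from datetime import datetime, timezone

def read_varint(buf, pos):
    """Decode a protobuf varint starting at pos; return (value, new_pos)."""
    result = shift = 0
    while True:
        b = buf[pos]
        result |= (b & 0x7F) << shift
        pos += 1
        if not b & 0x80:
            return result, pos
        shift += 7

def parse_entry(msg):
    """Minimal protobuf parse of a LogEntry; returns (time_nano, line).

    Field numbers assumed from moby's entry.proto: 2 = time_nano (varint),
    3 = line (bytes). Raises on wire types LogEntry doesn't use, which is
    what lets iter_entries() reject garbage frames.
    """
    time_nano, line = None, b""
    pos = 0
    while pos < len(msg):
        key, pos = read_varint(msg, pos)
        field, wire = key >> 3, key & 7
        if wire == 0:                      # varint
            val, pos = read_varint(msg, pos)
            if field == 2:
                time_nano = val
        elif wire == 2:                    # length-delimited
            length, pos = read_varint(msg, pos)
            if field == 3:
                line = msg[pos:pos + length]
            pos += length
        else:                              # not used by LogEntry: bad frame
            raise ValueError(f"unexpected wire type {wire}")
    return time_nano, line

def iter_entries(data):
    """Yield (time_nano, line) for each well-framed entry, resyncing past
    corruption by scanning forward one byte at a time."""
    pos = 0
    while pos + 4 <= len(data):
        (size,) = struct.unpack(">I", data[pos:pos + 4])
        end = pos + 4 + size
        # a valid frame repeats its size word after the payload
        if size == 0 or end + 4 > len(data) or \
                struct.unpack(">I", data[end:end + 4])[0] != size:
            pos += 1
            continue
        try:
            yield parse_entry(data[pos + 4:end])
        except (ValueError, IndexError):
            pos += 1
            continue
        pos = end + 4

if __name__ == "__main__" and len(sys.argv) > 1:
    path = sys.argv[1]
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rb") as f:
        data = f.read()
    for time_nano, line in iter_entries(data):
        if time_nano is not None:
            ts = datetime.fromtimestamp(time_nano / 1e9, timezone.utc)
            print(ts.isoformat(), line.decode("utf-8", "replace"))
```

Running it over container.log.3.gz should print decoded timestamps and lines while skipping anything it can’t frame, so the gap between the last good entry before the bad spot and the first good one after it brackets the corruption.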