I have a container running closed-source software (no source code available) that listens for streaming data on a port and prints it to stdout.
I have another container that needs to consume that stdout stream and do some processing on the data. The processing must happen in real time: I don't want to start at the beginning of a log file, I want to process exactly what is being printed at that moment. For now the consumer is a plain Java application running on the host VM, but it will eventually live in a Docker container as well.
I have been using `docker logs -f` to feed the application, but since there is a ton of data it doesn't take long for the log file to eat up all my disk space. I tried enabling log rotation with the `--log-opt` options (`max-size` and `max-file`). With rotation enabled, though, `docker logs -f` no longer keeps streaming stdout continuously; it appears to stop after emitting a single log file's worth of data.
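For reference, this is roughly how I'm starting the container and following its output (container and image names are placeholders):

```shell
# json-file driver with rotation: keep at most 3 files of 10 MB each.
docker run -d \
  --name streamer \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  my-stream-image

# Follow the log stream. With rotation enabled, this is the command
# that seems to stop after a single file's worth of output.
docker logs -f streamer
```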
I think I could use the `--since` flag to filter out old data, which would give me my "real-time" stream. (I haven't tried that yet.) But the lack of continuous output with rotation enabled seems to be my biggest roadblock.
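If `--since` works the way I hope, I'd expect to wire it up something like this (names are placeholders, and I haven't verified that `now` is an accepted value, so a zero-length relative duration like `0s` might be needed instead):

```shell
# --since takes a timestamp or a relative duration; a zero-length
# duration should mean "only output produced from now on".
# Pipe the live stream straight into the Java processor.
docker logs --since 0s -f streamer | java -jar stream-processor.jar
```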
Any ideas how to approach this?