I am currently evaluating the new Docker Desktop feature to run LLM models with Docker. Is there any way to diagnose / monitor the execution of an LLM? Currently, after calling an LLM, quite some time passes (due to my PC) until any output appears. It would be great to have some means of probing the current state the LLM is in / what it is doing, e.g. tool calls, MCP actions, …
Hello rimelek,
thank you very much for your reply!
I have already looked at the logs section of the models in Docker Desktop. Actually, I am looking for more detailed updates / more insight into the interactions currently going on.
Thank you and best regards,
Uli
I’m not aware of more detailed logs. There is a --debug flag for docker model run, but I didn’t notice any difference.
If you know anything in the OpenAI-compatible API reference that would help you, you can enable “host-side TCP support” in Docker Desktop’s settings on the “Beta features” tab, but I’m sure you know that, as it is also in the mentioned documentation.
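Not a full monitor, but one way to at least see progress while waiting: with host-side TCP support enabled, the runner exposes an OpenAI-compatible chat completions endpoint on localhost, and requesting `"stream": true` returns the response as server-sent events, token by token, instead of one long silent wait. A minimal sketch of parsing such a stream follows; the sample payload below is made up for illustration, and the exact endpoint and port are assumptions you should check against your Docker Desktop settings and the documentation:

```python
import json


def parse_sse_chunks(raw: str):
    """Yield the content deltas from an OpenAI-style SSE response body."""
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank lines and comments between events
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # the server signals the end of the stream this way
        event = json.loads(payload)
        delta = event["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]


# Simulated response body, shaped like what a streaming request
# (e.g. POST to http://localhost:12434/engines/v1/chat/completions
# with "stream": true -- endpoint/port are assumptions) would return:
sample = "\n".join([
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
])

print("".join(parse_sse_chunks(sample)))  # prints: Hello
```

In a real client you would read the HTTP response line by line and print each delta as it arrives, which at least tells you the model is generating rather than stuck.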
If you have a specific feature you would like to see in the Model Runner, you can ask for it on the Roadmap.