Hello,
I’m trying to deploy a YOLOv11n model on a Jetson Nano. I’ve tested the same deployment both natively (without Docker) and inside a container, and I’m seeing a significant performance drop in Docker: inference takes about 70 ms natively but around 150 ms in the container.
I’m mounting a directory into the container to save code and results, and I’m starting the container with the following options so it can access the GPU:
--runtime=nvidia --shm-size=2g --ipc=host
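For context, the full invocation looks roughly like this (the image name, mount path, and entry script below are placeholders rather than my exact setup):

docker run -it --runtime=nvidia --shm-size=2g --ipc=host \
    -v /path/to/project:/workspace \
    my-jetson-image:latest \
    python3 /workspace/detect.py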
Has anyone experienced similar performance issues with Docker on Jetson devices? Is this a general limitation of Docker for GPU-based applications, or is it something specific to my setup?
Any insights or advice would be greatly appreciated!