Do Docker container processes share the host scheduling queue with host processes?

Hi,

I’m working with Docker containers that run various applications, and I have the same setup without any containers for comparison.

Now I am measuring the time taken for a single flow to complete. A flow starts at one application [ingress container] and ends at another [egress container], with N intermediary applications [containers] in between.

Strangely, what I observe is that the containerized flow takes almost the same time as the non-containerized setup, and sometimes even less. I expected some overhead from containerization.

One reason I am considering is that each (Docker) container receives its own share of CPU resources, so the processes inside the containers do not compete for CPU with other host processes.

On the other hand, when I run the same pipeline without containerization, these applications compete for CPU with everything else on the host (and yes, there are other processes running there), which would explain why that setup takes as much or more time to complete.
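
One way I thought of checking this assumption is to read the cgroup CPU settings of a container process from the host. Here is a minimal sketch, assuming Linux with cgroup v2 (on cgroup v1 the files would be `cpu.cfs_quota_us` / `cpu.cfs_period_us` instead); the PID is hypothetical:

```python
# Minimal sketch: check whether a container process's cgroup caps its CPU.
# Assumes Linux with cgroup v2. The PID below is hypothetical; real ones
# can be found with `docker top <container>`.
from pathlib import Path

def cpu_limit_for_pid(pid: int) -> str:
    # /proc/<pid>/cgroup names the cgroup the process belongs to,
    # e.g. "0::/system.slice/docker-<id>.scope"
    cgroup_path = Path(f"/proc/{pid}/cgroup").read_text().strip().split("::")[-1]
    cpu_max = Path("/sys/fs/cgroup") / cgroup_path.lstrip("/") / "cpu.max"
    # "max 100000" means no quota: the process competes in the host's
    # CFS run queue like any other task.
    return cpu_max.read_text().strip()

print(cpu_limit_for_pid(12345))  # hypothetical container PID
```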

Is it because (as I suspect) container processes are not scheduled in the same queue as host processes and get dedicated CPU time?
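
Related to that, since container PIDs are visible from the host, I could also ask the kernel which scheduling policy such a process runs under; a small sketch, again with a hypothetical PID:

```python
# Sketch: from the host, container processes show up as normal PIDs,
# so the kernel can report their scheduling policy directly.
import os

pid = 12345  # hypothetical PID of a process inside a container
policy = os.sched_getscheduler(pid)
print({os.SCHED_OTHER: "SCHED_OTHER (default CFS)",
       os.SCHED_FIFO: "SCHED_FIFO",
       os.SCHED_RR: "SCHED_RR"}.get(policy, policy))
```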

Or could this be due to something in how I am calculating the execution time?
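
For reference, here is a minimal sketch of how the timing could be taken robustly, using a monotonic clock so clock adjustments cannot skew the numbers; `run_flow()` is a placeholder for whatever triggers one ingress-to-egress flow:

```python
# Minimal sketch of timing one flow over repeated runs.
# run_flow() is a placeholder for the real ingress-to-egress trigger.
import statistics
import time

def run_flow() -> None:
    time.sleep(0.01)  # placeholder for the real flow

def time_flow(repeats: int = 50) -> None:
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()  # monotonic, unaffected by NTP steps
        run_flow()
        samples.append(time.perf_counter() - start)
    print(f"median {statistics.median(samples)*1000:.2f} ms, "
          f"min {min(samples)*1000:.2f} ms over {repeats} runs")

time_flow()
```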

Thank you,
Shabir