Concurrently run multiple containers

I am running Docker containers using the --cpuset-cpus option to limit CPU core usage. When I specify a range of CPU cores (e.g., --cpuset-cpus="0-9"), only a limited number of containers run concurrently. However, when I restrict it to a single CPU core (e.g., --cpuset-cpus="0"), more containers can run concurrently.

Question:
Why does specifying a range of CPU cores limit the number of concurrently running containers, and how can I configure Docker to use multiple CPU cores (e.g., 0-9) while allowing for more containers to run concurrently without getting blocked? How should I set up the system to run 100 containers concurrently on cores 0-9?

command used: docker run --cpuset-cpus="0-9"
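For reference, here is a minimal sketch of how the 100 containers could be launched (the image name `my-image` is a placeholder, and the script only prints the commands it would run — remove the leading `echo` to actually execute them):

```shell
#!/bin/sh
# Dry run: print the docker run command for 100 containers pinned to cores 0-9.
# "my-image" is a placeholder; replace it with the real image.
# Delete the leading "echo" to launch for real.
for i in $(seq 1 100); do
  echo docker run -d --name "job-$i" --cpuset-cpus="0-9" my-image
done
```

Note that `-d` detaches the client; without it, a sequential loop waits for each container to exit before starting the next, so detaching (or backgrounding each `docker run`) is what allows all 100 to be started at once.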

The --cpuset-cpus option only limits which CPU cores a container can use, not the number of containers. Do you get an error message when you run more containers? Why do you think you can't run more?

Hi, thank you so much for the quick reply.

I don’t encounter any error messages when I run more containers; all the containers can start and finish correctly. However, my issue is that they don’t run concurrently as expected. After launching all 100 containers, I can see through docker ps or pidof *** that only 5-8 containers are actually running at any given time, while the others haven’t started yet. Additionally, it seems that the containers start in batches—for example, 8 containers will start initially, and the system appears to wait for them to finish before starting the next batch.

My goal is to observe how Docker behaves when running 100 containers concurrently, ensuring that the CPU usage stays at 100%. I want the system to consistently maintain 100 running containers throughout my experiment.
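One way to verify this is to sample the number of running containers over time. In a real run that would be `docker ps -q --filter status=running | wc -l` (wrapped in `watch -n 1` or a sleep loop); the sketch below simulates the `docker ps -q` output with a here-document so it runs without a Docker daemon:

```shell
#!/bin/sh
# Real command: running=$(docker ps -q --filter status=running | wc -l)
# Simulated `docker ps -q` output (two fake container IDs) so this is self-contained:
running=$(wc -l <<'EOF'
d810adef1fca
a1b2c3d4e5f6
EOF
)
echo "running containers: $running"
```

If the count hovers around 5-8 instead of 100, that confirms the batching you describe.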

You could enable debug logs in the Docker daemon and check what it logs. It would show API calls. When it is waiting, you could check whether Docker says something about it. It could also be related to control groups and how the OS schedules processes. Did you choose the same CPUs for all containers?
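For anyone following along, debug logging is enabled by setting the `debug` key in the daemon configuration file (typically `/etc/docker/daemon.json` on Linux):

```json
{
  "debug": true
}
```

followed by restarting the daemon (e.g. `sudo systemctl restart docker`), or sending the `dockerd` process a SIGHUP, which reloads the configuration without restarting running containers.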

I didn't have time to test it, I just bookmarked your post, but I'm curious about this too.

I have enabled debug logs, but I’m having trouble interpreting them. I compared the logs generated when running with one core versus ten cores. To simplify, I’ll use the first seven lines from one execution as a reference:

level=debug msg="Calling HEAD /_ping"
level=debug msg="Calling POST /v1.46/containers/create"
level=debug msg="form data: {\"AttachStderr\":true,\"At...
level=debug msg="container mounted via layerStore:  /var/lib/docker/overlay2/6a701615d45800028107e3a0137e12e12327e2ca075f912d7b5b2b0850b792c9/merged" container=d810adef1fca29a05ec4ac77f633f0a6651be5f30a79e39cbad5693fd4931057
level=debug msg="Calling POST /v1.46/containers/d810ade...
level=debug msg="attach: stdout: begin"
level=debug msg="attach: stderr: begin"

What I noticed is that the first three lines appear together in both cases. However, the next line (level=debug msg="container mounted via layerStore: /var…") is printed in batches of 3-8 when using cores 0-9, while with just core 0, all 100 instances of this line are printed together. This line is also the first one that contains the container ID. After checking the source code, I found that this log is generated within the mount function.
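To make the batching visible without reading the raw log by hand, the "container mounted via layerStore" lines can be counted per timestamp. A minimal sketch, assuming the log lines carry a `time="..."` field as dockerd's default logfmt output does (the here-document stands in for the real daemon log, e.g. `journalctl -u docker.service` on systemd hosts; the timestamps and container IDs below are made up):

```shell
#!/bin/sh
# Count "container mounted via layerStore" lines per timestamp to expose batches.
grep 'container mounted via layerStore' <<'EOF' | sed 's/.*time="\([^"]*\)".*/\1/' | sort | uniq -c
time="2024-01-01T12:00:00" level=debug msg="container mounted via layerStore: ..." container=aaa
time="2024-01-01T12:00:00" level=debug msg="container mounted via layerStore: ..." container=bbb
time="2024-01-01T12:00:05" level=debug msg="container mounted via layerStore: ..." container=ccc
EOF
```

A run where the mounts are spread over many timestamps in small groups would match the batching you see; with a single core, all 100 should cluster on one or two timestamps.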

Try changing the number of CPUs and see if the batch sizes change.
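A sketch of such an experiment, again as a dry run that only prints the commands (`my-image` is a placeholder; remove the `echo` to execute):

```shell
#!/bin/sh
# For each cpuset width, print the commands that would launch 10 containers,
# so batch sizes can be compared across runs.
for cpus in "0" "0-1" "0-3" "0-9"; do
  echo "# --- cpuset-cpus=$cpus ---"
  for i in $(seq 1 10); do
    echo docker run -d --cpuset-cpus="$cpus" my-image
  done
done
```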