Docker is not using all CPUs


I tried to switch from Hyper-V to process isolation for our CI/CD with Windows containers, but noticed that with process isolation not all CPUs are used by Docker, which slows down our pipeline. Has anyone encountered a similar problem?

Here’s how I did the test. I downloaded the cpuburn utility for Windows and then ran this command:

docker run --rm -it --isolation=process -v C:\cpuburn:C:\cpuburn &lt;image from the Microsoft Artifact Registry&gt;

Then from the command line inside the container I simply ran the cpuburn exe.


As you can see in the picture, cpuburn correctly recognizes the 24-core CPU. For a very short time it uses all 24 CPUs, but then it scales back down to 8 cores. Does anybody have any idea why this is happening?
I tried passing the extra argument --cpus=24 but still get the same result.
If I use Hyper-V isolation, all CPUs are used correctly.
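For anyone who wants to reproduce the test without hunting down the cpuburn binary, here is a rough stand-in I sketched (the script and its numbers are mine, not part of the original setup). It uses only the Python standard library and starts one busy-looping worker per logical CPU, so you can watch core utilization in Task Manager both inside and outside the container:

```python
# Rough stand-in for cpuburn (a sketch, not the original tool): start one
# busy-looping worker per logical CPU and watch utilization in Task Manager.
import multiprocessing as mp
import os
import time

def burn(seconds):
    # Busy-loop for `seconds` to fully load one core; returns iteration count.
    end = time.monotonic() + seconds
    n = 0
    while time.monotonic() < end:
        n += 1
    return n

if __name__ == "__main__":
    cpus = os.cpu_count()
    print(f"os.cpu_count() reports {cpus} logical CPUs")
    # One worker per CPU for a couple of seconds -- every core should peg to 100%.
    with mp.Pool(cpus) as pool:
        pool.map(burn, [2] * cpus)
```

If this also drops back to 8 busy cores under process isolation but not under Hyper-V, the cpuburn binary itself can be ruled out as the cause.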

I can’t really speak to Windows, but I wouldn’t be surprised if this were the result of some kind of Windows optimization. Process isolation means there is no virtual machine, so processes should run as they would without containers, except that they can’t see other processes. If you run the same commands without containers and see different behaviour, then I am wrong and it is indeed related to Docker, though I have no idea why that would happen. When Docker limits the usable CPU resources, it limits them right from the start.
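One concrete way to make that inside-vs-outside comparison, offered as a suggestion rather than anything from this thread: ask Windows how many logical processors the process can actually see, and run the same check on the host and in the container. GetActiveProcessorGroupCount and GetActiveProcessorCount are real kernel32 APIs; the platform guard below is only there so the sketch also runs on non-Windows systems.

```python
# Sketch: report how many logical processors this process can see on Windows.
# Run it on the host and inside the container and compare the numbers.
import ctypes
import platform

ALL_PROCESSOR_GROUPS = 0xFFFF  # documented Win32 constant

def report_processors():
    if platform.system() != "Windows":
        # Windows-only APIs; elsewhere just fall back to os.cpu_count().
        import os
        return os.cpu_count()
    kernel32 = ctypes.WinDLL("kernel32")
    groups = kernel32.GetActiveProcessorGroupCount()
    total = kernel32.GetActiveProcessorCount(ALL_PROCESSOR_GROUPS)
    for g in range(groups):
        print(f"group {g}: {kernel32.GetActiveProcessorCount(g)} logical processors")
    print(f"{groups} group(s), {total} logical processors total")
    return total

if __name__ == "__main__":
    report_processors()
```

If the container reports fewer logical processors than the host, the limit is being imposed before your workload ever starts; if the numbers match, the scale-down to 8 cores is happening at scheduling time.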

If you need a better answer, let’s hope someone more experienced with Windows finds this question.