How can I force Docker Desktop (Windows containers) to use all my CPUs globally?

I have a stock-standard Docker Desktop setup on an Azure VM.

It is being used as a build server. I noticed that builds take 10 times as long as on my local PC, and after discussing it with the developer we determined that the CPU limits imposed by Docker are the cause.

When I run the command wmic cpu get NumberOfLogicalProcessors I get a value of 2, but the VM has 4 processors.
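For reference, the same check can be done in a few ways; these are all standard Windows commands, nothing specific to this setup (run them inside the container to see what the container is given, and on the host to see what the VM actually has):

```shell
# Logical processor count via WMI:
wmic cpu get NumberOfLogicalProcessors

# PowerShell equivalent (wmic is deprecated on newer Windows builds):
powershell -Command "(Get-CimInstance Win32_Processor | Measure-Object -Sum NumberOfLogicalProcessors).Sum"

# Quick check via the standard environment variable:
echo %NUMBER_OF_PROCESSORS%
```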

I have seen some people suggest that I can run the command docker run --cpu-count 4 imagename

But because this VM scales the number of CPUs based on usage, and the commands are triggered by an automated build system, this isn’t feasible.

Is there a way I can configure Docker to just use the maximum number of CPUs at all times?

Docker Desktop is a developer tool, intended for developer desktop PCs. It uses a VM to run most containers, as Docker was developed as a Linux tool.

When running a container in Docker Desktop on Windows, there are two possible constraints: the container itself can be resource-limited (it is not by default), and the VM is (by default).

Usually Docker Desktop on Windows uses WSL2. Check whether the Docker Desktop settings include a resources section or similar to change the VM settings.

While this is true, Windows containers of course do not depend on WSL2.

The output of docker info should tell us how many CPUs are detected.

With Linux containers, there is no resource constraint unless a container is specifically created with CPU/RAM constraints. I can only assume that Windows containers behave the same way. Can we borrow your experience here, @vrapolinario?
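To illustrate the Linux-container case: limits are opt-in per container, and without flags the container can use everything the Docker VM/host has. A minimal sketch (the alpine image is just an example):

```shell
# No flags: the container may use all CPUs and memory available to the daemon.
docker run --rm alpine nproc

# Limits must be requested explicitly, per container:
docker run --rm --cpus=1.5 --memory=512m alpine sleep 1
```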

edit: I totally misunderstood the topic, so I deleted this post to avoid confusing others.

I’m confused, I definitely had to enable Hyper-V to get Docker Desktop working, but I don’t have the advanced options:

Is there another way to increase the limits to allow full usage of all the system resources? This machine does nothing but build images all day.

Here is my docker info output:

Client:
 Version:    29.2.1
 Context:    desktop-windows
 Debug Mode: false
 Plugins:
  ai: Docker AI Agent - Ask Gordon (Docker Inc.)
    Version:  v1.18.0
    Path:     C:\Program Files\Docker\cli-plugins\docker-ai.exe
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.31.1-desktop.1
    Path:     C:\Program Files\Docker\cli-plugins\docker-buildx.exe
  compose: Docker Compose (Docker Inc.)
    Version:  v5.0.2
    Path:     C:\Program Files\Docker\cli-plugins\docker-compose.exe
  debug: Get a shell into any image or container (Docker Inc.)
    Version:  0.0.47
    Path:     C:\Program Files\Docker\cli-plugins\docker-debug.exe
  desktop: Docker Desktop commands (Docker Inc.)
    Version:  v0.3.0
    Path:     C:\Program Files\Docker\cli-plugins\docker-desktop.exe
  extension: Manages Docker extensions (Docker Inc.)
    Version:  v0.2.31
    Path:     C:\Program Files\Docker\cli-plugins\docker-extension.exe
  init: Creates Docker-related starter files for your project (Docker Inc.)
    Version:  v1.4.0
    Path:     C:\Program Files\Docker\cli-plugins\docker-init.exe
  mcp: Docker MCP Plugin (Docker Inc.)
    Version:  v0.39.1
    Path:     C:\Program Files\Docker\cli-plugins\docker-mcp.exe
  model: Docker Model Runner (Docker Inc.)
    Version:  v1.0.8
    Path:     C:\Program Files\Docker\cli-plugins\docker-model.exe
  offload: Docker Offload (Docker Inc.)
    Version:  v0.5.45
    Path:     C:\Program Files\Docker\cli-plugins\docker-offload.exe
  pass: Docker Pass Secrets Manager Plugin (beta) (Docker Inc.)
    Version:  v0.0.24
    Path:     C:\Program Files\Docker\cli-plugins\docker-pass.exe
  sandbox: Docker Sandbox (Docker Inc.)
    Version:  v0.12.0
    Path:     C:\Program Files\Docker\cli-plugins\docker-sandbox.exe
  sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.)
    Version:  0.6.0
    Path:     C:\Program Files\Docker\cli-plugins\docker-sbom.exe
  scout: Docker Scout (Docker Inc.)
    Version:  v1.19.0
    Path:     C:\Program Files\Docker\cli-plugins\docker-scout.exe

Server:
 Containers: 1
  Running: 1
  Paused: 0
  Stopped: 0
 Images: 241
 Server Version: 29.2.1
 Storage Driver: windowsfilter
  Windows:
 Logging Driver: json-file
 Plugins:
  Volume: local
  Network: ics internal l2bridge l2tunnel nat null overlay private transparent
  Log: awslogs etwlogs fluentd gcplogs gelf json-file local splunk syslog
 CDI spec directories:
  /etc/cdi
  /var/run/cdi
 Swarm: inactive
 Default Isolation: hyperv
 Kernel Version: 10.0 26200 (26100.1.amd64fre.ge_release.240331-1435)
 Operating System: Microsoft Windows Version 25H2 (OS Build 26200.7840)
 OSType: windows
 Architecture: x86_64
 CPUs: 4
 Total Memory: 15.95GiB
 Name: buildserver
 ID: 86b490f6-6ee3-4fcd-9e71-70771e2e5c5a
 Docker Root Dir: C:\ProgramData\Docker
 Debug Mode: false
 Labels:
  com.docker.desktop.address=npipe://\\.\pipe\docker_cli
 Experimental: false
 Insecure Registries:
  ::1/128
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

EDIT: In this post I still assumed the topic was about Linux containers, even though the title clearly says "Windows containers", so I deleted this post as well to avoid confusing others.

I never run Linux containers, so do your WSL2 comments apply here?

We are purely Windows developers, we only build Windows images

No. :flushed_face:

I totally missed that. In fact, I was sure my comment was right after @bluepuma77’s comment, not noticing @meyay’s… So I’m not sure what I saw. I was probably lost while switching between topics.

I will reread everything and come back if I can provide actual help. :slight_smile:

And I’m here again. I turned on my Windows laptop to test my idea. First of all, there are two kinds of isolation: "process" and "hyperv". By default, hyperv isolation is used; that part of my earlier, already deleted post was right. When you use hyperv isolation, the default number of CPUs is 2: the small Hyper-V VM under the container gets two CPUs unless you override it with --cpu-count, which you already knew.
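The per-run overrides described above look like this (the image name is just an example; process isolation additionally requires the host and image Windows versions to be compatible):

```shell
# Default on Docker Desktop: hyperv isolation, a utility VM with 2 CPUs.
# Override that utility VM's CPU count for a single run:
docker run --cpu-count 4 mcr.microsoft.com/windows/servercore:ltsc2022 cmd /c echo %NUMBER_OF_PROCESSORS%

# Or skip the utility VM entirely with process isolation,
# so the container sees all host CPUs:
docker run --isolation=process mcr.microsoft.com/windows/servercore:ltsc2022 cmd /c echo %NUMBER_OF_PROCESSORS%
```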

I could not find a parameter that changes the default, but when you use process isolation, you get all CPUs, as there is no virtual machine under the container. What I found is that you can change the default isolation in the Docker daemon:

https://docs.docker.com/reference/cli/dockerd/#configure-container-isolation-technology-windows

For Windows containers, you can specify the default container isolation technology using the --exec-opt isolation flag.

The following example makes hyperv the default isolation technology:

 dockerd --exec-opt isolation=hyperv

If no isolation value is specified on daemon start, on Windows client, the default is hyperv, and on Windows server, the default is process.

So I thought I could change the daemon config:

{
  "exec-opts": ["isolation=process"]
}

but it had no effect, so it is possible that the default cannot be changed to process isolation unless you are on Windows Server, where you wouldn’t use Docker Desktop anyway, as it is not supported there (even though some users managed to run it for a short time until it stopped working).


Thanks for your research. Yes, it looks like there is no solution for me; I even asked on GitHub… The Docker Desktop developers should really look at this issue, because I am sure it leads developers to think Docker is slow.

Anyway, in the end I did as you suggested and recreated the build server on a Windows Server OS, and it all works as expected (process isolation, full access to CPU and memory). It is much faster.

@rimelek, you are correct in your investigation. The option to change the default isolation would be the ideal solution here. I have tested that myself, but for some reason Docker Desktop keeps removing the daemon configuration (at least in my tests) and reverting the default isolation back to hyperv.

I’m not sure if there’s an option to change the isolation when you build an image. If there is, that would also solve the problem…
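For what it’s worth, docker build does accept an --isolation flag on Windows daemons; a sketch (the image tag is just a placeholder, and whether a given CI system can pass the flag through is another matter):

```shell
# Build with process isolation instead of the default hyperv,
# so the intermediate build containers see all host CPUs:
docker build --isolation=process -t myimage .
```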

Yes, I looked into that early on in the process. We are using Azure Pipelines, which unfortunately provides no way to pass through such arguments without rewriting a large part of our build scripts.