Windows Docker container performance

Expected behavior

Faster or at least as fast as execution on a VM

Actual behavior

Twice as slow

Explanation

We use a simple Windows container with just sn.exe and an Azure DevOps agent on a servercore:ltsc2022 image. We have some custom scripts that install some MSIs, copy some test data, and run the tests with VSTest, all hooked up through a release pipeline. You might be asking why a release pipeline - currently we do this because we want to exercise our release code via the installer as part of our testing.
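For reference, the image is roughly along these lines - a minimal sketch only, with placeholder paths and file names rather than our actual Dockerfile:

```dockerfile
# Minimal sketch - base image and tools are as described above; paths and layout are placeholders
FROM mcr.microsoft.com/windows/servercore:ltsc2022

# sn.exe copied in from an SDK layout on the build machine (placeholder path)
COPY tools/sn.exe C:/tools/sn.exe

# Azure DevOps agent unpacked into the image (placeholder path)
COPY agent/ C:/agent/

# Custom scripts that install the MSIs and copy test data at deploy time
COPY scripts/ C:/scripts/

# Start the agent so it can pick up release pipeline jobs
WORKDIR C:/agent
CMD ["cmd", "/c", "run.cmd"]
```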

After setup, the release pipeline stages just run a VSTest command as you would expect anywhere else. There's nothing special about the command - we just run more tests than in our build pipeline test runs: longer-running tests, acceptance tests, etc.
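For context, the step boils down to something roughly like this (the test assembly names, paths, and .runsettings file are placeholders, not our actual ones; the switches are standard vstest.console.exe options):

```
vstest.console.exe C:\tests\Acceptance.Tests.dll C:\tests\LongRunning.Tests.dll ^
  /Platform:x64 ^
  /Settings:C:\tests\acceptance.runsettings ^
  /Logger:trx
```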

My issue is that I've been moving these deployments to the Docker pool and using the above Docker container, and I've noticed that some of the tests - especially the more in-depth ones - are taking twice as long. To the casual observer the entire deployment takes about 2x as long - not just the VSTest step.

I was under the impression Docker was nearly always faster than a VM - it doesn't seem like it should be slower if I don't need to run any of the base OS or UI stuff. I'm not passing any memory restrictions to the docker run command at all, and my server load is very low, so there should be no limit on how much memory this container can use.
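To be concrete, the container is started with something like the following (the image and container names are placeholders); there are no --memory or --cpus flags, so nothing should be capping it:

```
REM No --memory / --cpus flags - the container can use whatever the host has available
docker run -d --name test-agent our-registry/test-agent:ltsc2022
```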

Any ideas why it might be slow?

An update on this: we increased the memory on the host server that runs our agents/Docker from 48GB to 128GB, just in case a memory cap was somehow causing this. It wasn't - somehow tests running in a Docker container are still 2x as slow as on a VM running an entire Windows OS. :frowning:

Hi - this was a long time ago - did you ever get an answer?

No I didn't. Right now we're still using VMs for our installer-driven deployment testing :frowning: I hate VMs, but I can't justify tests taking twice as long just to use a particular technology.