MSIExec within a Windows Docker container

Expected behavior

MSI packages install on container with no issue

Actual behavior

MSIExec can't connect to the Windows Installer server:
The Windows installer service could not be accessed. This can occur if the Windows Installer is not correctly installed. Contact your support personnel for assistance.

More specifically, from the MSIExec log (MSIExec itself IS found):
MSI (c) (E8:00) [11:41:59:474]: Client-side and UI is none or basic: Running entire install on the server.

2022-08-05T15:42:00.4730505Z MSI (c) (E8:00) [11:41:59:474]: Grabbed execution mutex.

2022-08-05T15:42:00.4731281Z MSI (c) (E8:00) [11:41:59:480]: Failed to connect to server. Error: 0x80070005

2022-08-05T15:42:00.4731623Z

2022-08-05T15:42:00.4732641Z MSI (c) (E8:00) [11:41:59:481]: Note: 1: 2774 2: 0x80070005

2022-08-05T15:42:00.4733140Z 1: 2774 2: 0x80070005

2022-08-05T15:42:00.4734117Z MSI (c) (E8:00) [11:41:59:481]: Failed to connect to server.

2022-08-05T15:42:00.4734571Z MSI (c) (E8:00) [11:41:59:481]: MainEngineThread is returning 1601

Additional Information

  • THIS IS NOT INSTALLING THE MSI AS PART OF A DOCKERFILE
  • I read in many places that the nanoserver image does not come with MSIExec, so we can't use that, but I have also tried windowsservercore and the windows server image itself.
  • I am running this docker container with an agent inside of it in a pool, and using it on a Release pipeline.
  • No variation of my MSIExec calls (launching the MSI directly, msiexec vs. msiexec.exe, etc.) has solved it.
  • Within a YAML build pipeline (not Release) I have another pool where I am able to instantiate a container on the fly, and THAT is able to install an MSI. I made my Dockerfile dependent on that image to test it out, but encountered the same problem. Works in YAML, doesn't in Classic/Release?

Dockerfile relevant pieces:
FROM mcr.microsoft.com/windows/server:ltsc2022

Release relevant pieces:
$MSIArguments = @(
    "/i"
    ('"{0}"' -f "$msiFile")
    "/qn"
    "$instParam"
    "/L*V"
    $logFile
)
Start-Process "msiexec.exe" -ArgumentList $MSIArguments -Wait -NoNewWindow
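
A slight variation of that Start-Process call (just a sketch, not what the pipeline currently runs) captures the msiexec exit code, which makes failures like 1601 easier to spot in the release logs:

# -PassThru returns the process object so the msiexec exit code can be inspected
$proc = Start-Process "msiexec.exe" -ArgumentList $MSIArguments -Wait -NoNewWindow -PassThru
if ($proc.ExitCode -ne 0) {
    Write-Error "msiexec returned exit code $($proc.ExitCode); see $logFile for details"
}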

Effectively, lots of pages suggest that Nano Server does not support msiexec while windowsservercore and windows server do, but I am unable to get past this error. Any help would be appreciated.

You are correct that Nano Server does not support MSI and yes, Server Core does. Have you tried spinning up a simple container (base Server Core image) and installing your MSI file post-build? Sometimes I've found that doing this reveals an error in the PowerShell command that is easier to troubleshoot by simply trying to install the MSI on a running instance. Then you can copy the actual command you used into your Dockerfile.
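
Something like this is what I have in mind - a rough sketch only, where your-app.msi is a placeholder and the installer is assumed to be on the host under C:\installers:

& docker run -it --rm -v "C:\installers:C:\installers" mcr.microsoft.com/windows/servercore:ltsc2022 powershell
# then, inside the container (your-app.msi is a placeholder name):
Start-Process msiexec.exe -ArgumentList '/i','C:\installers\your-app.msi','/qn','/L*V','C:\installers\install.log' -Wait -NoNewWindow

If the install works there, the same Start-Process line can go straight into the release script or Dockerfile.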

The other thing is: I noticed your FROM statement is using LTSC2022. What pipeline are you using for this? Unfortunately, Windows Server 2022 is not supported by many DevOps services, so reverting to LTSC2019 might be a good idea.

Ah, just noticed you did try to instantiate a container and install and that works. Sorry I did not read that first.

To confirm, the container you instantiate is the same version as in your FROM statement, right?

First let me say thanks for actually responding… doesn’t seem like there’s that much of that around here!

I have noticed Windows Server 2022 isn't supported by some things; I'm hoping that by going to the latest version it will be supported in the future, but as you noted in your second post, I don't think that's the issue here. It appears to be some kind of disjoint between running a YAML build pipeline within Azure vs. a Classic pipeline. I'm very stumped.

And yes, we do not use Hyper-V isolation, and I wrote our base images and built them all with the same Windows Server 2022 machine I'm using. Initially, when I had the issue, I went back to the Windows base image, not realizing that there is no Windows Server 2022 version of it (I guess they have chosen to discontinue the full Windows base images) - so I used the latest and, as you'd expect, got an OS mismatch (it actually happens at docker build time). So strange!

To clarify: You were able to build the image locally (on your machine) using a Windows Server 2022 host with a 2022 image, right? And that fails on Azure Pipelines with a version mismatch error? If that's the case, then the issue is exactly that - the Azure pipeline uses a Windows Server 2019 host, which is not able to run a newer container image version (2022 in your case). My recommendation is to use a Windows Server 2019 image when running on the pipeline.

The other thing to note is the Windows image: it was based on the 2019 wave/versions. The newer image is called Server and is based on the 2022 wave/versions. I blogged about this to explain the images here: Nano Server x Server Core x Server - Which base image is the right one for you? - Microsoft Tech Community
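
For reference, the base image tags I'm referring to look like this (standard MCR repositories; other versions exist as well):

# 2019 wave: Server Core plus the (since discontinued) full Windows image
FROM mcr.microsoft.com/windows/servercore:ltsc2019
FROM mcr.microsoft.com/windows:1809
# 2022 wave: Server Core plus the new Server image that replaces the full Windows one
FROM mcr.microsoft.com/windows/servercore:ltsc2022
FROM mcr.microsoft.com/windows/server:ltsc2022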

We used to use a virtual machine to run our Docker agents, which ran Windows Server 2019. Being a virtual server itself led to some issues, so ultimately we purchased a physical server and put Windows Server 2022 on it.

From there, I knew that we needed to build images referencing only images that were also built on Windows Server 2022 - so I wrote a simple image and agent on that box specifically to help build other Docker images. We build the Docker images we need via Azure Pipelines for traceability, but the docker build/tag/push runs on a container created from the image above, and its agent is on the same Windows Server 2022 machine. That image is intended to be used via Azure as the agent that builds all Docker images that are meant to be executed on that server. If any of the OSes mismatch, they fail at docker build time.

So - fast forward to a couple of weeks ago. I want to take our classic release pipelines that use old VMs and ditch the VMs. I write a new Dockerfile using Windows Server Core LTSC2022 - compatible with Windows Server 2022 - as the base (FROM) and then add my other commands. Eventually, I get a working Docker image. I write a script on the Windows Server 2022 machine to create a container, and an agent inside of it listens for jobs. [I always over-explain, but for the sake of simplicity: we must use this pattern - we don't have the ability to create containers on the fly that YAML build pipelines give you.]

When the agent is listening for jobs, I use its agent pool (it is alone in the pool) in a test release pipeline. I've got scripts in those release pipelines, one of which invokes msiexec to install some downloaded installers of our software, to stage our testing platform. It is at this point - during execution of the release pipeline, running a PowerShell script's msiexec.exe call - that the error occurs, reporting that msiexec is apparently found but the Windows Installer service cannot be accessed.

Hopefully that was clearer, sorry for the inconvenience and my wordiness ;/

Answering "we don't know" would not have been a useful answer, would it :slight_smile: Let's hope @vrapolinario stays with us to help answer Windows/Microsoft-related questions :wink: , because most of us use Linux containers. Welcome here, and I hope you will have a better experience next time as well.

Thanks for the detailed information on the scenario. I think we can rule out version incompatibility, as you pointed out. What is strange is that you can run the MSIEXEC command in a standalone container but not in the release pipeline. In theory, the container should behave exactly the same in any environment - as long as the host and other components (networking, storage, etc.) meet the same conditions.

With that said, it seems to me the issue is not in the container itself. Have you checked whether, in the release pipeline environment, the container has the same access as in the standalone environment?

That is a good point. I left out a piece. I took the Docker container that I know the Azure YAML pipeline could install an MSI on, put it in another empty pool, and used that for the release - and that failed as well. While I suspected it wasn't my container, it didn't help me much, and I chose to focus on the possibility that Windows Server Core doesn't have MSIExec. I think we've eliminated that, and now I think access may be a piece of it. Something about the Release doesn't have the same access to the container that the YAML pipeline does. I'll look into this aspect now - thanks for the idea!

Please let us know what your findings are - and if you did find/fix the issue.

As an initial test to support our theories, I tested two scenarios:

  1. I updated my Dockerfile to depend on a different base - the one where I know a YAML build pipeline is able to install an MSI. I re-registered my agent to use this image instead, and it still failed, as expected. So there is nothing special about the image that makes MSIs install, as we suspected.
  2. I updated my Dockerfile again to revert to the Server base, and temporarily commented out the agent definition. Using the YAML build pipeline above, I loaded this new image instead, to prove that the image in the pipeline can install an MSI - and it can, as expected. More evidence that this has nothing to do with the image.

Now that I've done that, I'm all in on finding out what it is about the release pipeline, and where it is putting the MSI, that makes it fail. It appears that the installer server being inaccessible is a red herring. I will reply if I learn anything.

And you, @vrapolinario, are the man!

I realized by accident that I had copied a script from elsewhere that defined a user for my agent:

& docker run -d --user "NT AUTHORITY\NetworkService" --restart unless-stopped

However, whatever user this is, either it isn't a real user, or it's a user without the privileges needed for MSIExec (or at least for its backing server). When I remove that option so it's:

& docker run -d --restart unless-stopped

then the container runs as the default admin user (for Linux it's root, so here it's probably ContainerAdministrator), and MSIEXEC has no issues at all! Thanks so much for talking it through with me!!! Woohoo!
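
For completeness, the working run command is essentially the one below (image name and the rest of the agent arguments omitted, same as above); explicitly passing ContainerAdministrator should be equivalent to leaving --user off on these base images, though I've only verified the version without --user:

& docker run -d --user "ContainerAdministrator" --restart unless-stopped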

I’m glad to know you found and fixed the issue! :wink:

When you use "NT AUTHORITY\NetworkService", you are essentially saying that you want to use the computer account for authentication. This is helpful in Active Directory environments, since containers can't be domain joined. That way you can leverage a gMSA to authenticate on behalf of the container. More details here: Create gMSAs for Windows containers | Microsoft Docs
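
As a rough sketch of what that looks like (the credential spec file name, hostname, and image are placeholders; the JSON file lives under C:\ProgramData\Docker\CredentialSpecs on the host):

& docker run -d --security-opt "credentialspec=file://my_gmsa_credspec.json" --hostname my-gmsa-host my-image:latest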

Regards,

I have almost(?) the same issue. The msiexec tool simply returns error code 1603 when running my multi-stage Dockerfile. Also, if I run the container interactively and attempt to run msiexec manually, it does absolutely nothing. No error messages, no actions… nothing… as if it were a no-op.

I'm simply trying to get sqlcmd into my multi-stage build… I don't care whether it's with or without the use of msiexec. The relevant part of my Dockerfile is this:

FROM mcr.microsoft.com/dotnet/aspnet:6.0-windowsservercore-ltsc2022 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
ENV ACCEPT_EULA=Y

RUN curl -L -o MsSqlCmdLnUtils.msi https://go.microsoft.com/fwlink/?linkid=2142257 & \
    takeown /F MsSqlCmdLnUtils.msi & \
    msiexec.exe /i MsSqlCmdLnUtils.msi /quiet /qn /norestart

I have also attempted to run the Docker container interactively from an elevated terminal, in case "additional" privileges are implied. Still no luck. What does it take to make msiexec actually do something? For that matter, is there a way to get the sqlcmd.exe utility without requiring an MSI installer?

Would this help you?

According to this page you can download a zip and extract sqlcmd.exe from it.
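
Something along these lines should work inside the container, or adapted into a Dockerfile RUN step - just a sketch, and the URL is a placeholder for whatever download link that page gives you:

# Placeholder URL - substitute the actual zip link from the page above
New-Item -ItemType Directory -Force -Path 'C:\temp', 'C:\tools\sqlcmd' | Out-Null
Invoke-WebRequest -Uri 'https://example.com/sqlcmd-windows-amd64.zip' -OutFile 'C:\temp\sqlcmd.zip'
# Expand-Archive ships with PowerShell 5+, so no msiexec or extra tooling is needed
Expand-Archive -Path 'C:\temp\sqlcmd.zip' -DestinationPath 'C:\tools\sqlcmd' -Force
$env:PATH += ';C:\tools\sqlcmd'   # make sqlcmd.exe available for the current session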

My issue was that the agent I was running docker build from was set to run as the user NT AUTHORITY\NetworkService, which doesn't have the privileges to run msiexec. Nothing stands out here as the cause, but definitely check the user you're running as. Also, run that MSI locally from the command line (like on a test VM or something) - Windows Installer is finicky. It might be the MSI hanging.
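
Something like this on a test VM (the log file name is just an example) will at least tell you whether the package itself completes and give you a verbose log to dig through:

# 0 or 3010 (reboot required) means success; 1603 is a generic fatal install error
$p = Start-Process msiexec.exe -ArgumentList '/i','MsSqlCmdLnUtils.msi','/qn','/norestart','/L*V','MsSqlCmdLnUtils.log' -Wait -PassThru
"msiexec exit code: $($p.ExitCode)"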

I had the same issue as @barias and this worked perfectly. sqlcmd.exe in a zip - that's it, no msiexec-in-Docker workaround needed.

This did help! PowerShell apparently already had sqlcmd built into it. HOWEVER, this circumvents the problem rather than solving it, leaving me nervous about "the next time" I need msiexec for some other purpose and, once again, it isn't working properly. So I'd still love to see a resolution specific to msiexec.