Why is Docker not persisting PowerShell modules and repositories in my container?

I’m trying to create a standard container for running builds under Jenkins. The container needs to have PowerShell installed, a global PowerShell module installed (Prism), and our internal PowerShell module feeds configured. My Dockerfile looks like this:

FROM rockylinux:9
RUN curl -sSL -O https://packages.microsoft.com/config/rhel/9/packages-microsoft-prod.rpm
RUN rpm -Uvh packages-microsoft-prod.rpm
RUN rpm --import https://packages.microsoft.com/keys/microsoft.asc
RUN yum update -y
RUN yum install powershell -y
RUN yum install xz -y
COPY PSRepositories.xml /root/.cache/powershell/PowerShellGet/PSRepositories.xml
RUN pwsh -Command Install-Module Prism -Scope AllUsers

However, when I run a build in this container (yes, as root):

  • pwsh is installed and available
  • xz tools are installed and available

but

  • the Prism PowerShell module is not installed, nor does the install directory (/usr/local/share/powershell/Modules) even exist
  • the PowerShell repositories from the PSRepositories.xml file aren’t registered, nor does that file exist at /root/.cache/powershell/PowerShellGet/PSRepositories.xml.
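One way to confirm whether these files ever made it into the image is to run throwaway containers against the image directly, bypassing Jenkins entirely. These commands are an illustration (MY_TAG is the tag used later in this post; substitute your own):

```shell
# Ask PowerShell inside the image whether Prism is visible at all
docker run --rm MY_TAG pwsh -Command 'Get-Module -ListAvailable Prism'

# Check the files baked into the image directly
docker run --rm MY_TAG ls /usr/local/share/powershell/Modules
docker run --rm MY_TAG ls -l /root/.cache/powershell/PowerShellGet/PSRepositories.xml
```

If these show the module and the repositories file, the image itself is fine and the problem lies in how the container is started (or which image is actually being run).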

When I run the container locally on my Windows 10 computer (running the latest version of Docker Desktop), the PowerShell modules and repositories are installed and available.

The Linux build host on which the container is running is Rocky 9. PowerShell isn’t installed on the host, just in this container. When the build runs, it successfully starts the build in pwsh, but the build fails when it attempts to use the Prism PowerShell module, which is missing.

How do I get Docker to persist these PowerShell-related files in the container? I’m brand new to containers. It feels like Docker is ignoring user-level file changes in /root and /usr/local/share (or maybe a higher-level /usr directory?). I went searching through the Dockerfile docs but can’t find anything that would force Docker to persist the files/directories I want in the container or that says that user-level files are ignored.

I’ve tried using the --no-cache option on my docker build -t MY_TAG . command to force a full rebuild of the image, then pushing again (docker push MY_TAG), but that didn’t work.

After reading your post multiple times I’m still not sure exactly what you did, but Docker will not “ignore” folders unless you mount something into the container that overrides the content that came from the image.

If you haven’t solved the issue yet, could you show step by step what commands you ran to see that something was missing? Don’t forget to share the error messages too. If you show how you run the container from the image, someone may recognize what went wrong.

When the content of an image works on one machine and not on another, that likely means you are not actually running the same image, even if you use the same tag. The output of docker image inspect IMAGENAME could reveal that.
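For example, comparing the image ID and registry digest on the two machines shows whether the tag really points at the same build on both (illustrative commands; replace MY_TAG with your tag):

```shell
# The image ID uniquely identifies the local image contents
docker image inspect --format '{{.Id}}' MY_TAG

# RepoDigests lists the digest(s) the registry knows this image by
docker image inspect --format '{{.RepoDigests}}' MY_TAG
```

If the IDs differ between your Windows machine and the Linux build host, the two machines are running different images under the same tag.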

It turns out Docker was working fine, but Jenkins wasn’t pulling down the latest image, even after it had been updated in the registry. I had to configure Jenkins to always pull the image; I was essentially stuck on an old image that didn’t have these layers.

agent {
    docker {
        image 'MY_TAG'
        alwaysPull true
    }
}