Docker container runs fine on Mac, but not on Windows - help?

I'm by no means a Docker expert, so here goes.

A good while back I created a custom Docker image on an Intel Mac. The image is based on Ubuntu and contains the Zurb Foundation framework for web development (npm, grunt, etc.).

I have several websites, and when I need to update a project or create a new one, I just run a new container from the image and bind-mount a Dropbox folder that contains my web project folder, in this example 'abcxyz'.

example:
docker run --rm -it --name abcxyz -v ~/Dropbox/'Mac Mini'/Documents/webprojects/abcxyz/:/public_html jwllc/foundation663u:foundation_663_pwire bash

Once it's fired up, I can go into my project's folder and run Zurb Foundation's watch command, which triggers gulp to watch for SCSS changes and recompile them to CSS. This works great and still does.
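For reference, what I run inside the container is roughly this (just a sketch of my workflow; the watch task is whatever the Foundation project defines):

# inside the container
cd /public_html      # the bind-mounted project folder
foundation watch     # starts the watcher, which runs gulp to recompile SCSS to CSS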

What's confusing me

  1. If I create a copy of the abcxyz folder, say, naming the copy abcxyztemp, and launch it with the above command on the same Mac (adjusting the path to the new name, of course), the container loads. However, when I run foundation watch, I get errors like 'gulp not installed'.

  2. Second example (this is really what I need to work): I installed WSL2 on Windows 11 and launch the container with the Dropbox path adjusted:

docker run --rm -it --name abcxyz -v /mnt/c/Users/john/Dropbox/'Mac Mini'/Documents/webprojects/abcxyz/:/public_html jwllc/foundation663u:foundation_663_pwire bash

The container fires up, and again, when I run foundation watch, I get errors like 'gulp not installed'.

My question

I thought that when you created an image that includes things like npm, gulp, and node, it was all packaged and contained inside the image, so if I create a custom image, push it from my Mac to a repository, pull the image on a Windows computer, and run a container, all the dependencies would be intact.

In short, I can fire up a container from my image on my Intel Mac, but if I pull that image elsewhere and start a container with the same parameters, all kinds of errors about nodejs, gulp, etc. are thrown.

I've seen articles about using docker save / export, etc., but I thought all dependencies were saved in the image, so I could simply pull it from a repository, create a container, and it would just work on any computer I may be using (same amd64 architecture, of course).
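For reference, this is roughly how I understand the two ways of moving the image around (a sketch; the tag is the one from my run command above):

# option 1: via a registry (what I do now)
docker push jwllc/foundation663u:foundation_663_pwire    # on the Mac
docker pull jwllc/foundation663u:foundation_663_pwire    # on the Windows/WSL2 machine

# option 2: via a tarball, as some of those articles suggest
docker save -o foundation.tar jwllc/foundation663u:foundation_663_pwire
docker load -i foundation.tar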

I’d really appreciate it if someone could help. I’ve been chasing this problem for a couple days now.

Judging by the image history below, it looks like you installed everything interactively in a container and committed it, which you should quickly forget about. Use a Dockerfile to describe your image so you can reproduce it and update it anytime; a rough sketch of one follows at the end of this reply. Right now you have an image with a history like this:

bash
bash
bash
bash
bash
bash
/bin/sh -c #(nop)  CMD ["/bin/bash"]
/bin/sh -c mkdir -p /run/systemd && echo 'docker' > /run/systemd/container
/bin/sh -c [ -z "$(apt-get indextargets)" ]
/bin/sh -c set -xe   && echo '#!/bin/sh' > /usr/sbin/policy-rc.d  && echo 'exit 101' >> /usr/sbin/policy-rc.d  && chmod +x /usr/sbin/policy-rc.d   && dpkg-divert --local --rename --add /sbin/initctl  && cp -a /usr/sbin/policy-rc.d /sbin/initctl  && sed -i 's/^exit.*/exit 0/' /sbin/initctl   && echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup   && echo 'DPkg::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' > /etc/apt/apt.conf.d/docker-clean  && echo 'APT::Update::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' >> /etc/apt/apt.conf.d/docker-clean  && echo 'Dir::Cache::pkgcache ""; Dir::Cache::srcpkgcache "";' >> /etc/apt/apt.conf.d/docker-clean   && echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/docker-no-languages   && echo 'Acquire::GzipIndexes "true"; Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/docker-gzip-indexes   && echo 'Apt::AutoRemove::SuggestsImportant "false";' > /etc/apt/apt.conf.d/docker-autoremove-suggests
/bin/sh -c #(nop) ADD file:4974bb5483c392fb54a35f3799802d623d14632747493dce5feb4d435634b4ac in

There is no way to tell what was installed in the image and how. It still contains gulp, but it was installed as the "foundation" user, so the npm folder is in that user's home directory and the folder containing the gulp command is not in $PATH.

That is why the gulp command won't work that way. You shared only a docker command, which doesn't show how you tried to use gulp, so I'm afraid that's all I can say. What you installed in the image will be in the image regardless of where you run a container from it. The only things that can affect how it works are how and what folders you mount, what environment variables you set, how you run the commands in the container, and sometimes the Linux kernel, which is not likely the issue in your case.
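Just to show what I mean by describing the image with a Dockerfile, here is a very rough sketch. I don't know what exactly is in your image, so the base image, the Node setup and the globally installed packages are all assumptions; the point is that every step is recorded in the history and the tools end up on the PATH for every user:

# rough sketch only -- base image, Node version and packages are assumptions
FROM ubuntu:20.04

# install Node.js and npm (your image may set this up differently)
RUN apt-get update && apt-get install -y curl \
 && curl -fsSL https://deb.nodesource.com/setup_16.x | bash - \
 && apt-get install -y nodejs \
 && rm -rf /var/lib/apt/lists/*

# install the Foundation CLI and gulp globally so they are on the PATH for every user
RUN npm install -g foundation-cli gulp

WORKDIR /public_html
CMD ["bash"]

Then docker build -t jwllc/foundation663u:foundation_663_pwire . gives you an image you can rebuild and update any time.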

Hmm, let me simplify.

If I pull an image which I created on my Intel Mac, start a container on that Mac, enter a bash shell, and just type gulp: no problem.

If I pull that same image on a different computer, start a container, enter the bash shell, and just type gulp, then gulp isn't found.

It's as if Docker maybe stored some configuration outside of the image file on the computer where I originally built the image. Keep in mind that, outside of the container, I don't have gulp installed at all on the Mac.

I understood that, but that can't happen if you really use the same image on both machines and mount the same folders the same way. That's why I told you that mounts and environment variables can affect how the container works. You could compare the two environments using docker image inspect and docker container inspect, and by checking environment variables and files in both containers.
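For example, something along these lines on both machines (a sketch; the container name is the one from your run command):

# is it really the same image?
docker image inspect --format '{{.Id}}' jwllc/foundation663u:foundation_663_pwire

# compare the running containers: environment, mounts, user
docker container inspect --format '{{json .Config.Env}}' abcxyz
docker container inspect --format '{{json .Mounts}}' abcxyz
docker container inspect --format '{{.Config.User}}' abcxyz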

Let's say you mount a folder to a location that is recognized somehow on one machine (maybe because of the right privileges) but not on another machine because the host is different; that could matter.

So you need to find where gulp is in both containers and why it is found in one container but not in the other. Then we can try to explain why that could happen.
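Something like this inside each container would show it (a sketch; the home folder of the "foundation" user is an assumption):

# where does the shell look, and does it find gulp?
echo $PATH
command -v gulp

# search the filesystem for the gulp command
find / -name gulp 2>/dev/null

# if it was installed for the "foundation" user, something like this may work
su - foundation -c 'command -v gulp'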