WSL2 host folder mount: permissions spontaneously change during session

I’d been using Docker Toolbox (VirtualBox) for years. Finally moving over to Docker Desktop (WSL2), and have been having endless issues with permissions in mounted host directories.

Specifically, I’m trying to run Devilbox (github.com/cytopia/devilbox), a Docker LAMP/MEAN stack for local development. It automatically creates vhosts for whatever folders you stick in /shared/httpd. I mount my various project directories into there using relative paths in the docker-compose file, e.g.:

  httpd:
    volumes:
      - ../../../Work/Webroot/site1/htdocs:/shared/httpd/site1/htdocs:rw
      - ../../../Personal/site2/htdocs:/shared/httpd/site2/htdocs:rw
      ...etc

This works with Docker Toolbox (provided I share the root drive with the VirtualBox VM). It also works with Docker Engine on Linux (it’s a dual-boot machine, with the same NTFS partition mounted to Windows & Linux). But with Docker Desktop WSL2, the permissions of the files as viewed by the container seem to spontaneously change - literally from one minute to the next. As a quick example, here’s a screenshot where I used Docker Desktop’s “exec” tab to navigate to the shared folder from within Devilbox’s php container and ran ls -la twice in a row, with no actions taken in between. You can see that the first time the files have no permission bits at all (----------), and the second time they have rwxrwxrwx:
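
For anyone trying to reproduce this, the flapping is easier to watch with a small loop run inside the container than by eyeballing repeated ls calls. Just a sketch; the default path assumes Devilbox’s /shared/httpd layout from the compose snippet above, so override DIR for other setups:

```shell
#!/bin/sh
# Poll the permission bits of a bind-mounted path from inside the container.
# DIR defaults to the Devilbox vhost path used earlier in this post.
DIR="${DIR:-/shared/httpd/site1/htdocs}"
for i in 1 2 3; do
  # When the bug is active, the mode reported here flips between
  # ---------- and drwxrwxrwx from one iteration to the next.
  ls -ld "$DIR" 2>&1
  sleep 1
done
```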

A similar behavior is observed when I try to access the site in a browser - i.e. it will give “403 forbidden” one minute, then a minute later it will load that php file, but fail to include some other php file farther down the tree.

I gave more in-depth repro steps on this github issue, but in the interim I was hoping someone might have an idea here, as I’ve put so many hours into messing with this that I’m really at my wits’ end. No matter what I try, what Claude/ChatGPT suggest, what I find on forums…it just never seems to work properly with WSL. Sometimes the sites load perfectly, sometimes they give 403, sometimes half of the files will work and the other half will be permission denied - despite my having changed literally nothing.

It was so simple & reliable on Docker Toolbox & on Linux. How can this be so hard on WSL2? :frowning:

Help would be much appreciated.

I don’t know about Docker Toolbox, but Docker CE doesn’t need a virtual machine and special ways to mount files from a Windows filesystem into a Linux container.

Since you did nothing between the two commands, it seems to be a bug, which I have not experienced before, but I rarely use the desktop on Windows. If it is a bug, hopefully it will be fixed soon, but until then, I can recommend moving your files into a WSL2 distribution, enabling WSL integration for that distribution, and mounting files from the Linux filesystem of the WSL2 distribution into the Linux containers. During development you can connect to the WSL distribution from Visual Studio Code, so it would be almost like storing your data on Windows.

Just to be safe I would like to ask, do you just mount a simple NTFS filesystem or is there anything special about it that you know of? I assume it is not a network filesystem mounted to Windows.

Ok…after many many many hours, I finally just figured it out.

NTFS permissions. Which seems obvious, but evidently giving “Full control” to “Everyone” is not sufficient - you have to explicitly give “Full control” to “SYSTEM” as well.
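
For reference, the same ACL change can be applied from an elevated command prompt with icacls rather than clicking through the Security dialog. This is a sketch; D:\Work\Webroot stands in for your actual webroot path:

```
REM (OI)(CI) makes the grant inherit to subfolders and files; F = Full control.
REM /T also applies it to everything already under the folder.
icacls "D:\Work\Webroot" /grant "SYSTEM:(OI)(CI)F" /T
icacls "D:\Work\Webroot" /grant "Everyone:(OI)(CI)F" /T
```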

This does still seem pretty obviously like a bug, though, as the behavior should not be so random/inconsistent, with the permissions (as seen by the container) randomly switching back & forth, working one minute & not the next. Although it seems like it should work with “Everyone” having permission, it does make sense that SYSTEM would be needed as well - but not that, without SYSTEM, the behavior would be so non-deterministic.

Anyway, at long last, I’m able to get it working :slight_smile:

…Nevermind, I take it back :frowning:

I applied those permissions & it seemed to be working, but some time later, it spontaneously stopped working again. Same as always - one minute it works, the next it doesn’t, the next it does. Like literally: I refresh the browser & it shows “403 forbidden”. I refresh it again a second later & it mysteriously works.

So, responding to your previous reply:

do you just mount a simple NTFS filesystem or is there anything special about it that you know of?

The files are just on the NTFS D partition - nothing special about it (i.e. not a network filesystem or anything).

I can recommend moving your files into a WSL2 distribution

Unfortunately that isn’t really a feasible workaround. That would make the files unavailable when I’m booted to Linux, to my other backup/syncing software, etc. It isn’t only Docker & IDE that needs to have access.

it seems to be a bug, which I have not experienced before, but I rarely use the desktop on Windows. If it is a bug, hopefully it will be fixed soon

Agreed - however, given that mounting host folders is such a common/core/essential piece of functionality, it also seems like there must be a way to get it to work properly as-is - because who knows how long a bug fix might take (or even a response, or, given its super screwy/non-deterministic behavior, how long it might take the devs to repro). With Docker Desktop so widely used…reliable Windows host volume mounts must be achievable…somehow.

Ok, more clarity:

  • Let’s say it’s initially working (i.e. I can access the php vhost in a browser with no errors)
  • Stop/remove the container (docker-compose down)
  • Reboot Windows
  • Start Docker Desktop
  • Start the container docker-compose up -d mysql
  • Now, after the reboot, it won’t work (accessing it in a browser shows 403 forbidden, or “file not found”, or PHP errors. The exact error depends on the nature of your PHP, but the root cause is always the same: the container can’t access the php files).
  • In Windows Explorer, go to the source files & Get Properties on the containing folder. Do nothing else other than get properties.
  • Refresh browser. Now it mysteriously works.
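
Rather than refreshing a browser at each step, it may be quicker to ask the container directly whether it can see the files. Sketch only; the “httpd” service name and the vhost path follow the compose snippet earlier in the thread:

```
REM Run from the devilbox directory after each step above:
docker-compose exec httpd ls -ld /shared/httpd/site1/htdocs
```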

Observations:

  • Recall my original observation/screenshot: the first ls -l showed no permissions, then ls -l immediately after showed rwx. It looks like just “reading from the files” after the container is started makes them accessible, as if reading from them once causes Docker to properly “evaluate” the permissions. However, it’s weird that I have to do this manually - why does my manually reading the files “fix” the permissions when the web server process itself accessing them does not?
  • This explains why I previously thought that setting NTFS permissions fixed it: when setting permissions, I first accessed the “Properties” dialog. So the permissions were just a red herring, it was going into that dialog that did it.
  • I can repeat the above & its behavior is consistent, i.e. every time I stop+remove the container, reboot Windows, then start the container, it’s broken. Then doing Get Properties after the container is started fixes it.
    • Occasionally “Get Properties” doesn’t resolve it on the very first try - just do it once or twice more; that always fixes it.
    • Occasionally just stopping & restarting the container (without rebooting Windows) is sufficient to reintroduce the error, but not always. Fully rebooting Windows reproduces the error 100% of the time.
  • If I Get Properties before starting the container itself, it does NOT fix it. (i.e. reboot → start Docker Desktop → get properties → start the container → 403 Forbidden. Reboot → Start Docker Desktop → Start the container → Get properties → OK).

My only idea is that maybe the disk with the mounted data turns off, “thinking” nobody uses it, so Docker Desktop can’t read the actual files but still knows about the file structure. When you read a file on the host, the disk could turn on again so Docker Desktop can see it again. I checked my Windows settings in Power Options and the timeout is set to 20 minutes when the laptop is plugged in and 10 minutes on battery. I assume the “Plugged in” setting would be available for a desktop PC as well. I never use it long enough to see what happens 20 minutes later, and when I use the laptop, I also actively use the drives.
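
If you want to rule the disk-sleep theory in or out, the relevant timeouts can be inspected and disabled with powercfg from an elevated prompt (values are in minutes; 0 means never power down):

```
REM Show the current hard-disk power-down settings:
powercfg /query SCHEME_CURRENT SUB_DISK
REM Disable disk power-down on AC and on battery:
powercfg /change disk-timeout-ac 0
powercfg /change disk-timeout-dc 0
```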

Good question, but I guess less time than I would need :slight_smile: as the developers know Docker Desktop much better. But I can still try some suggestions.

If you have a Pro subscription on Docker Hub, you could try the synchronized file shares feature

The bug would still be a bug, but if it helps somehow, at least you could continue to work.

You could try to disable Resource Saving mode in Docker Desktop

If there is any bug related to Docker Desktop’s resource saving mode instead of the power options on the host, it could help.

If other ideas don’t help, maybe you could try attaching an external USB drive or a Linux partition to WSL2. If you can mount the ext4 USB drive or partition to the WSL2 distro with WSL integration, you could use it on Linux and also in WSL2 on Windows. I used something like this only once, when I had to switch to Windows entirely and, instead of copying data, I mounted the original Linux partitions.
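
For the partition route, newer Windows builds can attach a bare ext4 partition to WSL2 with wsl --mount. One caveat: it attaches at the level of the whole physical disk, so it only works for a drive Windows itself is not booted from (an external USB disk, for example). The drive and partition numbers below are placeholders:

```
REM Find the physical drive number:
wmic diskdrive list brief
REM Attach the ext4 partition to WSL2; it shows up under /mnt/wsl in the distro:
wsl --mount \\.\PHYSICALDRIVE1 --partition 2 --type ext4
REM Detach it again when done:
wsl --unmount \\.\PHYSICALDRIVE1
```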

I hope I didn’t miss anything in your messages, but if I did, sorry for that, I tried to write quickly.

Thanks for all the ideas.

maybe the disk with the mountd data turns off “thinking” nobody uses it

Can’t be - there’s only one physical device, a 2280 SSD. It’s just partitioned (1 for Windows, 1 for Linux, 1 for Data). But it’s all one device.

at least you could continue to work.

I actually have a good enough workaround now, so I’m unblocked while I wait for a fix. Basically…each time I start DevilBox in Windows after a reboot, I just have to manually use Windows explorer to “Get Properties,” which un-breaks it.
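
Since the trigger seems to be any read of the folder from the Windows side, the manual “Get Properties” step might even be scriptable. Purely a hypothetical sketch - whether an unattended directory read does the same thing as opening the Properties dialog is untested, and D:\Work\Webroot is a placeholder:

```
REM Hypothetical start script: bring the stack up, then read the webroot
REM from the Windows side to "wake up" the mount before using a browser.
docker-compose up -d
powershell -Command "Get-ChildItem -Recurse 'D:\Work\Webroot' | Out-Null"
```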

Still incredibly weird…but now that I discovered that, at least I have a way to restore proper functionality while I wait for a real fix (hopefully)…

Thanks again

I haven’t read every post in this topic, but is it possible that one of the involved containers (at least one that mounts the host path) takes care of fixing file permissions while starting? For instance, if such a container runs a cronjob with a task to “repair” the permissions, or loops through the phases of “start → do something → exit”.

There must be a reason why it behaves like this. I would imagine that if this really were a Docker Desktop problem, we would have more reports of it, since everyone would be affected.

This somehow reminded me of a situation where a friend of mine mounted his entire root directory into a container that took care of “fixing” the permissions. He had flaky permissions all the time. When permissions were fixed, a couple of minutes later they were wrong again.

Could be, but all the permissions disappear as if someone executed chmod -rwx FILEPATH - and how would reading a file on the host change them back immediately?

It doesn’t make sense that permissions change during file read, and I doubt that it’s the case.

Personally, I would test it with a single container (as in, no other containers running), and mount a different host path to observe the behavior. If Docker Desktop is really responsible, it should do the same here, shouldn’t it?
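
A minimal isolation test along those lines could be a single throwaway container with one bind mount and nothing else running (sketch; C:\test is a placeholder host folder):

```
docker run --rm -v C:\test:/data alpine ls -la /data
```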

It would be strange, but I wouldn’t be surprised, actually. Could be because of the lack of my knowledge about filesystems + Windows + the mount implementation of Docker Desktop. Maybe it is not the reading that helps, but if the process of trying to read affects it in any way and every single time, it would be an unlikely coincidence that a container changes the permissions exactly then.

Unless it happens only when something else happens too (even if not a container changing permissions) at the same time, and a single container can’t reproduce it. But you are right, this is also how I tested, because I could not run the whole project quickly. So @metal450 this may be worth a try while you wait for a possible fix.

This is probably the last thing we could recommend here.

It doesn’t make sense that permissions change during file read, and I doubt that it’s the case.

Well, I’m not sure if it’s truly “permissions changing,” but what I can tell you is that:

  1. If/when the issue occurs (where the container can’t read the webroot files)
  2. I just go to Windows Explorer & Get Properties of the containing folder,
  3. Now the container can read the files

Reproduced this many, many times - I can tell you definitively that getting properties of the folder from Windows resolves the issue & makes the container able to read the files again.

is it possible that one of the involved containers (at least one that has mounts the host path) takes care of fixing file permissions while starting?

The reason I really don’t think this is the case is that simply stopping & restarting the docker-compose stack isn’t sufficient to introduce the issue. Neither is stopping & starting Docker Desktop, or even completely logging out of Windows & logging back in. I have to completely reboot the machine. I know, it sounds hard to believe. But I’ve spent hours retrying the same tests over & over - given an initial state where the web stack works (aka the containers can read all files), I can stop/start the container, stop/start Docker, or log out & back into Windows at least 10-15 times in a row, & it continues to work. Then I reboot the machine once, start Docker, start the container, & it can no longer read the files. (Which, per above, I can rectify by doing “Get Properties” in Windows Explorer).

mount a different host path to observe the behavior.

I did actually try creating a “fresh” Devilbox setup, and rather than mapping volumes to my actual webroot files, I just unzipped a freshly downloaded Wordpress.org zip directly into its data/www folder. No issue for the first 3 starts/reboots. Then on the 4th reboot, it happened. On reboots 5-10 it didn’t. On the 11th it did.

It’s very strange that with “my” web projects, it happens consistently - the issue returns after every system reboot. But with this “fresh” Devilbox setup, it only happens maybe once out of 4 or 5 tries. It definitely does happen, it’s just far more intermittent. But because I have to actually do a full reboot to test, it’s agonizingly time-consuming - rebooting over & over & over, 5 or 10 times until it happens.

Personally, I would test it with a single container (as in no other containers are running)

The issue is (per the previous paragraph) that it’s far more intermittent in some scenarios than others. For whatever reason, I can get it to happen after 100% of reboots with my actual web stack but only occasionally with another. Because it’s so intermittent, I can definitively say when it DOES happen, but it’s nearly impossible to say if it doesn’t. And testing in those intermittent scenarios is impossibly time-consuming.

It would be strange, but I wouldn’t be surprised, actually. Could be because of the lack of my knowledge about filesystems + Windows + the mount implementation of Docker Desktop. Maybe it is not the reading that helps, but if the process of trying to read affects it in any way and every single time, it would be an unlikely coincidence that a container changes the permissions exactly then.

My thought was that it could be something like: after a reboot, WSL or some other Windows service hasn’t fully started. Manually accessing the files causes Windows, or WSL, or whatever, to “re-evaluate” the file permissions. There’s definitely some magic going on under the hood, given that WSL is running on top of a file system that doesn’t even support Linux permissions…
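
On that note, one thing that might be worth checking is how the drive is actually exposed inside WSL. Windows drives normally appear as drvfs/9p mounts, and whether Linux permission bits are synthesized on NTFS depends on the mount options (e.g. the metadata option). A quick look from any WSL shell - just a sketch of what to inspect, not a fix:

```shell
# List how Windows drives are exposed to this WSL distro; under WSL2 they
# typically show up as type 9p with "aname=drvfs" in the mount options.
mount | grep -Ei 'drvfs|9p' || true
```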

I would imagine that if this would really be a Docker Desktop problem that we would have more reportings of this if everyone would be affect.

Agreed. One thing I just thought of, that I probably should’ve mentioned earlier, is that I’m on Windows 24H2. That’s pretty recent. Most users probably aren’t on it yet.

This is probably the last thing we could recommend here

Thanks for all the replies. Seeing as I’ve already put probably 12 hours into this so far, I’m just going to settle with the workaround I have for now. Because again - as weird as it sounds - when the issue happens, I can remedy it 100% of the time by just going to Windows & doing Get Properties on the files.