I “reset to factory defaults” for RC4, rebuilt my images, and ran a script to start my containers. I run 20 or so containers, and the last one in the script produced the following message:
+ docker run -d --restart=always --net=wnet -v /Users/kdh/walri.com/pvs/certs:/certs --name proxy-wmail-nextday -h proxy-wmail-nextday -p 3587:587 walr.io/wmail-nextday-proxy
docker: Error response from daemon: Mounts denied: closed.
Removing the container and rerunning worked, so the problem was transient. Still, there must be a bug behind the intermittent failure to establish mounts inside a new container: the /certs volume had already been mounted successfully in 10 or so previous containers.
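As a stopgap in my start script, I now wrap each `docker run` in a small retry helper so a transient failure like this one gets the remove-and-rerun treatment automatically. This is just a sketch of my workaround, not a fix for the underlying bug; the `retry` name and the attempt count are my own choices.

```shell
#!/bin/sh
# Workaround sketch: run a command, and on failure run a cleanup command
# (e.g. "docker rm -f <name>") and try again, up to a fixed number of attempts.
retry() {
  tries=$1; shift        # max attempts
  cleanup=$1; shift      # command evaluated between failed attempts
  n=1
  while :; do
    "$@" && return 0
    [ "$n" -ge "$tries" ] && return 1
    echo "attempt $n of $tries failed; retrying" >&2
    eval "$cleanup"
    n=$((n + 1))
    sleep 1
  done
}

# How I use it for the container from the log above:
# retry 3 'docker rm -f proxy-wmail-nextday 2>/dev/null || true' \
#   docker run -d --restart=always --net=wnet \
#     -v /Users/kdh/walri.com/pvs/certs:/certs \
#     --name proxy-wmail-nextday -h proxy-wmail-nextday \
#     -p 3587:587 walr.io/wmail-nextday-proxy
```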
Here is the Diagnostic ID: FA61D7B9-1E58-49E5-B836-19F0178AD630
After I “reset to factory defaults”, I forgot to bump the VM memory from the default 2GB back up to 8GB. It is possible that the VM ran out of memory and that caused this error. Normally the VM uses 1.5GB of memory to run my containers, but it may have gone over the 2GB limit while starting all of them (only the last one failed to run).
I’m now fairly sure the error I reported was caused by OOM in the VM. After reinstalling and raising the VM memory to 8GB, my build and run scripts show that well over 2GB is needed to build and run my containers.
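For anyone wanting to spot-check their own container memory against the VM limit, this is roughly how I tally it now. The awk helper sums the “use” side of `docker stats`’ MemUsage column (values like `512MiB / 1.952GiB`); the helper name is mine, and I’m assuming usage is reported in KiB/MiB/GiB units.

```shell
#!/bin/sh
# Sketch: sum per-container memory usage (in MiB) from lines on stdin
# shaped like docker stats' MemUsage column, e.g. "512MiB / 2GiB".
sum_mem_mib() {
  awk '{
    v = $1                                   # the "use" side of "use / limit"
    if (v ~ /GiB$/)      { sub(/GiB$/, "", v); total += v * 1024 }
    else if (v ~ /MiB$/) { sub(/MiB$/, "", v); total += v }
    else if (v ~ /KiB$/) { sub(/KiB$/, "", v); total += v / 1024 }
  } END { printf "%.1f\n", total }'
}

# Live check (needs Docker running):
#   docker stats --no-stream --format '{{.MemUsage}}' | sum_mem_mib
```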
Perhaps the VM should handle low-memory conditions more gracefully, e.g. by warning the user that memory was exhausted and the OOM killer kicked in (a macOS notification, or a red/flashing Docker whale in the menu bar).