Container's volumes disappear after some time

I have a strange issue where a container's mounted volumes just seem to disappear. Sometimes this occurs after a few hours, sometimes it takes weeks. Even with daemon debugging turned on, there are no messages related to the event when it happens. The only way I can detect it is with docker inspect on the container. When the container is operating properly, it looks like this:

        "Mounts": [
            {
                "Type": "volume",
                "Name": "faasrr01a.par06_CONFIG",
                "Source": "/var/lib/docker/volumes/faasrr01a.par06_CONFIG/_data",
                "Destination": "/config",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": ""
            },
            {
                "Type": "volume",
                "Name": "faasrr01a.par06_VAR_LOG",
                "Source": "/var/lib/docker/volumes/faasrr01a.par06_VAR_LOG/_data",
                "Destination": "/var/log",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": ""
            }
        ],

Then suddenly the mounts disappear:

        "Mounts": [],

At this point the directory is still usable as normal inside the running container, but the mounted directory on the host is no longer updated. This causes the data to be lost once the container is restarted.
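For what it's worth, the kernel's view of the mounts can also be cross-checked from the host, independently of Docker's own metadata. A minimal sketch (with $CONTAINER set to the container's name):

    # Resolve the container's init PID, then read its mount table.
    # If /config and /var/log are missing here, the unmount is real at
    # the kernel level, not just absent from Docker's bookkeeping.
    PID=$(docker inspect -f '{{.State.Pid}}' "$CONTAINER")
    grep -E ' /config | /var/log ' "/proc/$PID/mountinfo"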

The output of docker volume inspect also looks fine, even when the container is in the bad state.

[
    {
        "CreatedAt": "2023-06-19T15:21:48-05:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/faasrr01a.par06_CONFIG/_data",
        "Name": "faasrr01a.par06_CONFIG",
        "Options": null,
        "Scope": "local"
    }
]
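Since nothing shows up in the logs, about the only option I see is polling. A sketch of a watcher that would at least timestamp the event (interval and log path arbitrary):

    # Poll the daemon's view of the mounts and record when it goes empty,
    # so the event can be correlated with anything else on the host.
    while sleep 60; do
        n=$(docker inspect -f '{{len .Mounts}}' "$CONTAINER")
        if [ "$n" -eq 0 ]; then
            echo "$(date -Is) Mounts went empty" >> /var/tmp/mount-watch.log
        fi
    done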

If anyone has seen this or has any further ideas on how to debug the issue, it would be appreciated. Daemon info below.

Client: Docker Engine - Community
 Version:    24.0.1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.10.4
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.18.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 1
  Running: 1
  Paused: 0
  Stopped: 0
 Images: 1
 Server Version: 24.0.1
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 3dce8eb055cbb6872793272b4f20ed16117344f8
 runc version: v1.1.8-0-g82f18fe
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
 Kernel Version: 4.18.0-477.10.1.el8_8.x86_64
 Operating System: Red Hat Enterprise Linux 8.8 (Ootpa)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 377.6GiB
 Name: ng-vrrpar0601
 ID: 39b6ecb8-6356-49c9-a167-c0b4b9e6c87d
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 32
  Goroutines: 47
  System Time: 2023-11-14T10:58:33.33106763-06:00
  EventsListeners: 0
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

Please edit your post and format the code and terminal output as described here:

Volumes don't disappear. It is either a huge bug we haven't heard about yet, or something recreates the containers without the volume mounts. I have a vague memory of an issue where something similar happened. If I remember correctly, in that case there was another command-line tool that sometimes added volumes and sometimes didn't, but I don't remember the details.

How did you create those containers? Are those volumes special in any way?

It's the only container running on the host. I have around 40 hosts running this; most are RHEL 8, with a few on CentOS 7. All of them exhibit the same problem. The container is launched from a shell script that is controlled by systemd. The exact command within the script is:

docker run --rm --detach --name faasrr01a.par06 -h faasrr01a.par06 --privileged --cpuset-cpus 24-35 --network=none --cpuset-mems 1 -v faasrr01a.par06_VAR_LOG:/var/log -v faasrr01a.par06_CONFIG:/config  -it crpd:22.4R1-S1.1
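The systemd side is unremarkable. A hypothetical sketch of that kind of unit (names invented for illustration, not the exact file):

    [Unit]
    Description=cRPD container faasrr01a.par06
    After=docker.service
    Requires=docker.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # start-crpd.sh wraps the docker run command shown above
    ExecStart=/usr/local/bin/start-crpd.sh
    ExecStop=/usr/bin/docker stop faasrr01a.par06

    [Install]
    WantedBy=multi-user.target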

I don't use CentOS or RHEL currently, but I can't imagine how a running container could lose its mounts when Docker is installed from the official repository mentioned in the documentation. I think this issue would require more investigation. If it is somehow possible to unmount these folders while the container is running, I would suspect some external service or security tool that is able to change these mounts directly, not through Docker. But then I don't think docker inspect would show any change, as it basically reads a JSON file which is already saved.
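You could compare that with what the daemon has actually persisted to disk. The container's saved state lives under /var/lib/docker/containers/; a sketch, assuming jq is installed (and if I remember the key correctly, it is MountPoints):

    # Show the mount configuration the daemon saved for the container.
    CID=$(docker inspect -f '{{.Id}}' faasrr01a.par06)
    sudo jq '.MountPoints' "/var/lib/docker/containers/$CID/config.v2.json"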

Are you sure the containers were not recreated? What do you see in the “created” column of the output of docker ps?
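For example, something like this shows both when the container was created and how long the current instance has been up:

    docker ps --format 'table {{.Names}}\t{{.CreatedAt}}\t{{.Status}}'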

Positive it has not restarted. They have been running for months. Also, if it had restarted after this issue occurred, the container would have lost its configuration on the restart and I would have been alerted immediately.

Also, I have run through quite a few things trying to trigger it, including scripts to force writes to the volume directories from inside the container, but so far I have been unsuccessful. None of what I would consider the container's "normal operating" tasks triggers it. And there is literally no logging when it does happen, even with debug turned on.
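The write test was roughly along these lines (a sketch; paths and checks simplified):

    # Write into the mounted volume from inside the container...
    docker exec faasrr01a.par06 sh -c 'date >> /config/heartbeat'
    # ...then confirm the same file is advancing on the host side.
    stat -c '%y' /var/lib/docker/volumes/faasrr01a.par06_CONFIG/_data/heartbeat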

Just to make sure we are talking about the same thing: restart and recreate are not the same, and I asked about recreating. Recreating would of course also mean a restart, but a restart alone wouldn't change anything in the container. The container filesystem is deleted only when you delete the container; a restart just restarts the process inside it.
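A quick way to tell the two apart (sketch):

    # .Created changes only when the container is recreated;
    # .State.StartedAt changes on every restart.
    docker inspect -f 'created={{.Created}} started={{.State.StartedAt}}' faasrr01a.par06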

Regarding the issue, I don't know what to say. I really have no idea how it could happen, because it shouldn't be possible. If you are sure that nothing else could have changed anything and only the Docker daemon itself could cause the issue, you can try searching for similar issues in the moby repository.

Or open a new issue. But first, you should upgrade your Docker to the latest patch of v24, which is 24.0.7, in case it was a bug that has already been fixed.
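On RHEL 8 with Docker's official repository, that would be roughly (sketch):

    # List the engine versions available in the repo, then update.
    dnf list docker-ce --showduplicates | sort -V
    sudo dnf update docker-ce docker-ce-cli containerd.io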