Daemon restart removes overlay2 layers, breaking images; can’t start containers

Hello,
I’m using CentOS 7 and my Linux kernel is 3.10.0-1127.el7.x86_64.

The output of “docker version” is:

Client: Docker Engine - Community
 Version:           19.03.14
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        5eb3275d40
 Built:             Tue Dec  1 19:20:42 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.14
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       5eb3275d40
  Built:            Tue Dec  1 19:19:17 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.3.9
  GitCommit:        ea765aba0d05254012b0b9e595e995c09186427f
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Reproduction:

1. I have several images and containers created from those images.
2. I stop all of the containers.
3. I edit “/etc/docker/daemon.json” so that “live-restore” is set to false.
4. I run “systemctl daemon-reload” followed by “systemctl restart docker” (see the sketch after this list).
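
A minimal sketch of the commands I run, assuming all containers are running when I start (“vi” stands in for any editor):

docker stop $(docker ps -q)        # stop every running container
sudo vi /etc/docker/daemon.json    # set "live-restore": false
sudo systemctl daemon-reload       # pick up unit/config changes
sudo systemctl restart docker      # restart the daemon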

After the restart, starting or running any of the containers fails, and so does “docker inspect image-id”; the error looks like:

docker: Error response from daemon: open /var/lib/docker/overlay2/<hash>/committed: no such file or directory.

Running “ls -a /var/lib/docker/overlay2/” shows that the layer directories themselves are gone:

.  ..  backingFsBlockDev  l
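
For comparison, a healthy overlay2 directory has one subdirectory per layer alongside “l” and “backingFsBlockDev”, and those per-layer directories are where the “committed” file from the error above normally lives (the hash is a placeholder):

ls /var/lib/docker/overlay2/<layer-hash>/
# typically contains: committed  diff  link  lower  work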

I get the same problem if I set “debug” to true, leave “live-restore” at false, and then run “systemctl stop docker” (rather than restart).
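
The exact sequence, sketched (the editor is just an example):

sudo vi /etc/docker/daemon.json    # set "debug": true and "live-restore": false
sudo systemctl stop docker         # stop instead of restart
journalctl -eu docker.service      # read the daemon’s shutdown log

The log has this section: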

Apr 12 15:17:02 server systemd[1]: Stopping Docker Application Container Engine...
Apr 12 15:17:02 server dockerd[1103]: time="2021-04-12T15:17:02.148013764Z" level=info msg="Processing signal 'terminated'"
Apr 12 15:17:02 server dockerd[1103]: time="2021-04-12T15:17:02.148072464Z" level=debug msg="daemon configured with a 15 seconds minimum shutdown timeout"
Apr 12 15:17:02 server dockerd[1103]: time="2021-04-12T15:17:02.148091664Z" level=debug msg="start clean shutdown of all containers with a 15 seconds timeout..."
Apr 12 15:17:02 server dockerd[1103]: time="2021-04-12T15:17:02.148228663Z" level=debug msg="Trying to unmount /var/lib/docker/overlay2"
Apr 12 15:17:02 server dockerd[1103]: time="2021-04-12T15:17:02.152900243Z" level=debug msg="Unmounted /var/lib/docker/overlay2"
Apr 12 15:17:02 server dockerd[1103]: time="2021-04-12T15:17:02.153069742Z" level=debug msg="Unix socket /var/run/docker/libnetwork/60d39fda3d69.sock doesn't exist. cannot accept client connections"
Apr 12 15:17:02 server dockerd[1103]: time="2021-04-12T15:17:02.153105942Z" level=debug msg="Cleaning up old mountid : start."
Apr 12 15:17:02 server dockerd[1103]: time="2021-04-12T15:17:02.153216042Z" level=debug msg="Cleaning up old mountid : done."
Apr 12 15:17:02 server dockerd[1103]: time="2021-04-12T15:17:02.153316441Z" level=debug msg="Clean shutdown succeeded"
Apr 12 15:17:02 server dockerd[1103]: time="2021-04-12T15:17:02.153326741Z" level=info msg="Daemon shutdown complete"
Apr 12 15:17:02 server systemd[1]: Stopped Docker Application Container Engine.

Note the line where the daemon says it is “Trying to unmount /var/lib/docker/overlay2” during shutdown.

Contents of “/etc/docker/daemon.json”:

{
  "debug": true,
  "insecure-registries": [
    "kube-registry-persistent-secure.single-node.svc.cluster.local:5000"
  ],
  "live-restore": false,
  "bip": "192.168.0.1/16",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "7"
  }
}
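
Since a syntax error in “daemon.json” would itself keep the daemon from starting, I check that the file parses after every edit (a quick sketch, assuming python is available):

python -m json.tool /etc/docker/daemon.json    # prints the parsed file, or an error if the JSON is invalid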

“docker info”:

Client:
 Debug Mode: false

Server:
 Containers: 4
  Running: 4
  Paused: 0
  Stopped: 0
 Images: 3
 Server Version: 19.03.14
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: ea765aba0d05254012b0b9e595e995c09186427f
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 3.10.0-1127.el7.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 5.67GiB
 Name: server
 ID: UXYS:4SW5:7GRT:RQ63:5HTZ:R7UC:FTKX:WDHZ:3LZF:ZGVS:HLNT:QBJM
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 48
  Goroutines: 65
  System Time: 2021-04-12T15:31:34.777152119Z
  EventsListeners: 0
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  kube-registry-persistent-secure.single-node.svc.cluster.local:5000
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

Additionally, if I reboot the VM the problem goes away and the containers start as expected, but rebooting isn’t an acceptable workaround for my use case.

My ultimate goal is to re-configure the bridge IP (“bip”), and to apply that change I believe I need “live-restore” set to false.
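
For reference, the kind of change I ultimately want to make, sketched with a placeholder subnet:

sudo vi /etc/docker/daemon.json    # change "bip", e.g. to "10.10.0.1/16" (example value)
sudo systemctl restart docker
docker network inspect bridge | grep -E '"Subnet"|"Gateway"'    # verify the bridge picked up the new range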