LF Carry ... New Dev Exploring Docker

Hi! I’m new here and wanted to get up and running quickly, so …

FatDog64-810 (Puppy Linux)

Single-mountpoint ‘gotcha’ overcome by editing /etc/docker/daemon.json:

{
"graph":"/mnt/sda1/docker"
}
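
(How I apply the change, hedged: on Fatdog I start dockerd by hand rather than via an init service, so a restart is just:)

# killall dockerd    # stop the daemon
# dockerd &          # start it again so it re-reads daemon.json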

… am actually up and running but with the warnings listed below.

I am so new to this that genuinely, I have not a clue what I’m doing … help!

What I want:

… how do I go about that?

Other info below:


# docker run hello-world
time="2021-06-19T10:29:07.693929214Z" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e673d1f948db9fe16386e49b6bcdb218ff1c2c4ef6d18a991d1103c3c015122e pid=23941
time="2021-06-19T10:29:09.804369197Z" level=error msg="loading cgroup for 23964" error="cgroups: cgroup deleted"

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

time="2021-06-19T10:29:09.825145727Z" level=error msg="loading cgroup for 23964" error="cgroups: cgroup deleted"
INFO[2021-06-19T10:29:09.915840636Z] ignoring event                                container=e673d1f948db9fe16386e49b6bcdb218ff1c2c4ef6d18a991d1103c3c015122e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
INFO[2021-06-19T10:29:09.917314162Z] shim disconnected                             id=e673d1f948db9fe16386e49b6bcdb218ff1c2c4ef6d18a991d1103c3c015122e
ERRO[2021-06-19T10:29:09.918348624Z] copy shim log                                 error="read /proc/self/fd/16: file already closed"


# uname -a
Linux hostname 4.19.92 #1 SMP Fri Jan 3 21:58:36 EST 2020 x86_64 AMD Athlon(tm) II X2 215 Processor AuthenticAMD GNU/Linux


#  curl https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh > check-config.sh
# bash ./check-config.sh

info: reading kernel config from /proc/config.gz ...

Generally Necessary:
- cgroup hierarchy: single mountpoint! [/sys/fs/cgroup]
    (see https://github.com/tianon/cgroupfs-mount)
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled
- CONFIG_KEYS: enabled
- CONFIG_VETH: enabled (as module)
- CONFIG_BRIDGE: enabled (as module)
- CONFIG_BRIDGE_NETFILTER: enabled (as module)
- CONFIG_IP_NF_FILTER: enabled (as module)
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_IPVS: missing
- CONFIG_NETFILTER_XT_MARK: enabled (as module)
- CONFIG_IP_NF_NAT: enabled (as module)
- CONFIG_NF_NAT: enabled (as module)
- CONFIG_POSIX_MQUEUE: enabled
- CONFIG_NF_NAT_IPV4: enabled (as module)
- CONFIG_NF_NAT_NEEDED: enabled

Optional Features:
- CONFIG_USER_NS: enabled
- CONFIG_SECCOMP: enabled
- CONFIG_SECCOMP_FILTER: enabled
- CONFIG_CGROUP_PIDS: enabled
- CONFIG_MEMCG_SWAP: enabled
- CONFIG_MEMCG_SWAP_ENABLED: enabled
- CONFIG_LEGACY_VSYSCALL_EMULATE: enabled
- CONFIG_IOSCHED_CFQ: enabled
- CONFIG_CFQ_GROUP_IOSCHED: enabled
- CONFIG_BLK_CGROUP: enabled
- CONFIG_BLK_DEV_THROTTLING: missing
- CONFIG_CGROUP_PERF: enabled
- CONFIG_CGROUP_HUGETLB: missing
- CONFIG_NET_CLS_CGROUP: enabled (as module)
- CONFIG_CGROUP_NET_PRIO: enabled
- CONFIG_CFS_BANDWIDTH: enabled
- CONFIG_FAIR_GROUP_SCHED: enabled
- CONFIG_RT_GROUP_SCHED: missing
- CONFIG_IP_NF_TARGET_REDIRECT: enabled (as module)
- CONFIG_IP_VS: missing
- CONFIG_IP_VS_NFCT: missing
- CONFIG_IP_VS_PROTO_TCP: missing
- CONFIG_IP_VS_PROTO_UDP: missing
- CONFIG_IP_VS_RR: missing
- CONFIG_SECURITY_SELINUX: missing
- CONFIG_SECURITY_APPARMOR: missing
- CONFIG_EXT4_FS: enabled
- CONFIG_EXT4_FS_POSIX_ACL: missing
- CONFIG_EXT4_FS_SECURITY: enabled
    enable these ext4 configs if you are using ext3 or ext4 as backing filesystem
- Network Drivers:
  - "overlay":
    - CONFIG_VXLAN: enabled (as module)
    - CONFIG_BRIDGE_VLAN_FILTERING: enabled
      Optional (for encrypted networks):
      - CONFIG_CRYPTO: enabled
      - CONFIG_CRYPTO_AEAD: enabled
      - CONFIG_CRYPTO_GCM: enabled
      - CONFIG_CRYPTO_SEQIV: enabled
      - CONFIG_CRYPTO_GHASH: enabled
      - CONFIG_XFRM: enabled
      - CONFIG_XFRM_USER: enabled (as module)
      - CONFIG_XFRM_ALGO: enabled (as module)
      - CONFIG_INET_ESP: enabled (as module)
      - CONFIG_INET_XFRM_MODE_TRANSPORT: enabled
  - "ipvlan":
    - CONFIG_IPVLAN: missing
  - "macvlan":
    - CONFIG_MACVLAN: enabled (as module)
    - CONFIG_DUMMY: enabled (as module)
  - "ftp,tftp client in container":
    - CONFIG_NF_NAT_FTP: enabled (as module)
    - CONFIG_NF_CONNTRACK_FTP: enabled (as module)
    - CONFIG_NF_NAT_TFTP: enabled (as module)
    - CONFIG_NF_CONNTRACK_TFTP: enabled (as module)
- Storage Drivers:
  - "aufs":
    - CONFIG_AUFS_FS: enabled
  - "btrfs":
    - CONFIG_BTRFS_FS: enabled
    - CONFIG_BTRFS_FS_POSIX_ACL: missing
  - "devicemapper":
    - CONFIG_BLK_DEV_DM: enabled
    - CONFIG_DM_THIN_PROVISIONING: enabled (as module)
  - "overlay":
    - CONFIG_OVERLAY_FS: enabled (as module)
  - "zfs":
    - /dev/zfs: missing
    - zfs command: available
    - zpool command: available

Limits:
- /proc/sys/kernel/keys/root_maxkeys: 1000000



# docker info
Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 1
  Running: 1
  Paused: 0
  Stopped: 0
 Images: 13
 Server Version: 20.10.7
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc version: b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.19.92
 Operating System: Fatdog64 Linux 810
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 11.24GiB
 Name: x
 ID: x
 Docker Root Dir: /mnt/sda1/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
# 


Portainer running …

Was hoping for a docking great front end (Linux here) along these lines…


Example from Jelastic …

but you get my drift … what front ends are you using?

Looking for a really flexible home-hosting devops solution, possibly with a little external access, so I can host friends’ projects while they are developed.

What should I be looking at?

Containers the hard way: Gocker: A mini Docker written in Go

Anyone tried it?

Worth a read in terms of orientation …
wtf is a docker? Fat guy down at the harbour? etc.

Not good … broken containers on restart!!!

# docker start portainer
time="2021-06-20T11:12:55.703281542Z" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/36ade7d264f4d8939f84b199c190b3851aeb8f26b973d677a9748367597bdd0d pid=9004
INFO[2021-06-20T11:12:55.766291648Z] shim disconnected                             id=36ade7d264f4d8939f84b199c190b3851aeb8f26b973d677a9748367597bdd0d
ERRO[2021-06-20T11:12:55.767238832Z] copy shim log                                 error="read /proc/self/fd/12: file already closed"
ERRO[2021-06-20T11:12:55.768177487Z] stream copy error: reading from a closed fifo 
ERRO[2021-06-20T11:12:55.772116643Z] stream copy error: reading from a closed fifo 
ERRO[2021-06-20T11:12:56.454166900Z] 36ade7d264f4d8939f84b199c190b3851aeb8f26b973d677a9748367597bdd0d cleanup: failed to delete container from containerd: no such container 
ERRO[2021-06-20T11:12:56.454303206Z] Handler for POST /v1.41/containers/portainer/start returned error: OCI runtime create failed: container with id exists: 36ade7d264f4d8939f84b199c190b3851aeb8f26b973d677a9748367597bdd0d: unknown 
Error response from daemon: OCI runtime create failed: container with id exists: 36ade7d264f4d8939f84b199c190b3851aeb8f26b973d677a9748367597bdd0d: unknown
Error: failed to start containers: portainer
# INFO[2021-06-20T11:17:35.400799740Z] NetworkDB stats cerberus(fa7a119ddb29) - netID:rtgsx4nwlhauc5h21h5l8bm5v leaving:false netPeers:1 entries:2 Queue qLen:0 netMsg/s:0 

Fix:

 1. Stop the docker daemon, e.g.:
    killall -9 dockerd

 2. Remove (maybe back up first?) all folders in:
    /var/run/docker/runtime-runc/moby
    e.g. … 1235678479fsd498sg4hr9t989

 3. Restart docker:
    dockerd &

done.
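
(The same workaround rolled into one hedged shell sketch; the paths are the ones above, the backup location is just an example, and you should back up anything you are unsure about before removing it.)

#!/bin/sh
# Sketch of the manual cleanup above.
killall dockerd 2>/dev/null      # ask the daemon to stop
sleep 5
killall -9 dockerd 2>/dev/null   # force it if it is still running

# Back up, then clear, the stale per-container runtime state.
STATE=/var/run/docker/runtime-runc/moby
mkdir -p /root/runc-backup
cp -a "$STATE"/. /root/runc-backup/ 2>/dev/null
rm -rf "$STATE"/*

# Start the daemon again in the background.
dockerd &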

Later… same problem again.
This time deleting the runc files fails to fix the problem, and they are not recreated.

oops.

Edit: and after another restart … working… again … sort of.

Same behaviour here…

All containers = Exited after reboot, and they fail to start with this error:

ERRO[2021-06-20T12:31:05.326582435Z] Container not cleaned up from containerd from previous run container=nnnn error="id already in use"

Tried deleting folders in:
/var/run/docker/runtime-runc/moby
That worked once and Docker recreated the folders … but after another restart it is back to failing, and the folder remains empty.
oops.

Is there a way to avoid recreating all containers after each restart?

Is there a way to fix the problem once it has occurred?
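
(Regarding the first question, hedged: I am not sure it addresses the underlying Fatdog issue, but the standard Docker way to have a container come back when the daemon restarts is a restart policy, e.g. for the portainer container above:)

# docker update --restart unless-stopped portainer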

Feels like you use this thread as a scratch pad… at least for me things are not clear enough.

If you use FatDog64, you must either be using the static binary package provided by Docker or a redistributed package maintained by FatDog, Puppy Linux or some other third party. If it’s a redistributed package, you might be better off raising the ticket where the package is actually maintained.

Your logs indicate you are using Docker v20.10.7 and that your filesystem is capable of using the overlay2 storage driver. Can you explain what made it necessary to replace the graph driver?

Use of the ‘graph driver’ (yep, no idea what ‘graph’ is!) was required to get past that ^^. Docker wouldn’t start without it on my setup.

Happy to try anything else …

/etc/docker/daemon.json currently contains:

{ "graph":"/mnt/sda1/docker" }

What would you try?

I would be interested in seeing whether overlay2 works here, so what do I do?
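
(Side note: per the docker info output above, the daemon already reports overlay2. A quick check with the standard CLI:)

# docker info --format '{{.Driver}}'
overlay2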

As for the stopped containers: what would get them going again?

Yes … am using the precompiled binaries from the docker site.

I understand this as a response to my first and third paragraphs. Though, the second is most likely the reason you are having trouble.

I have no idea what causes the problem or how to fix it, as I always use the Ubuntu/Debian or CentOS packages from the official Docker repo on the matching system. I prefer to stay within the lanes of supported OSes.

Well … thanks for your input.

I am interested in other products eg. not docker. Ideas welcome.

You can try podman instead of docker. It actually is a drop-in replacement.
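
(A minimal illustration of the drop-in idea, assuming podman is installed; the usual approach is simply an alias:)

# alias docker=podman
# docker run hello-world   # now actually runs podman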

That said, I am not convinced that docker is the problem. It is more likely that the problem is how FatDog works or is configured… You will very likely see the same problem with podman… podman uses crun under the hood, while docker uses containerd, which uses runc under the hood. Though, I am afraid both will require more or less the same kernel features to apply their magic.

Hope you will find a solution. Good luck!


Certainly true there. FatDog is a mini Linux and I was pleasantly surprised Docker ran at all.

There’s always the option of rebuilding the kernel with the missing config options…
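
(For example, a hedged way to confirm which of the options check-config.sh flagged as missing the running kernel actually lacks, reading the same /proc/config.gz the script uses; no output means the option is not set:)

# zcat /proc/config.gz | grep -E 'CONFIG_IP_VS=|CONFIG_BLK_DEV_THROTTLING=|CONFIG_CGROUP_HUGETLB='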

Latest fail…

# docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
# dockerd &
[1] 2423
# WARN[2021-06-20T16:42:25.908624619Z] The "graph" config file option is deprecated. Please use "data-root" instead. 
INFO[2021-06-20T16:42:25.919627157Z] Starting up                                  
INFO[2021-06-20T16:42:25.987427070Z] libcontainerd: started new containerd process  pid=2436
INFO[2021-06-20T16:42:25.987609024Z] parsed scheme: "unix"                         module=grpc
INFO[2021-06-20T16:42:25.987653022Z] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-06-20T16:42:25.987735166Z] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2021-06-20T16:42:25.987795678Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-06-20T16:42:26.570075496Z] starting containerd                           revision=d71fcd7d8303cbf684402823e425e9dd2e99285d version=v1.4.6
INFO[2021-06-20T16:42:26.665141537Z] loading plugin "io.containerd.content.v1.content"...  type=io.containerd.content.v1
INFO[2021-06-20T16:42:26.666135781Z] loading plugin "io.containerd.snapshotter.v1.aufs"...  type=io.containerd.snapshotter.v1
INFO[2021-06-20T16:42:26.671108705Z] loading plugin "io.containerd.snapshotter.v1.btrfs"...  type=io.containerd.snapshotter.v1
INFO[2021-06-20T16:42:26.671627984Z] skip loading plugin "io.containerd.snapshotter.v1.btrfs"...  error="path /mnt/sda1/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2021-06-20T16:42:26.671697716Z] loading plugin "io.containerd.snapshotter.v1.devmapper"...  type=io.containerd.snapshotter.v1
WARN[2021-06-20T16:42:26.671826715Z] failed to load plugin io.containerd.snapshotter.v1.devmapper  error="devmapper not configured"
INFO[2021-06-20T16:42:26.671861364Z] loading plugin "io.containerd.snapshotter.v1.native"...  type=io.containerd.snapshotter.v1
INFO[2021-06-20T16:42:26.672411346Z] loading plugin "io.containerd.snapshotter.v1.overlayfs"...  type=io.containerd.snapshotter.v1
INFO[2021-06-20T16:42:26.673163497Z] loading plugin "io.containerd.snapshotter.v1.zfs"...  type=io.containerd.snapshotter.v1
INFO[2021-06-20T16:42:26.674359541Z] skip loading plugin "io.containerd.snapshotter.v1.zfs"...  error="path /mnt/sda1/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2021-06-20T16:42:26.674456244Z] loading plugin "io.containerd.metadata.v1.bolt"...  type=io.containerd.metadata.v1
WARN[2021-06-20T16:42:26.674660535Z] could not use snapshotter devmapper in metadata plugin  error="devmapper not configured"
INFO[2021-06-20T16:42:26.674707200Z] metadata content store policy set             policy=shared
INFO[2021-06-20T16:42:26.731836514Z] loading plugin "io.containerd.differ.v1.walking"...  type=io.containerd.differ.v1
INFO[2021-06-20T16:42:26.731974908Z] loading plugin "io.containerd.gc.v1.scheduler"...  type=io.containerd.gc.v1
INFO[2021-06-20T16:42:26.732839508Z] loading plugin "io.containerd.service.v1.introspection-service"...  type=io.containerd.service.v1
INFO[2021-06-20T16:42:26.733262908Z] loading plugin "io.containerd.service.v1.containers-service"...  type=io.containerd.service.v1
INFO[2021-06-20T16:42:26.733402540Z] loading plugin "io.containerd.service.v1.content-service"...  type=io.containerd.service.v1
INFO[2021-06-20T16:42:26.733560293Z] loading plugin "io.containerd.service.v1.diff-service"...  type=io.containerd.service.v1
INFO[2021-06-20T16:42:26.733700035Z] loading plugin "io.containerd.service.v1.images-service"...  type=io.containerd.service.v1
INFO[2021-06-20T16:42:26.733899253Z] loading plugin "io.containerd.service.v1.leases-service"...  type=io.containerd.service.v1
INFO[2021-06-20T16:42:26.734028689Z] loading plugin "io.containerd.service.v1.namespaces-service"...  type=io.containerd.service.v1
INFO[2021-06-20T16:42:26.734160268Z] loading plugin "io.containerd.service.v1.snapshots-service"...  type=io.containerd.service.v1
INFO[2021-06-20T16:42:26.734294243Z] loading plugin "io.containerd.runtime.v1.linux"...  type=io.containerd.runtime.v1
INFO[2021-06-20T16:42:26.734902440Z] loading plugin "io.containerd.runtime.v2.task"...  type=io.containerd.runtime.v2
WARN[2021-06-20T16:42:26.738942822Z] cleaning up after shim disconnected           id=2ed1b15e34bf7fcaca7a1677a1d80427818da2f9f87237ca26b65bd518bc03f2 namespace=moby
INFO[2021-06-20T16:42:26.739052419Z] cleaning up dead shim                        
WARN[2021-06-20T16:42:26.988335751Z] grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: error while dialing: dial unix:///var/run/docker/containerd/containerd.sock: timeout". Reconnecting...  module=grpc
WARN[2021-06-20T16:42:27.085114160Z] failed to clean up after shim disconnected    error="io.containerd.runc.v2: open /var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2ed1b15e34bf7fcaca7a1677a1d80427818da2f9f87237ca26b65bd518bc03f2/runtime: no such file or directory\n: exit status 1" id=2ed1b15e34bf7fcaca7a1677a1d80427818da2f9f87237ca26b65bd518bc03f2 namespace=moby
WARN[2021-06-20T16:42:27.092016958Z] cleaning up after shim disconnected           id=36ade7d264f4d8939f84b199c190b3851aeb8f26b973d677a9748367597bdd0d namespace=moby
INFO[2021-06-20T16:42:27.092075015Z] cleaning up dead shim                        
WARN[2021-06-20T16:42:27.106203703Z] failed to clean up after shim disconnected    error="io.containerd.runc.v2: open /var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/36ade7d264f4d8939f84b199c190b3851aeb8f26b973d677a9748367597bdd0d/runtime: no such file or directory\n: exit status 1" id=36ade7d264f4d8939f84b199c190b3851aeb8f26b973d677a9748367597bdd0d namespace=moby
WARN[2021-06-20T16:42:27.107645968Z] cleaning up after shim disconnected           id=5bb269675d20106de4e841c1a058d0899d0817b6d87923d4a7b91e95d5c4cbb6 namespace=moby
INFO[2021-06-20T16:42:27.107687995Z] cleaning up dead shim                        
WARN[2021-06-20T16:42:27.128733542Z] failed to clean up after shim disconnected    error="io.containerd.runc.v2: open /var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5bb269675d20106de4e841c1a058d0899d0817b6d87923d4a7b91e95d5c4cbb6/runtime: no such file or directory\n: exit status 1" id=5bb269675d20106de4e841c1a058d0899d0817b6d87923d4a7b91e95d5c4cbb6 namespace=moby
INFO[2021-06-20T16:42:27.280750937Z] loading plugin "io.containerd.monitor.v1.cgroups"...  type=io.containerd.monitor.v1
INFO[2021-06-20T16:42:27.302270105Z] loading plugin "io.containerd.service.v1.tasks-service"...  type=io.containerd.service.v1
INFO[2021-06-20T16:42:27.302476262Z] loading plugin "io.containerd.internal.v1.restart"...  type=io.containerd.internal.v1
INFO[2021-06-20T16:42:27.302651590Z] loading plugin "io.containerd.grpc.v1.containers"...  type=io.containerd.grpc.v1
INFO[2021-06-20T16:42:27.302751941Z] loading plugin "io.containerd.grpc.v1.content"...  type=io.containerd.grpc.v1
INFO[2021-06-20T16:42:27.302845696Z] loading plugin "io.containerd.grpc.v1.diff"...  type=io.containerd.grpc.v1
INFO[2021-06-20T16:42:27.302936138Z] loading plugin "io.containerd.grpc.v1.events"...  type=io.containerd.grpc.v1
INFO[2021-06-20T16:42:27.303026614Z] loading plugin "io.containerd.grpc.v1.healthcheck"...  type=io.containerd.grpc.v1
INFO[2021-06-20T16:42:27.303120603Z] loading plugin "io.containerd.grpc.v1.images"...  type=io.containerd.grpc.v1
INFO[2021-06-20T16:42:27.303211253Z] loading plugin "io.containerd.grpc.v1.leases"...  type=io.containerd.grpc.v1
INFO[2021-06-20T16:42:27.303302087Z] loading plugin "io.containerd.grpc.v1.namespaces"...  type=io.containerd.grpc.v1
INFO[2021-06-20T16:42:27.303612881Z] loading plugin "io.containerd.internal.v1.opt"...  type=io.containerd.internal.v1
INFO[2021-06-20T16:42:27.312880161Z] loading plugin "io.containerd.grpc.v1.snapshots"...  type=io.containerd.grpc.v1
INFO[2021-06-20T16:42:27.312971411Z] loading plugin "io.containerd.grpc.v1.tasks"...  type=io.containerd.grpc.v1
INFO[2021-06-20T16:42:27.313008260Z] loading plugin "io.containerd.grpc.v1.version"...  type=io.containerd.grpc.v1
INFO[2021-06-20T16:42:27.313040270Z] loading plugin "io.containerd.grpc.v1.introspection"...  type=io.containerd.grpc.v1
INFO[2021-06-20T16:42:27.313468510Z] serving...                                    address=/var/run/docker/containerd/containerd-debug.sock
INFO[2021-06-20T16:42:27.313619956Z] serving...                                    address=/var/run/docker/containerd/containerd.sock.ttrpc
INFO[2021-06-20T16:42:27.313730031Z] serving...                                    address=/var/run/docker/containerd/containerd.sock
INFO[2021-06-20T16:42:27.314209828Z] containerd successfully booted in 0.766240s  
INFO[2021-06-20T16:42:28.113472377Z] parsed scheme: "unix"                         module=grpc
INFO[2021-06-20T16:42:28.113607107Z] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-06-20T16:42:28.113694404Z] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2021-06-20T16:42:28.113794194Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-06-20T16:42:28.192141185Z] parsed scheme: "unix"                         module=grpc
INFO[2021-06-20T16:42:28.192213459Z] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-06-20T16:42:28.192258825Z] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2021-06-20T16:42:28.192282933Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-06-20T16:42:29.012598409Z] [graphdriver] using prior storage driver: overlay2 
WARN[2021-06-20T16:42:33.449064648Z] Your kernel does not support CPU realtime scheduler 
WARN[2021-06-20T16:42:33.449143374Z] Your kernel does not support cgroup blkio throttle.read_bps_device 
WARN[2021-06-20T16:42:33.449168511Z] Your kernel does not support cgroup blkio throttle.write_bps_device 
WARN[2021-06-20T16:42:33.449206595Z] Your kernel does not support cgroup blkio throttle.read_iops_device 
WARN[2021-06-20T16:42:33.449237157Z] Your kernel does not support cgroup blkio throttle.write_iops_device 
INFO[2021-06-20T16:42:33.475745019Z] Loading containers: start.                   
INFO[2021-06-20T16:42:37.368705034Z] Removing stale sandbox 4b84805135f29e31f85db4c940f967b7432f1d00a2a642961af797e5e14f8043 (36ade7d264f4d8939f84b199c190b3851aeb8f26b973d677a9748367597bdd0d) 
WARN[2021-06-20T16:42:37.518739935Z] Error (Unable to complete atomic operation, key modified) deleting object [endpoint 94846fa1c7011bd9291e489f82aa4c757ccbb1dc1827a50eb6572ddd1bd02918 d74cc24fa65ff9e5621ccbe7cc98ae0361aa54dd846732adb27bea9dedec5a94], retrying.... 
INFO[2021-06-20T16:42:38.733456997Z] Removing stale sandbox df10c71639c52f14450cabf84315136a5d083ab3ad23a90ddfcc73e1f16063cf (2ed1b15e34bf7fcaca7a1677a1d80427818da2f9f87237ca26b65bd518bc03f2) 
WARN[2021-06-20T16:42:38.867462819Z] Error (Unable to complete atomic operation, key modified) deleting object [endpoint d3887725c046275de76582ad480d39a1b2f213c8e199c10a58545847713a31be 89a89f7f867a7f28a891256c8502bef6cf1bf38d039523732f456554e6078856], retrying.... 
INFO[2021-06-20T16:42:40.098790658Z] Removing stale sandbox e9d60c78dfd7f9144626355e12b1bb2487475eb4ae3022b1ca840a1678bc8c01 (5bb269675d20106de4e841c1a058d0899d0817b6d87923d4a7b91e95d5c4cbb6) 
WARN[2021-06-20T16:42:40.207616281Z] Error (Unable to complete atomic operation, key modified) deleting object [endpoint d3887725c046275de76582ad480d39a1b2f213c8e199c10a58545847713a31be 9f8b49aab30ef96805960de35deac413c0eca441b97ec29f5b3900420609b214], retrying.... 
INFO[2021-06-20T16:42:41.429680593Z] Removing stale sandbox ingress_sbox (ingress-sbox) 
WARN[2021-06-20T16:42:41.514766718Z] Error (Unable to complete atomic operation, key modified) deleting object [endpoint eaa4ebef969e71239bc9bca575f8c7448a9b095c9d2bf24d77dc8d031bb88e28 2ba924a33628b906ce57ba3170d07bed963700e656f4bd59915bc2352f47cd8b], retrying.... 
INFO[2021-06-20T16:42:41.806360273Z] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address 
ERRO[2021-06-20T16:42:43.283738040Z] stream copy error: reading from a closed fifo 
ERRO[2021-06-20T16:42:43.284143884Z] stream copy error: reading from a closed fifo 
ERRO[2021-06-20T16:42:43.838002052Z] stream copy error: reading from a closed fifo 
ERRO[2021-06-20T16:42:43.838169304Z] stream copy error: reading from a closed fifo 
ERRO[2021-06-20T16:42:44.946015160Z] 5bb269675d20106de4e841c1a058d0899d0817b6d87923d4a7b91e95d5c4cbb6 cleanup: failed to delete container from containerd: no such container 
ERRO[2021-06-20T16:42:44.946100112Z] failed to start container                     container=5bb269675d20106de4e841c1a058d0899d0817b6d87923d4a7b91e95d5c4cbb6 error="mkdir /var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5bb269675d20106de4e841c1a058d0899d0817b6d87923d4a7b91e95d5c4cbb6: file exists: unknown"
ERRO[2021-06-20T16:42:45.229985629Z] stream copy error: reading from a closed fifo 
ERRO[2021-06-20T16:42:45.231123507Z] stream copy error: reading from a closed fifo 
ERRO[2021-06-20T16:42:45.655789896Z] 36ade7d264f4d8939f84b199c190b3851aeb8f26b973d677a9748367597bdd0d cleanup: failed to delete container from containerd: no such container 
ERRO[2021-06-20T16:42:45.655881256Z] failed to start container                     container=36ade7d264f4d8939f84b199c190b3851aeb8f26b973d677a9748367597bdd0d error="mkdir /var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/36ade7d264f4d8939f84b199c190b3851aeb8f26b973d677a9748367597bdd0d: file exists: unknown"
ERRO[2021-06-20T16:42:45.950999143Z] 2ed1b15e34bf7fcaca7a1677a1d80427818da2f9f87237ca26b65bd518bc03f2 cleanup: failed to delete container from containerd: no such container 
ERRO[2021-06-20T16:42:45.951099499Z] failed to start container                     container=2ed1b15e34bf7fcaca7a1677a1d80427818da2f9f87237ca26b65bd518bc03f2 error="mkdir /var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2ed1b15e34bf7fcaca7a1677a1d80427818da2f9f87237ca26b65bd518bc03f2: file exists: unknown"
INFO[2021-06-20T16:42:45.951233947Z] Loading containers: done.                    
INFO[2021-06-20T16:42:46.374257948Z] Docker daemon                                 commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
INFO[2021-06-20T16:42:46.928595153Z] parsed scheme: ""                             module=grpc
INFO[2021-06-20T16:42:46.928678324Z] scheme "" not registered, fallback to default scheme  module=grpc
INFO[2021-06-20T16:42:46.928840894Z] ccResolverWrapper: sending update to cc: {[{/var/run/docker/swarm/control.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2021-06-20T16:42:46.928867006Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-06-20T16:42:46.929012350Z] blockingPicker: the picked transport is not ready, loop back to repick  module=grpc
INFO[2021-06-20T16:42:46.934079002Z] parsed scheme: ""                             module=grpc
INFO[2021-06-20T16:42:46.934128779Z] scheme "" not registered, fallback to default scheme  module=grpc
INFO[2021-06-20T16:42:46.934530786Z] ccResolverWrapper: sending update to cc: {[{192.168.18.101:2377  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2021-06-20T16:42:46.934557097Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-06-20T16:42:46.934614193Z] manager selected by agent for new session: {lx3qvt6y2qea1ui3yqt5cmdyh 192.168.18.101:2377}  module=node/agent node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T16:42:46.952462889Z] waiting 0s before registering session         module=node/agent node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T16:42:46.968519949Z] Listening for connections                     addr="[::]:2377" module=node node.id=lx3qvt6y2qea1ui3yqt5cmdyh proto=tcp
INFO[2021-06-20T16:42:46.968826521Z] Listening for local connections               addr=/var/run/docker/swarm/control.sock module=node node.id=lx3qvt6y2qea1ui3yqt5cmdyh proto=unix
INFO[2021-06-20T16:42:47.265502755Z] 28689e58c3f59e13 became follower at term 17   module=raft node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T16:42:47.265591359Z] newRaft 28689e58c3f59e13 [peers: [], term: 17, commit: 1141, applied: 0, lastindex: 1141, lastterm: 17]  module=raft node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T16:42:47.543946211Z] 28689e58c3f59e13 is starting a new election at term 17  module=raft node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T16:42:47.544059829Z] 28689e58c3f59e13 became candidate at term 18  module=raft node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T16:42:47.544171471Z] 28689e58c3f59e13 received MsgVoteResp from 28689e58c3f59e13 at term 18  module=raft node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T16:42:47.544226876Z] 28689e58c3f59e13 became leader at term 18     module=raft node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T16:42:47.544246865Z] raft.node: 28689e58c3f59e13 elected leader 28689e58c3f59e13 at term 18  module=raft node.id=lx3qvt6y2qea1ui3yqt5cmdyh
ERRO[2021-06-20T16:42:47.561950811Z] error creating cluster object                 error="name conflicts with an existing object" module=node node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T16:42:47.562129128Z] leadership changed from not yet part of a raft cluster to lx3qvt6y2qea1ui3yqt5cmdyh  module=node node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T16:42:47.562208814Z] dispatcher starting                           module=dispatcher node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T16:42:48.208332884Z] worker lx3qvt6y2qea1ui3yqt5cmdyh was successfully registered  method="(*Dispatcher).register"
INFO[2021-06-20T16:42:48.257013135Z] initialized VXLAN UDP port to 4789           
INFO[2021-06-20T16:42:48.257350821Z] Daemon has completed initialization          
INFO[2021-06-20T16:42:48.257850797Z] Initializing Libnetwork Agent Listen-Addr=0.0.0.0 Local-addr=192.168.18.101 Adv-addr=192.168.18.101 Data-addr= Remote-addr-list=[] MTU=1500 
INFO[2021-06-20T16:42:48.258168017Z] New memberlist node - Node:cerberus will use memberlist nodeID:4a0071a84425 with config:&{NodeID:4a0071a84425 Hostname:cerberus BindAddr:0.0.0.0 AdvertiseAddr:192.168.18.101 BindPort:0 Keys:[[133 149 5 128 180 60 145 233 205 137 110 154 33 56 79 133] [253 204 38 153 220 248 91 233 124 159 129 200 195 177 148 11] [173 108 87 3 41 3 11 163 140 104 168 29 202 220 128 248]] PacketBufferSize:1400 reapEntryInterval:1800000000000 reapNetworkInterval:1825000000000 StatsPrintPeriod:5m0s HealthPrintPeriod:1m0s} 
INFO[2021-06-20T16:42:48.294458515Z] Node 4a0071a84425/192.168.18.101, joined gossip cluster 
INFO[2021-06-20T16:42:48.336769449Z] Node 4a0071a84425/192.168.18.101, added to nodes list 
INFO[2021-06-20T16:42:48.654134590Z] API listen on /var/run/docker.sock           
ERRO[2021-06-20T16:42:49.439665408Z] error reading the kernel parameter net.ipv4.vs.conn_reuse_mode  error="open /proc/sys/net/ipv4/vs/conn_reuse_mode: no such file or directory"
ERRO[2021-06-20T16:42:49.439736694Z] error reading the kernel parameter net.ipv4.vs.expire_nodest_conn  error="open /proc/sys/net/ipv4/vs/expire_nodest_conn: no such file or directory"
ERRO[2021-06-20T16:42:49.439773497Z] error reading the kernel parameter net.ipv4.vs.expire_quiescent_template  error="open /proc/sys/net/ipv4/vs/expire_quiescent_template: no such file or directory"
docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
# docker ps -a
CONTAINER ID   IMAGE                    COMMAND                  CREATED             STATUS                            PORTS                                                                                  NAMES
2ed1b15e34bf   cyclos/cyclos            "catalina.sh run"        About an hour ago   Exited (255) About a minute ago   0.0.0.0:90->8080/tcp, :::90->8080/tcp                                                  cyclos-app
5bb269675d20   postgis/postgis          "docker-entrypoint.s…"   About an hour ago   Exited (255) About a minute ago   5432/tcp                                                                               cyclos-db
36ade7d264f4   portainer/portainer-ce   "/portainer"             17 hours ago        Exited (255) About a minute ago   0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp   portainer
# 

# WARN[2021-06-20T16:42:25.908624619Z] The "graph" config file option is deprecated. Please use "data-root" instead. 

^^ possibly significant.

INFO[2021-06-20T16:42:26.671627984Z] skip loading plugin "io.containerd.snapshotter.v1.btrfs"...  error="path /mnt/sda1/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1

Nasty.

grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: error while dialing: dial unix:///var/run/docker/containerd/containerd.sock: timeout". Reconnecting...  module=grpc

This seems significant.

failed to start container                     container=5bb269675d20106de4e841c1a058d0899d0817b6d87923d4a7b91e95d5c4cbb6 error="mkdir /var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5bb269675d20106de4e841c1a058d0899d0817b6d87923d4a7b91e95d5c4cbb6: file exists: unknown"

And yep … broken again (after restart).

Removing the folders from these locations got the containers running again…
/var/run/docker/runtime-runc/moby
/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/

… but … feel I may have lost some data.

What are these files for? What harm is done by removing them?
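
(My understanding, hedged: those directories hold per-container runtime state for the containerd shims, pids, sockets and the like, not your images or volumes; persistent data lives under the data root. A quick way to look at both, with the paths configured above:)

# Runtime state, recreated when a container starts:
ls /var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/
# Persistent data (images, container layers, volumes):
ls /mnt/sda1/docker/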

Following the deprecation warning above, /etc/docker/daemon.json now contains:

{
   "data-root":"/mnt/sda1/docker"
}

That seems a little happier. Will try a full restart in a bit.
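
(A quick sanity check that the new key took effect, using the standard CLI; the expected output matches the Docker Root Dir shown in docker info above:)

# docker info --format '{{.DockerRootDir}}'
/mnt/sda1/docker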

# INFO[2021-06-20T17:09:12.845356933Z] Starting up                                  
INFO[2021-06-20T17:09:12.847754701Z] libcontainerd: started new containerd process  pid=6995
INFO[2021-06-20T17:09:12.847820760Z] parsed scheme: "unix"                         module=grpc
INFO[2021-06-20T17:09:12.847833632Z] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-06-20T17:09:12.847881724Z] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2021-06-20T17:09:12.847897092Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-06-20T17:09:12.882020578Z] starting containerd                           revision=d71fcd7d8303cbf684402823e425e9dd2e99285d version=v1.4.6
INFO[2021-06-20T17:09:12.929306304Z] loading plugin "io.containerd.content.v1.content"...  type=io.containerd.content.v1
INFO[2021-06-20T17:09:12.929437369Z] loading plugin "io.containerd.snapshotter.v1.aufs"...  type=io.containerd.snapshotter.v1
INFO[2021-06-20T17:09:12.931846091Z] loading plugin "io.containerd.snapshotter.v1.btrfs"...  type=io.containerd.snapshotter.v1
INFO[2021-06-20T17:09:12.932248059Z] skip loading plugin "io.containerd.snapshotter.v1.btrfs"...  error="path /mnt/sda1/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2021-06-20T17:09:12.932294457Z] loading plugin "io.containerd.snapshotter.v1.devmapper"...  type=io.containerd.snapshotter.v1
WARN[2021-06-20T17:09:12.932329173Z] failed to load plugin io.containerd.snapshotter.v1.devmapper  error="devmapper not configured"
INFO[2021-06-20T17:09:12.932349333Z] loading plugin "io.containerd.snapshotter.v1.native"...  type=io.containerd.snapshotter.v1
INFO[2021-06-20T17:09:12.932391962Z] loading plugin "io.containerd.snapshotter.v1.overlayfs"...  type=io.containerd.snapshotter.v1
INFO[2021-06-20T17:09:12.932537451Z] loading plugin "io.containerd.snapshotter.v1.zfs"...  type=io.containerd.snapshotter.v1
INFO[2021-06-20T17:09:12.932862259Z] skip loading plugin "io.containerd.snapshotter.v1.zfs"...  error="path /mnt/sda1/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2021-06-20T17:09:12.932897582Z] loading plugin "io.containerd.metadata.v1.bolt"...  type=io.containerd.metadata.v1
WARN[2021-06-20T17:09:12.932936411Z] could not use snapshotter devmapper in metadata plugin  error="devmapper not configured"
INFO[2021-06-20T17:09:12.932957334Z] metadata content store policy set             policy=shared
INFO[2021-06-20T17:09:12.933116510Z] loading plugin "io.containerd.differ.v1.walking"...  type=io.containerd.differ.v1
INFO[2021-06-20T17:09:12.933152485Z] loading plugin "io.containerd.gc.v1.scheduler"...  type=io.containerd.gc.v1
INFO[2021-06-20T17:09:12.933211649Z] loading plugin "io.containerd.service.v1.introspection-service"...  type=io.containerd.service.v1
INFO[2021-06-20T17:09:12.933268043Z] loading plugin "io.containerd.service.v1.containers-service"...  type=io.containerd.service.v1
INFO[2021-06-20T17:09:12.933293311Z] loading plugin "io.containerd.service.v1.content-service"...  type=io.containerd.service.v1
INFO[2021-06-20T17:09:12.933315417Z] loading plugin "io.containerd.service.v1.diff-service"...  type=io.containerd.service.v1
INFO[2021-06-20T17:09:12.933339555Z] loading plugin "io.containerd.service.v1.images-service"...  type=io.containerd.service.v1
INFO[2021-06-20T17:09:12.933362748Z] loading plugin "io.containerd.service.v1.leases-service"...  type=io.containerd.service.v1
INFO[2021-06-20T17:09:12.933386526Z] loading plugin "io.containerd.service.v1.namespaces-service"...  type=io.containerd.service.v1
INFO[2021-06-20T17:09:12.933410093Z] loading plugin "io.containerd.service.v1.snapshots-service"...  type=io.containerd.service.v1
INFO[2021-06-20T17:09:12.933434083Z] loading plugin "io.containerd.runtime.v1.linux"...  type=io.containerd.runtime.v1
INFO[2021-06-20T17:09:12.933643383Z] loading plugin "io.containerd.runtime.v2.task"...  type=io.containerd.runtime.v2
ERRO[2021-06-20T17:09:12.947051241Z] loading container old                         error="container \"old\" in namespace \"moby\": not found"
INFO[2021-06-20T17:09:12.948411121Z] loading plugin "io.containerd.monitor.v1.cgroups"...  type=io.containerd.monitor.v1
INFO[2021-06-20T17:09:12.949599359Z] loading plugin "io.containerd.service.v1.tasks-service"...  type=io.containerd.service.v1
INFO[2021-06-20T17:09:12.949755909Z] loading plugin "io.containerd.internal.v1.restart"...  type=io.containerd.internal.v1
INFO[2021-06-20T17:09:12.949845898Z] loading plugin "io.containerd.grpc.v1.containers"...  type=io.containerd.grpc.v1
INFO[2021-06-20T17:09:12.949870723Z] loading plugin "io.containerd.grpc.v1.content"...  type=io.containerd.grpc.v1
INFO[2021-06-20T17:09:12.949926472Z] loading plugin "io.containerd.grpc.v1.diff"...  type=io.containerd.grpc.v1
INFO[2021-06-20T17:09:12.949965678Z] loading plugin "io.containerd.grpc.v1.events"...  type=io.containerd.grpc.v1
INFO[2021-06-20T17:09:12.950004061Z] loading plugin "io.containerd.grpc.v1.healthcheck"...  type=io.containerd.grpc.v1
INFO[2021-06-20T17:09:12.950044254Z] loading plugin "io.containerd.grpc.v1.images"...  type=io.containerd.grpc.v1
INFO[2021-06-20T17:09:12.950083269Z] loading plugin "io.containerd.grpc.v1.leases"...  type=io.containerd.grpc.v1
INFO[2021-06-20T17:09:12.950121283Z] loading plugin "io.containerd.grpc.v1.namespaces"...  type=io.containerd.grpc.v1
INFO[2021-06-20T17:09:12.950160566Z] loading plugin "io.containerd.internal.v1.opt"...  type=io.containerd.internal.v1
INFO[2021-06-20T17:09:12.950287110Z] loading plugin "io.containerd.grpc.v1.snapshots"...  type=io.containerd.grpc.v1
INFO[2021-06-20T17:09:12.950321321Z] loading plugin "io.containerd.grpc.v1.tasks"...  type=io.containerd.grpc.v1
INFO[2021-06-20T17:09:12.950385156Z] loading plugin "io.containerd.grpc.v1.version"...  type=io.containerd.grpc.v1
INFO[2021-06-20T17:09:12.950423548Z] loading plugin "io.containerd.grpc.v1.introspection"...  type=io.containerd.grpc.v1
INFO[2021-06-20T17:09:12.950805546Z] serving...                                    address=/var/run/docker/containerd/containerd-debug.sock
INFO[2021-06-20T17:09:12.951001075Z] serving...                                    address=/var/run/docker/containerd/containerd.sock.ttrpc
INFO[2021-06-20T17:09:12.951115541Z] serving...                                    address=/var/run/docker/containerd/containerd.sock
INFO[2021-06-20T17:09:12.951149953Z] containerd successfully booted in 0.070467s  
INFO[2021-06-20T17:09:12.964984768Z] parsed scheme: "unix"                         module=grpc
INFO[2021-06-20T17:09:12.965045672Z] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-06-20T17:09:12.965081580Z] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2021-06-20T17:09:12.965097835Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-06-20T17:09:12.978230003Z] parsed scheme: "unix"                         module=grpc
INFO[2021-06-20T17:09:12.978891151Z] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2021-06-20T17:09:12.979392681Z] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2021-06-20T17:09:12.979433408Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-06-20T17:09:13.225118127Z] [graphdriver] using prior storage driver: overlay2 
WARN[2021-06-20T17:09:13.522964391Z] Your kernel does not support CPU realtime scheduler 
WARN[2021-06-20T17:09:13.523046071Z] Your kernel does not support cgroup blkio throttle.read_bps_device 
WARN[2021-06-20T17:09:13.523233430Z] Your kernel does not support cgroup blkio throttle.write_bps_device 
WARN[2021-06-20T17:09:13.523328508Z] Your kernel does not support cgroup blkio throttle.read_iops_device 
WARN[2021-06-20T17:09:13.523464060Z] Your kernel does not support cgroup blkio throttle.write_iops_device 
INFO[2021-06-20T17:09:13.524148279Z] Loading containers: start.                   
INFO[2021-06-20T17:09:14.056817726Z] ignoring event                                container=36ade7d264f4d8939f84b199c190b3851aeb8f26b973d677a9748367597bdd0d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
INFO[2021-06-20T17:09:14.057328887Z] shim disconnected                             id=36ade7d264f4d8939f84b199c190b3851aeb8f26b973d677a9748367597bdd0d
ERRO[2021-06-20T17:09:14.057472989Z] copy shim log                                 error="read /proc/self/fd/9: file already closed"
INFO[2021-06-20T17:09:14.304248179Z] ignoring event                                container=2ed1b15e34bf7fcaca7a1677a1d80427818da2f9f87237ca26b65bd518bc03f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
INFO[2021-06-20T17:09:14.304940363Z] shim disconnected                             id=2ed1b15e34bf7fcaca7a1677a1d80427818da2f9f87237ca26b65bd518bc03f2
ERRO[2021-06-20T17:09:14.305076307Z] copy shim log                                 error="read /proc/self/fd/6: file already closed"
INFO[2021-06-20T17:09:15.161203567Z] ignoring event                                container=5bb269675d20106de4e841c1a058d0899d0817b6d87923d4a7b91e95d5c4cbb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
INFO[2021-06-20T17:09:15.162160895Z] shim disconnected                             id=5bb269675d20106de4e841c1a058d0899d0817b6d87923d4a7b91e95d5c4cbb6
ERRO[2021-06-20T17:09:15.162249928Z] copy shim log                                 error="read /proc/self/fd/12: file already closed"
INFO[2021-06-20T17:09:17.132231292Z] Removing stale sandbox ingress_sbox (ingress-sbox) 
WARN[2021-06-20T17:09:17.220781041Z] Error (Unable to complete atomic operation, key modified) deleting object [endpoint eaa4ebef969e71239bc9bca575f8c7448a9b095c9d2bf24d77dc8d031bb88e28 35e85dc64b11bda4010b584b837f77bea0f6abc29f86433accf1d0979e5c2c78], retrying.... 
INFO[2021-06-20T17:09:18.498719424Z] Removing stale sandbox 0c5f4f5264f2a59d495e8be1e5f2497b3a5f5d62c999da360fcc8d5a91920fe4 (5bb269675d20106de4e841c1a058d0899d0817b6d87923d4a7b91e95d5c4cbb6) 
WARN[2021-06-20T17:09:18.611192951Z] Error (Unable to complete atomic operation, key modified) deleting object [endpoint d3887725c046275de76582ad480d39a1b2f213c8e199c10a58545847713a31be 8ba7eda50669dfd961c6806e8e7266ded0813f706e5dca9367d9325a1a3ad0f1], retrying.... 
INFO[2021-06-20T17:09:19.889400900Z] Removing stale sandbox 2750ec67b504bed83521dc1081e64aa7a7abf61747d8a793485de109537650d1 (36ade7d264f4d8939f84b199c190b3851aeb8f26b973d677a9748367597bdd0d) 
WARN[2021-06-20T17:09:20.034725341Z] Error (Unable to complete atomic operation, key modified) deleting object [endpoint 722374d50d264fd40b69026e0e447f20781e9c26550d0f096351eb6b726a8117 e88c3c06d29f897974653826ef25e102bbfc5a4cdd99cc10db935af861cd1412], retrying.... 
INFO[2021-06-20T17:09:21.279079252Z] Removing stale sandbox 9ddf079a60c4dc78777602149e6ab0fd6ce2bb0bc6de9e63d9edd1532917ddd3 (2ed1b15e34bf7fcaca7a1677a1d80427818da2f9f87237ca26b65bd518bc03f2) 
WARN[2021-06-20T17:09:21.374875436Z] Error (Unable to complete atomic operation, key modified) deleting object [endpoint d3887725c046275de76582ad480d39a1b2f213c8e199c10a58545847713a31be 94a0ac30547a7de457ebd3c6a3083b08ebf3329f8115bc91ce98ce39fc3d4d5f], retrying.... 
INFO[2021-06-20T17:09:21.716614638Z] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address 
time="2021-06-20T17:09:22.440317755Z" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/36ade7d264f4d8939f84b199c190b3851aeb8f26b973d677a9748367597bdd0d pid=7418
time="2021-06-20T17:09:24.202589747Z" level=error msg="loading cgroup for 7438" error="cgroups: cgroup deleted"
time="2021-06-20T17:09:24.229066459Z" level=error msg="loading cgroup for 7438" error="cgroups: cgroup deleted"
INFO[2021-06-20T17:09:24.345638699Z] Loading containers: done.                    
INFO[2021-06-20T17:09:24.812241073Z] Docker daemon                                 commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
INFO[2021-06-20T17:09:24.836643938Z] parsed scheme: ""                             module=grpc
INFO[2021-06-20T17:09:24.836758885Z] scheme "" not registered, fallback to default scheme  module=grpc
INFO[2021-06-20T17:09:24.836800069Z] ccResolverWrapper: sending update to cc: {[{/var/run/docker/swarm/control.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2021-06-20T17:09:24.836831719Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-06-20T17:09:24.836964424Z] blockingPicker: the picked transport is not ready, loop back to repick  module=grpc
INFO[2021-06-20T17:09:24.856365926Z] Listening for connections                     addr="[::]:2377" module=node node.id=lx3qvt6y2qea1ui3yqt5cmdyh proto=tcp
INFO[2021-06-20T17:09:24.857911958Z] Listening for local connections               addr=/var/run/docker/swarm/control.sock module=node node.id=lx3qvt6y2qea1ui3yqt5cmdyh proto=unix
INFO[2021-06-20T17:09:24.899469677Z] parsed scheme: ""                             module=grpc
INFO[2021-06-20T17:09:24.899580432Z] scheme "" not registered, fallback to default scheme  module=grpc
INFO[2021-06-20T17:09:24.900386185Z] ccResolverWrapper: sending update to cc: {[{192.168.18.101:2377  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2021-06-20T17:09:24.900460057Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2021-06-20T17:09:24.900597531Z] manager selected by agent for new session: {lx3qvt6y2qea1ui3yqt5cmdyh 192.168.18.101:2377}  module=node/agent node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T17:09:24.906494985Z] 28689e58c3f59e13 became follower at term 18   module=raft node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T17:09:24.906571983Z] newRaft 28689e58c3f59e13 [peers: [], term: 18, commit: 1147, applied: 0, lastindex: 1147, lastterm: 18]  module=raft node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T17:09:24.922300374Z] waiting 0s before registering session         module=node/agent node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T17:09:25.311802711Z] 28689e58c3f59e13 is starting a new election at term 18  module=raft node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T17:09:25.315727426Z] 28689e58c3f59e13 became candidate at term 19  module=raft node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T17:09:25.315884899Z] 28689e58c3f59e13 received MsgVoteResp from 28689e58c3f59e13 at term 19  module=raft node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T17:09:25.315943983Z] 28689e58c3f59e13 became leader at term 19     module=raft node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T17:09:25.315992786Z] raft.node: 28689e58c3f59e13 elected leader 28689e58c3f59e13 at term 19  module=raft node.id=lx3qvt6y2qea1ui3yqt5cmdyh
ERRO[2021-06-20T17:09:25.344421633Z] error creating cluster object                 error="name conflicts with an existing object" module=node node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T17:09:25.344732506Z] leadership changed from not yet part of a raft cluster to lx3qvt6y2qea1ui3yqt5cmdyh  module=node node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T17:09:25.344877139Z] dispatcher starting                           module=dispatcher node.id=lx3qvt6y2qea1ui3yqt5cmdyh
INFO[2021-06-20T17:09:25.982213909Z] worker lx3qvt6y2qea1ui3yqt5cmdyh was successfully registered  method="(*Dispatcher).register"
INFO[2021-06-20T17:09:26.016871468Z] Initializing Libnetwork Agent Listen-Addr=0.0.0.0 Local-addr=192.168.18.101 Adv-addr=192.168.18.101 Data-addr= Remote-addr-list=[] MTU=1500 
INFO[2021-06-20T17:09:26.017103220Z] New memberlist node - Node:cerberus will use memberlist nodeID:a6184d6d063c with config:&{NodeID:a6184d6d063c Hostname:cerberus BindAddr:0.0.0.0 AdvertiseAddr:192.168.18.101 BindPort:0 Keys:[[133 149 5 128 180 60 145 233 205 137 110 154 33 56 79 133] [253 204 38 153 220 248 91 233 124 159 129 200 195 177 148 11] [173 108 87 3 41 3 11 163 140 104 168 29 202 220 128 248]] PacketBufferSize:1400 reapEntryInterval:1800000000000 reapNetworkInterval:1825000000000 StatsPrintPeriod:5m0s HealthPrintPeriod:1m0s} 
INFO[2021-06-20T17:09:26.017175329Z] Daemon has completed initialization          
INFO[2021-06-20T17:09:26.017550480Z] Node a6184d6d063c/192.168.18.101, joined gossip cluster 
INFO[2021-06-20T17:09:26.017697613Z] initialized VXLAN UDP port to 4789           
INFO[2021-06-20T17:09:26.017748963Z] Node a6184d6d063c/192.168.18.101, added to nodes list 
INFO[2021-06-20T17:09:26.110602183Z] API listen on /var/run/docker.sock           
ERRO[2021-06-20T17:09:27.078914257Z] error reading the kernel parameter net.ipv4.vs.conn_reuse_mode  error="open /proc/sys/net/ipv4/vs/conn_reuse_mode: no such file or directory"
ERRO[2021-06-20T17:09:27.078978866Z] error reading the kernel parameter net.ipv4.vs.expire_nodest_conn  error="open /proc/sys/net/ipv4/vs/expire_nodest_conn: no such file or directory"
ERRO[2021-06-20T17:09:27.079016983Z] error reading the kernel parameter net.ipv4.vs.expire_quiescent_template  error="open /proc/sys/net/ipv4/vs/expire_quiescent_template: no such file or directory"

On restart the containers are “Exited” and will not start.