
Docker images deleting themselves

Hi all,

I'm trying to set up a Kubernetes cluster (offline install of Kubernetes) on RHEL 7.3. My Docker images keep disappearing at random. Any help with this?

docker info
Containers: 21
 Running: 12
 Paused: 0
 Stopped: 9
Images: 9
Server Version: 1.12.6
Storage Driver: devicemapper
 Pool Name: docker-253:8-12648726-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 1.226 GB
 Data Space Total: 107.4 GB
 Data Space Available: 878.9 MB
 Metadata Space Used: 1.991 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 878.9 MB
 Thin Pool Minimum Free Space: 10.74 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.135-RHEL7 (2016-11-16)
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: bridge host null overlay
 Authorization: rhel-push-plugin
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Security Options: seccomp
Kernel Version: 3.10.0-514.26.1.el7.x86_64
Operating System: Red Hat Enterprise Linux
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 2
CPUs: 4
Total Memory: 7.64 GiB
Name: vstjenmast01
ID: LNLB:ZA3K:FVHH:NAR6:CBPI:CY46:J563:X5EV:YP2U:37UZ:DKEO:QLDU
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://registry.access.redhat.com/v1/
Insecure Registries:
 127.0.0.0/8
Registries: registry.access.redhat.com (secure), docker.io (secure)



These are the images I had earlier:

REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
quay.io/calico/node                        v2.6.12             401cc3e56a1a        3 months ago        281.2 MB
quay.io/calico/kube-controllers            v1.0.5              b647fdbe067d        3 months ago        52.44 MB
quay.io/calico/cni                         v1.11.8             5cf513c7d5ed        3 months ago        70.81 MB
k8s.gcr.io/kube-proxy-amd64                v1.10.3             4261d315109d        9 months ago        97.06 MB
k8s.gcr.io/kube-controller-manager-amd64   v1.10.3             40c8d10b2d11        9 months ago        147.9 MB
k8s.gcr.io/kube-apiserver-amd64            v1.10.3             e03746fe22c3        9 months ago        225.1 MB
k8s.gcr.io/kube-scheduler-amd64            v1.10.3             353b8f1d102e        9 months ago        50.41 MB
k8s.gcr.io/etcd-amd64                      3.1.12              52920ad46f5b        11 months ago       193.2 MB
registry                                   latest              d1fd7d86a825        13 months ago       33.26 MB
k8s.gcr.io/pause-amd64                     3.1                 da86e6ba6ca1        14 months ago       742.5 kB
quay.io/coreos/etcd                        v3.1.1              47bb9dd99916        19 months ago       34.56 MB

At the moment I see:

docker images
REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
quay.io/calico/node                        v2.6.12             401cc3e56a1a        3 months ago        281.2 MB
quay.io/calico/kube-controllers            v1.0.5              b647fdbe067d        3 months ago        52.44 MB
quay.io/calico/cni                         v1.11.8             5cf513c7d5ed        3 months ago        70.81 MB
k8s.gcr.io/kube-proxy-amd64                v1.10.3             4261d315109d        9 months ago        97.06 MB
k8s.gcr.io/kube-controller-manager-amd64   v1.10.3             40c8d10b2d11        9 months ago        147.9 MB
k8s.gcr.io/kube-scheduler-amd64            v1.10.3             353b8f1d102e        9 months ago        50.41 MB
k8s.gcr.io/etcd-amd64                      3.1.12              52920ad46f5b        11 months ago       193.2 MB
registry                                   latest              d1fd7d86a825        13 months ago       33.26 MB
k8s.gcr.io/pause-amd64                     3.1                 da86e6ba6ca1        14 months ago       742.5 kB

k8s.gcr.io/kube-apiserver-amd64 has disappeared. I have tried redeploying, but I hit the same issue...



journalctl -u docker

Feb 28 13:50:18 masterhost dockerd-current[7365]: 2019-02-28T13:50:18Z masterhost confd[89]: ERROR client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 10.96.232.136:6666: getsockopt: connection refused
[same confd error repeated 3 more times]
Feb 28 13:50:19 masterhost dockerd-current[7365]: E0228 13:50:19.123528       1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://11.159.183.96:6443/api/v1/namespaces/kube-system/end
Feb 28 13:50:19 masterhost dockerd-current[7365]: 2019-02-28 13:50:19.146 [WARNING][88] syncer.go 353: Net error from etcd error=dial tcp 10.96.232.136:6666: getsockopt: connection refused
Feb 28 13:50:19 masterhost dockerd-current[7365]: 2019-02-28 13:50:19.147 [WARNING][88] syncer.go 284: May be tight looping, throttling retries.
Feb 28 13:50:19 masterhost dockerd-current[7365]: 2019-02-28 13:50:19.276 [WARNING][88] syncer.go 372: Failed to poll etcd server cluster ID error=client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 10.96.232.136:6
Feb 28 13:50:19 masterhost dockerd-current[7365]:
Feb 28 13:50:19 masterhost dockerd-current[7365]: E0228 13:50:19.388335       1 reflector.go:205] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:594: Failed to list *v1.Pod: Get https://11.159.183.96:6443/api/v1/pods?fieldSelector=
Feb 28 13:50:19 masterhost dockerd-current[7365]: E0228 13:50:19.389070       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Node: Get https://11.159.183.96:6443/api/v1/nodes?l
Feb 28 13:50:19 masterhost dockerd-current[7365]: E0228 13:50:19.390082       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolume: Get https://11.159.183.96:6443/ap
Feb 28 13:50:19 masterhost dockerd-current[7365]: E0228 13:50:19.391249       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.ReplicaSet: Get https://11.159.183.96:6443/api
Feb 28 13:50:19 masterhost dockerd-current[7365]: E0228 13:50:19.392294       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: Get https://11.159.183.96:6443/api/v1/serv
Feb 28 13:50:19 masterhost dockerd-current[7365]: E0228 13:50:19.393350       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.ReplicationController: Get https://11.159.183.96:64
Feb 28 13:50:19 masterhost dockerd-current[7365]: E0228 13:50:19.394442       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolumeClaim: Get https://11.159.183.96:64
Feb 28 13:50:19 masterhost dockerd-current[7365]: E0228 13:50:19.395526       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: Get https://11.159.183.96:6443/apis/s
Feb 28 13:50:19 masterhost dockerd-current[7365]: E0228 13:50:19.396597       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.PodDisruptionBudget: Get https://11.159.183.96
Feb 28 13:50:19 masterhost dockerd-current[7365]: E0228 13:50:19.397606       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.StatefulSet: Get https://11.159.183.96:6443/ap
Feb 28 13:50:19 masterhost dockerd-current[7365]: E0228 13:50:19.676174       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Service: Get https://11.15
Feb 28 13:50:19 masterhost dockerd-current[7365]: E0228 13:50:19.677083       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Endpoints: Get https://11.
Feb 28 13:50:20 masterhost dockerd-current[7365]: 2019-02-28 13:50:20.147 [WARNING][88] syncer.go 353: Net error from etcd error=dial tcp 10.96.232.136:6666: getsockopt: connection refused
Feb 28 13:50:20 masterhost dockerd-current[7365]: 2019-02-28 13:50:20.147 [WARNING][88] syncer.go 284: May be tight looping, throttling retries.
Feb 28 13:50:20 masterhost dockerd-current[7365]: E0228 13:50:20.388887       1 reflector.go:205] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:594: Failed to list *v1.Pod: Get https://11.159.183.96:6443/api/v1/pods?fieldSelector=
Feb 28 13:50:20 masterhost dockerd-current[7365]: E0228 13:50:20.389727       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Node: Get https://11.159.183.96:6443/api/v1/nodes?l
Feb 28 13:50:20 masterhost dockerd-current[7365]: E0228 13:50:20.390765       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolume: Get https://11.159.183.96:6443/ap
Feb 28 13:50:20 masterhost dockerd-current[7365]: E0228 13:50:20.391803       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.ReplicaSet: Get https://11.159.183.96:6443/api
Feb 28 13:50:20 masterhost dockerd-current[7365]: E0228 13:50:20.392855       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: Get https://11.159.183.96:6443/api/v1/serv
Feb 28 13:50:20 masterhost dockerd-current[7365]: E0228 13:50:20.393913       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.ReplicationController: Get https://11.159.183.96:64
Feb 28 13:50:20 masterhost dockerd-current[7365]: E0228 13:50:20.395067       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolumeClaim: Get https://11.159.183.96:64
Feb 28 13:50:20 masterhost dockerd-current[7365]: E0228 13:50:20.396132       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: Get https://11.159.183.96:6443/apis/s
Feb 28 13:50:20 masterhost dockerd-current[7365]: E0228 13:50:20.397212       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.PodDisruptionBudget: Get https://11.159.183.96
Feb 28 13:50:20 masterhost dockerd-current[7365]: E0228 13:50:20.398272       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.StatefulSet: Get https://11.159.183.96:6443/ap
Feb 28 13:50:20 masterhost dockerd-current[7365]: 2019-02-28 13:50:20.588 [INFO][88] health.go 150: Overall health summary=&health.HealthReport{Live:true, Ready:true}
Feb 28 13:50:20 masterhost dockerd-current[7365]: E0228 13:50:20.676705       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Service: Get https://11.15
Feb 28 13:50:20 masterhost dockerd-current[7365]: E0228 13:50:20.677651       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Endpoints: Get https://11.
Feb 28 13:50:20 masterhost dockerd-current[7365]: 2019-02-28T13:50:20Z masterhost confd[89]: ERROR client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 10.96.232.136:6666: getsockopt: connection refused
[same confd error repeated 10 more times]

It looks like you are running out of disk space: your docker info shows only 878.9 MB of data space still available to the devicemapper thin pool. The kubelet automatically garbage-collects unused images once disk usage passes its high threshold (85% by default):
https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/#user-configuration
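To confirm, check how full the filesystem backing /var/lib/docker actually is:

df -h /var/lib/docker

If you need the kubelet to prune less aggressively, you can raise its image garbage-collection thresholds. A minimal sketch, assuming a kubeadm-style install where extra kubelet flags are passed via a systemd drop-in (the file path below is typical for kubeadm 1.10; adjust it to your setup):

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt)
# Defaults are --image-gc-high-threshold=85 and --image-gc-low-threshold=80;
# raising them delays image garbage collection until the disk is nearly full.
Environment="KUBELET_EXTRA_ARGS=--image-gc-high-threshold=95 --image-gc-low-threshold=90"

Then reload and restart the kubelet:

systemctl daemon-reload
systemctl restart kubelet

That said, raising the thresholds only buys time. The real fix is to free space on that filesystem, and ideally to move devicemapper off loopback storage, as the WARNING in your docker info output suggests.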
