Docker registry in "Restarting (1)" status forever

I have set up a Docker (v1.10) private registry using the registry:2 image on OEL6 with the command below.

docker run -d -p 5000:5000 --restart=always --name bkdevregistry -v /var/lib/docker/certs/:/certs -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/xx.yy.com.crt -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/xx.yy.com.key registry:2

I followed the official guide to create the certificates and set up the registry.
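For reference, the certificate pair was generated roughly along the lines of the official guide, something like the following (file names and paths are taken from the run command above; the exact flags and subject may have differed):

# self-signed certificate as in the registry docs; the CN must match the registry hostname
mkdir -p /var/lib/docker/certs
openssl req -newkey rsa:4096 -nodes -sha256 \
  -keyout /var/lib/docker/certs/xx.yy.com.key \
  -x509 -days 365 \
  -out /var/lib/docker/certs/xx.yy.com.crt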

The system was rebooted for maintenance, and since the reboot the registry container is not working at all. It immediately goes into the Restarting (1) status and never leaves it.

[root@slcn09vmf0022 ~]# docker ps
CONTAINER ID   IMAGE        COMMAND                  CREATED         STATUS                         PORTS                    NAMES
44ad9d09d210   registry:2   "/bin/registry /etc/d"   9 minutes ago   Restarting (1) 3 minutes ago   0.0.0.0:5000->5000/tcp   blkdevreg

Any suggestion to bring it to normal will be appreciated.

Logs:
time="2016-05-18T15:29:34Z" level=fatal msg="open : no such file or directory"
time="2016-05-18T17:18:47Z" level=warning msg="No HTTP secret provided - generated random secret. This may cause problems with uploads if multiple registries are behind a load-balancer. To provide a shared secret, fill in http.secret in the configuration file or set the REGISTRY_HTTP_SECRET environment variable." go.version=go1.5.3 instance.id=7034ae26-a1e8-4bc4-828a-be38d17a7ebb version=v2.3.1
time="2016-05-18T17:18:47Z" level=info msg="redis not configured" go.version=go1.5.3 instance.id=7034ae26-a1e8-4bc4-828a-be38d17a7ebb version=v2.3.1
time="2016-05-18T17:18:47Z" level=info msg="Starting upload purge in 51m0s" go.version=go1.5.3 instance.id=7034ae26-a1e8-4bc4-828a-be38d17a7ebb version=v2.3.1
time="2016-05-18T17:18:47Z" level=info msg="using inmemory blob descriptor cache" go.version=go1.5.3 instance.id=7034ae26-a1e8-4bc4-828a-be38d17a7ebb version=v2.3.1
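For context, output like the above can be gathered with something along these lines (the container name here is assumed from the run command; note docker ps above shows the name blkdevreg, so it may differ):

# show the registry container's log output
docker logs bkdevregistry
# show the exit code and how many times Docker has restarted it
docker inspect -f '{{.State.ExitCode}} {{.RestartCount}}' bkdevregistry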

Hi, did you find a fix for this? I'm facing the same issue.

Did any of you find a fix for this problem? I am facing the same issue.

First check the logs to see why the container failed. Then you can probably rebuild the image, with or without a fix, and run it again. Afterwards, execute the command below:

docker system prune
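A rough sketch of that sequence, assuming the container name from the original run command (note that docker system prune requires Docker 1.13 or newer, while the question mentions 1.10):

# find out why the container keeps exiting
docker logs bkdevregistry

# remove the broken container, fix the cause, then start it again
# (the original command set REGISTRY_HTTP_TLS_CERTIFICATE twice; the key file is normally passed via REGISTRY_HTTP_TLS_KEY)
docker rm -f bkdevregistry
docker run -d -p 5000:5000 --restart=always --name bkdevregistry \
  -v /var/lib/docker/certs/:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/xx.yy.com.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/xx.yy.com.key \
  registry:2

# clean up unused containers, images and networks
docker system prune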

Please try running 'docker stop [container_ID]' and then 'docker rm [container_ID]'.
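For example, using the container ID from the docker ps output above:

docker stop 44ad9d09d210
docker rm 44ad9d09d210
# or force-remove in a single step
docker rm -f 44ad9d09d210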