Task container stopped but seen as still running in docker cloud


I have an app I want to put on Docker Cloud that needs some task containers for things like TLS certificate renewal or backups. When I try my TLS container (using the Let’s Encrypt client certbot), the container does its job and then stops, but it is still shown as running in the Docker Cloud console, and its service is marked as Running too. Docker Cloud only seems to notice that my container/service has stopped if I enable ‘autodestroy’. Is there any reason for that?

I want to have a cron container for scheduling those task containers, but they need to be properly stopped after their work so that the cron container can make API calls to start them again when needed.
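For context, a cron container like this would typically drive the task services through the Docker Cloud REST API. Here is a minimal sketch of such a “start” call; the endpoint shape, the service UUID, and the API key are placeholders/assumptions, not something confirmed in this thread:

```shell
#!/bin/sh
# Minimal sketch of a cron-driven "start this task service" call against the
# Docker Cloud REST API. API_BASE, SERVICE_UUID, and the credentials are
# assumptions -- substitute your own values.
API_BASE="https://cloud.docker.com/api/app/v1"

start_url() {
  # Build the URL for the "start" action of a service, given its UUID.
  echo "$API_BASE/service/$1/start/"
}

# A cron entry would then do something like (not executed here):
#   curl -s -X POST -u "username:$DOCKERCLOUD_APIKEY" "$(start_url "$SERVICE_UUID")"
start_url "example-uuid"
```

The start call only succeeds when Docker Cloud believes the service is stopped, which is exactly why the stale “Running” state described in this thread breaks the schedule.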

Thanks for any help :slight_smile:

Same problem here, possibly this issue only started after upgrading the node to Docker 1.11.2-cs5.

I’m starting a container every 10 minutes using a tutum/cron fork. This always worked fine until I upgraded to 1.11 via Docker Cloud.

Now, about once a day, the cron script cannot restart the service via the API: “ERROR: Service cannot be started in current state Running”. Indeed, Docker Cloud shows the service as running, but docker ps -a reveals its status is Exited (0). This lasts for about an hour, and then Docker Cloud finally notices that the container stopped.
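One way to defend against this mismatch is to trust the engine on the node rather than the Cloud API when deciding whether a restart is needed. A small sketch, assuming a container named my-task and leaving the actual API call as a comment:

```shell
#!/bin/sh
# Sketch: consult the local Docker engine's view of the container instead of
# the Cloud API before restarting. The container name "my-task" is an
# assumption; the Cloud "start" call itself is left as a comment.

local_state() {
  # The engine's own view of the container: "running", "exited", etc.
  docker inspect -f '{{.State.Status}}' "$1" 2>/dev/null || echo unknown
}

should_start() {
  # Pure decision: only start when the engine says the container exited.
  [ "$1" = "exited" ]
}

# From the cron job (assumes docker is reachable on the node):
#   if should_start "$(local_state my-task)"; then
#     ... issue the Cloud "start" API call here ...
#   fi
```

This sidesteps the stale Cloud state entirely, at the cost of the cron container needing access to the node's Docker socket.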

This is happening on a daily basis now. Never happened before I upgraded to 1.11. Any thoughts?

On my side I didn’t change anything, but it seems to work now. Maybe they made a fix without telling us.

Same for me: the container is not running but is listed as running. The issue is not only in the Docker Cloud UI but also in the Docker Cloud API, where the state is “running”.

This issue gets fixed daily and reopened daily.
I have a Jenkins job going 24/7, and the issue normally happens for ~3 hours each day, at random times. I think it’s a synchronisation problem between Docker Cloud and the Docker machine on AWS; rebooting the AWS machine sometimes solves it.

This issue is critical for me because it stops the entire pipeline.

I have the exact same problem. I have a container running that should be starting others via the API on a cron schedule, but the Cloud API reports them as running, so errors occur and tasks are not executed on schedule. I will play with the autodestroy option. That means slower start-up, which is annoying, but hopefully it will make clear that the container has stopped.

It still happens here for ~20 minutes, every single day. Sooo annoying.

AutoDestroy won’t help, because then your service will be gone, or am I wrong?

I used to redeploy the service, instead of (re-)starting it from the cron, which helps, but has its own disadvantages. Basically, you don’t want to redeploy a service every 5 minutes. The service will become completely unavailable when Docker Hub is down.

If only someone from Docker would look into this issue, or at least comment on it…

You’re correct, autodestroy does not help. I will adjust the cron job to first issue a stop API call and then a start call. This will work in my case, but it could be annoying if you have long-running tasks that are interrupted this way (although I suppose they should be on a longer schedule then).

So I worked around this problem by creating cron tasks that first stop the target service, then sleep for 10 s, and then start the service. It feels kludgy and highly annoying, but it works. You can see my work over on GitHub, particularly in the run.sh script.
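The stop → sleep → start cycle described above can be sketched like this; the endpoint paths, service UUID, and credentials are placeholders, not the poster’s actual run.sh:

```shell
#!/bin/sh
# Sketch of the stop -> sleep -> start workaround. API_BASE, the UUID, and
# the API key are assumptions -- adjust to your own stack.
API_BASE="https://cloud.docker.com/api/app/v1"

action_url() {
  # URL for a service action, e.g. "action_url <uuid> stop".
  echo "$API_BASE/service/$1/$2/"
}

cycle_service() {
  uuid="$1"
  # Stop the (possibly already-exited) service so Cloud's state catches up...
  curl -s -X POST -u "username:$DOCKERCLOUD_APIKEY" "$(action_url "$uuid" stop)" > /dev/null
  # ...give the state machine a moment...
  sleep 10
  # ...then start it again.
  curl -s -X POST -u "username:$DOCKERCLOUD_APIKEY" "$(action_url "$uuid" start)"
}

# Invoked from the cron task as: cycle_service "$SERVICE_UUID"
```

The unconditional stop is what makes this robust: it forces Cloud out of the stale “Running” state before the start call, at the cost of killing any task that really is still running.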


I’m running into this problem, so it looks like it’s still an issue. Task containers are exiting with 0 and still seen as “running” in Cloud. Any idea what’s going on here, Docker Staff?

Is this a Docker issue or a Docker Cloud issue? According to docker stats the container has state “Running”, but neither Docker nor the Cloud UI can kill or stop it (docker logs and SSH don’t work either). The only thing that helps is restarting the dockercloud agent directly on the node and redeploying the affected applications from the Cloud UI. It happens randomly with different jobs. Is there any resolution in sight, or has Docker Cloud development ultimately been dropped?

By using docker ps -a I found my container was running with a different ID; I don’t know why. I then stopped the container by that ID, and everything was fine.
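If you hit the same situation, here is a sketch of that fix run directly on the node; the name filter my-task is an assumption:

```shell
#!/bin/sh
# Sketch: find the actual container ID by name (it can differ from the one
# Cloud displays) and stop that container directly on the node. The name
# "my-task" is an assumption.

first_line() {
  # First matching container ID from stdin.
  head -n 1
}

cid=$(docker ps -a --filter "name=my-task" --format '{{.ID}}' 2>/dev/null | first_line)
if [ -n "$cid" ]; then
  docker stop "$cid"
fi
```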




I have exactly the inverse problem (but not using Docker Cloud).

When my server reboots, docker ps -a as well as Portainer show all my Docker containers as stopped, but they are all running fine.

If I use “systemctl restart docker”, all goes fine.

Installed: 5:19.03.5~3-0~ubuntu-bionic
Candidate: 5:19.03.5~3-0~ubuntu-bionic

Ubuntu 18.04