20 seconds later it comes back
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
15832b6ce7fa graphiteapp/graphite-statsd "/entrypoint" 12 seconds ago Up 10 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:2003-2004->2003-2004/tcp, :::2003-2004->2003-2004/tcp, 2013-2014/tcp, 8080/tcp, 0.0.0.0:2023-2024->2023-2024/tcp, :::2023-2024->2023-2024/tcp, 0.0.0.0:8126->8126/tcp, :::8126->8126/tcp, 8125/tcp, 0.0.0.0:8125->8125/udp, :::8125->8125/udp graphite
I have also tried deleting via Portainer but it keeps recreating. I want this app gone.
How did you create it? Docker itself does not recreate containers; Docker Swarm could. I am not sure about Portainer, but Portainer supports Docker Swarm. Based on your output I don't think it is Swarm, though.
I do remember seeing the same effect with docker-compose deployments that use restart: always. But with docker-compose, a simple docker-compose down would have removed everything without issues.
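For reference, this is the kind of deployment I mean - a minimal, hypothetical compose file (service name and image chosen to match the OP's container; any names here are just placeholders):

```yaml
# docker-compose.yml - minimal sketch of a restart: always deployment
version: "3"
services:
  graphite:
    image: graphiteapp/graphite-statsd
    restart: always
```

With such a file, `docker-compose up -d` creates the container and `docker-compose down` removes the container together with its restart policy, so nothing is left to bring it back.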
Are you sure the container was not created using docker-compose?!
I just tested docker run -d --name x --restart=always ...: a docker stop followed by docker rm deleted the container, and it did not restart. I could not reproduce the behavior you are seeing, even after retesting, and I can't remember ever seeing it with containers created by docker run. In theory it shouldn't matter which image is used - the behavior should be identical regardless of the image.
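For completeness, this is the test sequence I ran (the container name x is arbitrary, and I used nginx as a stand-in image since the exact image shouldn't matter):

```console
$ docker run -d --name x --restart=always nginx
$ docker stop x
$ docker rm x
$ docker ps -a --filter name=x
# no container listed - it does not come back after rm
```

The restart policy only kicks in while the container exists; once docker rm succeeds, there is nothing left for the daemon to restart.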
I tried with Docker Compose and could not reproduce it either. I can't imagine why Docker Compose would recreate a container after it has been deleted - it has no continuously running service that could create something after we removed it completely.
Just a thought: what if the container terminates with an error code, which triggers the scheduling of a new container? Maybe that is why my docker run --restart=always ... test didn't reproduce the problem, but the original image does? (I haven't tested the image from the OP.)
I do remember situations where containers started by Compose couldn't be removed because they were immediately rescheduled. Been there, failed with docker rm, and then used docker-compose down to get rid of the containers. Seems like I should have analysed it back then.
Actually, when I stopped the graphite container it terminated with an error code every time, but it didn't start again - I waited for more than a minute. I only tried that because I could not believe it would not restart after docker stop; if it did restart, docker rm could not have worked, but it worked. And if the container disappears completely from the machine, I don't see how it could be recreated without an external service. Then I tried it with an httpd image: I started it using docker-compose and stopped it using docker. It did not start again.
So it is obvious I don't know how the restart policies work - especially always.
Nah, your observations are just fine. Now that I come to think of it: I only saw that behavior on my Synology NAS in combination with docker-compose. But the Synology docker engine comes with modifications, and the behavior I remembered might have been just a side effect of those. Though, I just tested it on DSM7 with Syno's latest docker package: not reproducible there either.
You are right about the restart policy. The information is stored in the container's spec and as such disappears with the container itself. The difference between a container created by docker run and one created by docker-compose is merely the container labels set by docker compose - apart from that there is no difference.
And you are right about swarm as well: the service definition is decoupled from the service task, which itself is responsible for creating the container (if you think about it, it's the counterpart of a pod, but for exactly one container). Deleting the container will just trigger the scheduler to create a new service task.
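To illustrate the swarm case with a sketch (the service name web and the nginx image are arbitrary examples): removing the task's container only makes the scheduler spawn a replacement; the service definition itself has to be removed to make it stop.

```console
$ docker service create --name web nginx
$ docker rm -f $(docker ps -q --filter name=web)
# the scheduler notices the missing task and creates a new container
$ docker service rm web
# only this removes the service definition, so nothing gets recreated
```

This matches the OP's symptom exactly, which is why ruling swarm in or out is the first thing to check.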
I agree with @meyay
You must have intentionally or unintentionally installed something that recreates containers - a systemd service or something similar. You mentioned that you used Portainer. Since I don't use it I can't tell you how it could be the cause, but I can imagine it. If you can't find the root cause, I can install Portainer later and have a look, but I wouldn't do that now.
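A few things worth checking on the host to find the culprit - a sketch, assuming a systemd-based distro (the grep patterns and the graphite container name are taken from this thread, not an exhaustive list):

```console
$ systemctl list-units --type=service | grep -iE 'docker|portainer'
# any custom unit wrapping a docker run could recreate the container
$ docker inspect graphite --format '{{json .Config.Labels}}'
# compose and swarm both leave tell-tale labels showing who created it
$ crontab -l
# a cron job re-running docker run would have the same effect
```

If the labels show a com.docker.compose or com.docker.swarm prefix, that narrows down which tool keeps bringing the container back.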