Unable to delete Graphite - docker keeps recreating the image and container

I installed Graphite [graphiteapp/graphite-statsd] but I can’t delete it at all. It keeps recreating itself when I delete it.

Steps carried out:

$ docker stop graphite
$ docker rm graphite
$ docker rmi 740025f5293b
Untagged: graphiteapp/graphite-statsd:latest
Untagged: graphiteapp/graphite-statsd@sha256:0e64da97269857ad32367f1c18f85f4fc9d3243e18b71e6c07522fa7d0f6e739
Deleted: sha256:740025f5293be15b2ae1487313c4e2f217e25d3666bce12b89cfa29b00e4ca88
Deleted: sha256:c7472ef4d262e8ef642ebff27170c32b630dcaf21436db7c4296e73c07e6e1a7
Deleted: sha256:4245935fec8b063c9bd21c7158038f26342fe34d7b0d7e82d3af6c215b135aaa
Deleted: sha256:d258f926924ab608c8bfc9eaca510d5143e0b2a63d979f93e7488c558349d818
Deleted: sha256:a1330ef983b861f754707683d16b05fe1635f5ea556debaa7a45d59af469855f

20 seconds later it comes back:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
15832b6ce7fa graphiteapp/graphite-statsd "/entrypoint" 12 seconds ago Up 10 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:2003-2004->2003-2004/tcp, :::2003-2004->2003-2004/tcp, 2013-2014/tcp, 8080/tcp, 0.0.0.0:2023-2024->2023-2024/tcp, :::2023-2024->2023-2024/tcp, 0.0.0.0:8126->8126/tcp, :::8126->8126/tcp, 8125/tcp, 0.0.0.0:8125->8125/udp, :::8125->8125/udp graphite

I have also tried deleting it via Portainer, but it keeps getting recreated. I want this app gone.

How did you create it? Docker itself would not recreate containers. Docker swarm could. I am not sure about Portainer, but Portainer supports Docker swarm. Based on your output I don’t think it is Docker swarm.
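If you want to double-check, this should report the swarm state of the engine (it prints inactive on a standalone host):

$ docker info --format '{{.Swarm.LocalNodeState}}'
inactive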

I am very new to Docker.

I don’t remember installing Docker Swarm. When I run a swarm command I get the output: "This host is not part of a swarm".

Based on the run command in the instructions, this should disable the restart policy, stop the container and remove it:

docker container update --restart=no graphite
docker container stop graphite
docker container rm graphite

I do remember that the same effect can be seen with docker-compose deployments that use restart: always. But with docker-compose, a simple docker-compose down would have removed everything without issues.
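For reference, a minimal compose file with that policy might look like this (the file contents are just an illustration):

version: "3"
services:
  graphite:
    image: graphiteapp/graphite-statsd
    restart: always

After docker-compose up -d, a single docker-compose down in the same directory removes the container together with its restart policy.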

Are you sure the container was not created using docker-compose?!

I just tested docker run -d --name x --restart=always ... followed by a stop + rm, and the container was deleted: I could not see the behavior you observe. The container is deleted and does not restart. I couldn’t remember this behavior with containers created using docker run - I retested it, and I still can’t reproduce what you are seeing. In theory it shouldn’t matter which image is used - the behavior should be identical regardless.
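The test looked roughly like this (nginx is just an arbitrary stand-in image):

$ docker run -d --name x --restart=always nginx
$ docker stop x
$ docker rm x
$ docker ps -a    # the container is gone and stays gone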

I tried it with Docker Compose and could not reproduce the error. I can’t imagine why docker compose would recreate a container after deleting it. It doesn’t have a continuously running service that could create something after we removed it completely.

Just a thought: what if the container terminates with an error code, which triggers a rescheduling of a new container? Maybe this is the reason why my docker run --restart=always.... didn’t reproduce the problem, but the original image does? (I haven’t tested the image from the OP)
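A quick way to test that hypothesis would be a container that always exits with an error (image and name are just placeholders):

$ docker run -d --name failtest --restart=always alpine sh -c 'sleep 2; exit 1'
$ docker ps -a             # with restart=always it comes back after every failed exit
$ docker rm -f failtest    # but once force-removed, nothing recreates it
$ docker ps -a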

I do remember a situation where containers started by compose couldn’t be removed because they were immediately rescheduled. Been there, failed with docker rm and then used docker-compose down to get rid of the containers. Seems like I should have analysed it back then.

Actually, when I stopped the graphite container it terminated with an error code every time, but it didn’t start again. I waited for more than a minute. I only tried it because I could not believe that it does not restart after docker stop. If that were the case, docker rm could not have worked - but it did. And if the container disappears completely from the machine, I don’t see how it could be recreated without an external service. Then I tried it with an httpd image: I started it using docker-compose and stopped it using docker. It did not start again.

So it is obvious I don’t know how the restart policy works. Especially always.

Nah, your observations are just fine. Now that I come to think of it: I only saw that behavior on my Synology NAS in combination with docker-compose. But the Synology docker engine comes with modifications, and the behavior I remembered might have just been a side effect of that. Though, I just tested it on DSM7 and Syno’s latest docker package: not reproducible there either.

You are right about the restart policy. The information is stored in the container’s specs and as such should disappear with the container itself. The difference between a container created by docker run and one created by docker-compose is merely the container labels set by docker compose - apart from that there is no difference.
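Both claims can be checked with docker inspect (container name taken from this thread):

$ docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' graphite    # prints e.g. always or no
$ docker inspect -f '{{json .Config.Labels}}' graphite               # compose-created containers carry com.docker.compose.* labels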

And you are right about swarm as well: the service definition is decoupled from the service task - which itself is responsible for creating the container (if you think about it, it’s the counterpart of a pod, but for exactly one container). Deleting the container will just trigger the scheduler to create a new service task.
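A quick sketch of that behavior on a swarm-enabled host (image and service name chosen arbitrarily):

$ docker service create --name web --replicas 1 nginx
$ docker rm -f $(docker ps -q -f name=web)    # the scheduler immediately creates a replacement task
$ docker service rm web                       # only removing the service makes it stay gone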

No, this didn’t work. The container restarted itself.

Commands used:
docker container update --restart=no graphite
docker container stop graphite
docker container rm graphite
docker images -a
docker rmi 740025f5293b
docker ps -a

docker ps -a
CONTAINER ID   IMAGE                                          COMMAND                  CREATED         STATUS         PORTS                                                                                                                                                                                                                                                                                      NAMES
58386c8cbf9e   graphiteapp/graphite-statsd                    "/entrypoint"            3 seconds ago   Up 2 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:2003-2004->2003-2004/tcp, :::2003-2004->2003-2004/tcp, 2013-2014/tcp, 8080/tcp, 0.0.0.0:2023-2024->2023-2024/tcp, :::2023-2024->2023-2024/tcp, 0.0.0.0:8126->8126/tcp, :::8126->8126/tcp, 8125/tcp, 0.0.0.0:8125->8125/udp, :::8125->8125/udp   graphite
3449f51a7582   linuxserver/unifi-controller:latest            "/init"                  2 days ago      Up 2 days      8843/tcp, 0.0.0.0:3478->3478/udp, :::3478->3478/udp, 0.0.0.0:10001->10001/udp, :::10001->10001/udp, 0.0.0.0:8443->8443/tcp, :::8443->8443/tcp, 8880/tcp, 0.0.0.0:6080->8080/tcp, :::6080->8080/tcp                                                                                         Unifi-Controller

I’ve never used Docker Compose (still learning Docker - so don’t know how to use it yet). I used the CLI commands.

The last thing I can think of is that you added a systemd service to start the container.
Other than that: I am officially out of ideas!
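If that is the case, something along these lines should turn it up (the grep pattern is just a guess based on the container name):

$ systemctl list-units --type=service | grep -i graphite
$ grep -ri graphite /etc/systemd/system/ 2>/dev/null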

I agree with @meyay.
You must have intentionally or unintentionally installed something that recreates containers. It is either a systemd service or something similar. You mentioned that you used Portainer. Since I don’t use it, I can’t tell you how it could be the cause, but I can imagine it. If you can’t find the root cause, I can install Portainer later to check, but I wouldn’t do that now.
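One way to catch the culprit in the act (assuming the container keeps the name graphite): watch the engine’s event stream while you remove the container - if a new create event shows up, something on the host is still talking to the Docker API:

$ docker events --filter 'container=graphite'

and in a second terminal:

$ docker rm -f graphite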

I can delete other containers, just not this one. I will rebuild the OS and reinstall all the other containers.