Image showing in ps command but not showing in images command!

I am having a problem with a Docker image. I have an image running in a container, which shows up when I run the “docker ps” command as follows:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
492486f69860 a89adc4d59fb "bash -c 'export LAN…" XX years ago Up 54 minutes some_name

As shown above, the value of the IMAGE field is the image ID and not the image name! However, when I run the “docker images” command, no image with that ID shows up! When I try to save the image into a tar file to move it to another machine, using the image ID shown by the ps command, I get this error:

Error response from daemon: No such image: a89adc4d59fb

Can you please assist on this?

Please share the output of docker ps and docker image ls. Please wrap the outputs in “Preformatted text” blocks (gear icon → </> icon) for better readability.

There is one way I could reproduce this issue.

  • Start the container from an image using the image name
  • Stop the container
  • Remove the image using the --force option
  • Start the container again

An image is an object made of metadata and one or more filesystem layers, usually managed by the overlay2 storage driver.
There is a JSON file, /var/lib/docker/image/overlay2/repositories.json, which contains the tags and the image IDs.

A container is also an object, made of metadata, the filesystem layers of the image, and an additional writable layer for the container.

When you delete the image forcibly as I wrote above, the filesystem layers will not be deleted, since a container still uses them. Some image layer metadata will still be there too, but the final image definition will be deleted and the reference will be removed from the JSON file, so the image will not appear in the output of docker image ls. You could actually add the tag back in the JSON file and restart Docker, but then your container would stop unless live restore is enabled. I have a GitHub repo in which I wrote about this JSON file.

There is an easier way, though. If you know the name of the image that the container used, and you are lucky enough that the image was not updated in the Docker registry, you can pull the image again so docker ps will know about it again. Or, if the original image was updated but you still know its digest, you can refer to that and download that specific version:

https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier


Hello meyay,

The following is the output of the “docker ps” command:

$ docker ps 
CONTAINER ID   IMAGE                COMMAND                  CREATED       STATUS        PORTS   NAMES
492486f69860   a89adc4d59fb         "bash -c 'export LAN…"   2 years ago   Up 22 hours   80      app1
29261b6b2b97   sitewebapp_web_app   "bash -c ' python3 /…"   2 years ago   Up 22 hours   9000    app2
b9293a8052de   a89adc4d59fb         "bash -c 'export LAN…"   2 years ago   Up 22 hours   9000    app3

And the following is the output of the “docker image ls” command:

$ docker image ls
REPOSITORY                TAG                      IMAGE ID       CREATED         SIZE
app_app                   latest                   e4d133ae9137   14 months ago   2.04GB
tensorflow/tensorflow     1.15.4-gpu-py3           a1e8e97ee677   2 years ago     3.58GB
app2                      latest                   99ee3fbe0eb7   2 years ago     519MB
nvidia/cuda               11.0-base                2ec708416bb8   2 years ago     122MB
ubuntu                    18.04                    c3c304cb4f22   2 years ago     64.2MB
pytorch/pytorch           latest                   37b81722dadc   2 years ago     4.16GB
tensorflow/tensorflow     latest-gpu-jupyter       7d8da1368867   2 years ago     4.23GB
tensorflow/tensorflow     latest-gpu-py3-jupyter   ce8f7398433c   2 years ago     4.26GB
hello-world               latest                   bf756fb1ae65   2 years ago     13.3kB
continuumio/anaconda      latest                   d343e59299f1   2 years ago     2.61GB
tensorflow/tensorflow     latest-gpu-py3           a7a1861d2150   3 years ago     3.51GB

As can be seen, the image “a89adc4d59fb”, listed twice in the ps output, does not show up in the “docker image ls” output.

Thank you very much rimelek, very informative answer. However, after investigating the JSON file, I could not see the image ID of the images showing up in the ps command! By “add the tag back”, do you mean that the image ID will be there but the tag is missing, or that the complete entry is missing?

The reason must be what @rimelek already wrote!

Also: make sure to regularly update (remove/re-create) your containers to use new versions of your images to get all vulnerability fixes.

I meant that the complete entry is missing. If you check the GitHub link I shared, you will see that the ID is the value, not the key, in the JSON, so you can’t have an ID without having the tag in the JSON.

{
  "Repositories": {
    "localhost/buildtest": {
      "localhost/buildtest:v5": "sha256:18391a6e324a1b804a02d7c07b303b68925ed6971bc955e64f4acd17f67d2b00"
    }
  }
}

The first level is the repository, the second is the full reference with the version tag.
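As a sketch of how that lookup works, you can extract the image ID from a local copy of the snippet above (the file name and location here are just for illustration; the real file lives at /var/lib/docker/image/overlay2/repositories.json and should only be touched while the daemon is stopped):

```shell
# Hypothetical local copy of the repositories.json snippet above
cat > repositories.json <<'EOF'
{
  "Repositories": {
    "localhost/buildtest": {
      "localhost/buildtest:v5": "sha256:18391a6e324a1b804a02d7c07b303b68925ed6971bc955e64f4acd17f67d2b00"
    }
  }
}
EOF

# The image ID is the value under repository -> full reference;
# a quick-and-dirty grep is enough to pull it out:
grep -o 'sha256:[0-9a-f]*' repositories.json
```

If the entry for your image is missing entirely, as in your case, there is simply nothing for this lookup to find, which is why docker image ls shows nothing.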

Have you tried the “easier way” I mentioned? Pulling the image again won’t harm your currently running container even if the image was updated in the registry. The tag will just point to a different ID, not the one you see in the output of docker ps.


Hello rimelek,

Unfortunately, I cannot try the easier way, as the image was built and not pulled from a registry. It was built on another machine and migrated to this machine. I am not the one who built it, which complicates the problem. Going back to your description: if the image layers were deleted, how would the container run again after the system is rebooted? The container runs at boot time, and after I rebooted the machine the container ran again.

Nothing that the container requires to run and use the filesystem will be deleted. Only some metadata and the references needed to recognize the image itself.

Think of it as Git (not the same, but it is similar). It helps if you know Git and Docker because they have similar subcommands like commit, push, pull.

  • When you pull an image from the registry, you get an image tag which basically points to a filesystem layer (not directly). In addition to that, you will have some metadata, but that is not important now. This is similar to how Git uses a tag or branch name to point to a commit hash: Git will not garbage collect the commits (and their ancestors) as long as you have a reference to them.
  • When you run a container, you will use the filesystem layers of the image, but those layers are not writable from the container.
  • Every layer has a folder that you can see from the host, and could even write to, if you wanted to make the image unusable.
  • Each layer contains different files, and these layers together give you a base filesystem which you can see as the single root filesystem of a lightweight Linux distribution.
  • Your container will have its own writable layer, and every time you want to change a file on the filesystem of the image, Docker will copy it to the layer of the container (just another folder on the host) and change that copy. You don’t see this happening, because every layer on top of another can hide files on the lower layers.
  • When you delete a file, Docker (actually not Docker but the overlay2 storage driver) will save a special “whiteout” file which means that the file with the same name on a lower layer is deleted.
  • What you see from the container is multiple folders mounted into the same destination folder.
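The copy-up and whiteout behaviour from the points above can be illustrated with a toy simulation. Plain directories stand in for a lower (image) and upper (container) layer, and `lookup` is a made-up helper, not anything Docker provides; only the `.wh.` whiteout prefix matches the real overlay2 convention. This is not real overlayfs, just the resolution rule it implements:

```shell
# Two fake layers: "lower" plays the image, "upper" plays the container
mkdir -p lower upper
echo "from image"     > lower/app.conf
echo "from image"     > lower/doomed.txt
echo "from container" > upper/app.conf   # copy-up: container's edited copy
touch upper/.wh.doomed.txt               # whiteout: file deleted in container

# Resolve a file the way the union mount does: top layer wins,
# a whiteout hides the lower-layer file entirely
lookup() {
  name="$1"
  if [ -e "upper/.wh.$name" ]; then echo "(deleted)"; return; fi
  if [ -e "upper/$name" ]; then cat "upper/$name"; return; fi
  if [ -e "lower/$name" ]; then cat "lower/$name"; return; fi
  echo "(not found)"
}

lookup app.conf     # the container's copy hides the image's copy
lookup doomed.txt   # the whiteout hides the lower-layer file
```

Note that lower/app.conf and lower/doomed.txt are untouched the whole time, which is why deleting the image's bookkeeping does not break a container that still mounts those folders.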

So if you delete the metadata of an image, you still have the filesystem, i.e. the folders that the container uses and knows about. If you removed the container, you would have no way to tell which layers you need, but the running container already knows that.

What I referred to as “image layer metadata” is the information that was saved when you built the image. Since building an image from multiple instructions in the Dockerfile means running an intermediate container and committing it after each step, each instruction will create a file like that. When you run docker image inspect you basically read the latest file.

The container also has a metadata file:

/var/lib/docker/containers/<containerid>/config.v2.json

You can also find the container ID as the name of another folder, which contains a “parent” file pointing to the parent layer:

/var/lib/docker/image/overlay2/layerdb/mounts/<containerid>/parent

If you search for the value in that file, you will find another folder with the same name, but not under “mounts”, because this ID is now a layer of the image:

/var/lib/docker/image/overlay2/layerdb/sha256/<layerid>

I don’t want to go deeper into this explanation, because it would get even more complicated and I can’t say I understand everything. The point is that there are many references, and you don’t need all of them for a container to keep working once you have already started it.

If you can rebuild the image, or get the previously built image exported again, you can load it on your server, and that should fix the name too. If it does not assign a name to the ID, you will at least have a complete image and can add a tag manually using docker tag <imageid> <imagename>

Thank you rimelek for the comprehensive explanation. Your point is clear now.