List only outdated images

Current situation:

When I run VS Code, I can see these marks for outdated containers,
so I can update what I need:
(screenshot: Screenshot_20230817_115135)

Mainly, I update with this command:

# Pull every repo:tag listed locally, then clean up and show a summary
docker images | grep -v '^REPO' | sed 's/ \+/:/g' | cut -d: -f1,2 |
    xargs -L1 docker pull;
docker system prune -f;
docker system df;
docker images;
echo;
docker ps -s;

This command fetches updates for every single tag of every image, even if it's not outdated.
That takes a lot of time and ties up my tiny dev server for several minutes (its network is not good).

I need to replace

docker images

in that script with some other way that filters only outdated images.

My question:

Is there any way to list outdated images?

Not really. Though, you could use something like watchtower or diun to notify you when there are new images for the repo tags you use.

Though, personally, I would just set the pull policy to always, so whenever you start a container, it pulls the most recent image before the container is created.

docker run --pull=always

see: https://docs.docker.com/engine/reference/commandline/run/#pull

services:
  service:
    ...
    pull_policy: always
    ...

see: https://docs.docker.com/compose/compose-file/05-services/#pull_policy


Thanks for the recommendations, I tried them.
watchtower is good, but it adds another running container to the system and a continuous link to the registries.

I'm using custom images, so I need to build them against upstream updates first.
Also, my network is not good.

What I need is to filter the local images for the outdated ones, then run docker pull against those only.

I found the docker images --filter option, but I can't make it filter only outdated images,
mainly those whose tags were pushed to the registry after I pulled them.

I think I found the file in the Docker VS Code extension that provides this functionality here

but I'm not sure how to implement it with only docker commands.
It seems to be comparing image IDs, calling them imageRefs, but I can't understand the script.

I am not sure I understand your problem correctly.

docker build has a --pull parameter to pull the base image before building the image.
And in my previous post I illustrated how to set the pull policy to always when creating a container.

Yes, you do.
The only mismatch between us is that I want to implement this functionality in the script I provided, not at the container run level, so I need a command that only lists the images, not one that pulls them directly.
The script will pull them when I instruct it to.

Maybe the best way is to stop at this point,
since the pull command checks whether it needs to pull the image or not.

I have found a useful way to reduce network usage using this modified version of the script:

# Pull only images that have a RepoDigest (i.e. that came from a registry);
# indexing RepoDigests fails for locally built images (empty list), and && then skips the pull.
docker images --format '{{.Repository}}:{{.Tag}}' |
    xargs -I {} bash -c 'docker inspect --format="{{index .RepoDigests 0}}" {} && docker pull {};';
docker system prune -f;
docker system df;
docker images;
echo;
docker ps --size;

Now, when a digest doesn't exist, the indexing returns an error, and the && prevents the pull that follows from running.
Only images that come from a registry have a digest, so custom images weren't pulled,
which in my case reduced pull requests by more than half of the original number.
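The same filtering idea can be sketched as a small standalone step. This is only a sketch, not the exact script: `registry_backed` is a hypothetical helper name, and in practice the digest counts would come from `docker image inspect --format '{{len .RepoDigests}}'` per image:

```shell
# registry_backed: read "ref<TAB>digest_count" lines on stdin and print
# only the refs whose RepoDigests list is non-empty (count > 0),
# i.e. images that came from a registry rather than local builds.
registry_backed() {
    awk -F'\t' '$2 > 0 {print $1}'
}

# Simulated input; in practice the count per ref would come from
#   docker image inspect --format '{{len .RepoDigests}}' "$ref"
printf 'nginx:latest\t1\nmyapp:dev\t0\nphp:8.2-alpine\t2\n' | registry_backed
# prints nginx:latest and php:8.2-alpine; myapp:dev is local-only and skipped
```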

As for telling which image needs an update:
I'm now sure the extension I'm using determines that with two requests over the network, and that would just raise the number of requests, against what I want to achieve.
Thus I can let the pull command simply try to pull; if there is no need to pull, it won't download anything.

I think there is no need for listing outdated images anymore.

How would this help you list outdated images? RepoDigests will be missing when you built an image locally. Otherwise, you will have a list of RepoDigests of already existing images, and at the next step you attempt to pull the same image you already have, since a digest always points to the same version of the image.

You could use hub-tool to check if there is a newer version of the image if you pulled the image from Docker Hub.

image_name="php:8.2-alpine"
platform="linux/arm64"
new_platform_digest="$(hub-tool tag inspect "$image_name" --format json --platform "$platform" | jq --raw-output '.Descriptor.digest')"
new_manifest_digest="$(hub-tool tag inspect "$image_name" | awk '/^Digest:/ {print $2}')"

Then compare “$new_platform_digest” and “$new_manifest_digest” with the current digests of the image. The “manifest digest” is the digest you see when you pulled the image without specifying the platform. The “platform digest” will be there when you pulled the image using the “platform” option. If neither of these digests is in the list of RepoDigests of your existing image, the image has a newer version.
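The comparison itself can be sketched like this. This is a rough sketch: `needs_update` is a hypothetical helper, the digests are made up, and the local list would really come from `docker image inspect --format '{{join .RepoDigests "\n"}}' "$image_name"`:

```shell
# needs_update: exit 0 (true) when neither the manifest digest nor the
# platform digest from Docker Hub appears in the image's local RepoDigests.
# $1: newline-separated local RepoDigests, $2: manifest digest, $3: platform digest
needs_update() {
    ! printf '%s\n' "$1" | grep -qF -e "$2" -e "$3"
}

# Example with made-up digests:
local_digests='php@sha256:aaa
php@sha256:bbb'
if needs_update "$local_digests" "sha256:ccc" "sha256:ddd"; then
    echo "newer image available"
else
    echo "image is up to date"
fi
# prints: newer image available
```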

Hub tool is part of Docker Desktop, but you can download it on Linux as well

It will not work with other registries. If you have images from any other registry than Docker Hub, you will need to use a registry API and I have no experience with it.


Yes, and that helped: only ~16 images out of 67+ were pulled; the rest were old custom images (local ones).


A good custom solution for the Docker Hub CLI;
it could be added to what @meyay mentioned among the fast solutions.


For that, I see the same implementation you mentioned within the VS Code Docker extension.

But the problem is that, when using such a solution, I will have at least two network trips before the pull:

  • no. 1 → for auth
  • no. 2 → for checking digest
  • no. 3 → for pulling

while the pull command uses only one trip and downloads only when needed.

So I found that the approach I was considering, using an external tool, was not efficient in reducing resources,
unless I go to an enterprise scale or have a client that needs it.


Mainly, I was able to reduce network bandwidth by pulling only the images that have digests.
Then I added a check to the script to make sure at least one image was actually pulled before executing the rebuild script for the rest of the local images. That somewhat reduced CPU usage when the scripts run from cron or a hypervisor, and it took less time to execute.
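That "at least one image pulled" check can be sketched by scanning the docker pull output, which prints a "Status: Downloaded newer image" line only when something was actually fetched. A sketch under those assumptions; `count_updates` is a hypothetical helper and the rebuild command is a placeholder:

```shell
# count_updates: count how many pulls actually fetched a newer image by
# counting "Downloaded newer image" status lines in `docker pull` output.
count_updates() {
    grep -c 'Downloaded newer image'
}

# Simulated pull log; in practice: pull_log=$(... | xargs -L1 docker pull)
pull_log='Status: Image is up to date for nginx:latest
Status: Downloaded newer image for php:8.2-alpine
Status: Downloaded newer image for redis:7'

updated=$(printf '%s\n' "$pull_log" | count_updates)
if [ "$updated" -gt 0 ]; then
    echo "$updated image(s) updated; running rebuild"   # placeholder for the rebuild script
else
    echo "nothing updated; skipping rebuild"
fi
# prints: 2 image(s) updated; running rebuild
```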

That's awesome.
Thanks a lot!