This command fetches updates for every single tag of every image, even when it is not outdated.
That takes a lot of time and ties up my tiny dev server for several minutes (its network is not good).
I need to replace
docker images
in that script with some other way to filter only the outdated images.
Not really. Though, you could use something like watchtower or diun to notify you when there are new images for the repo tags you use.
Though, personally, I would just set the pull policy to always, so whenever you start a container, it pulls the most recent image before the container is created.
Docker build has a parameter to pull the base image before building the image.
And in my previous post I illustrated how to set the pull policy to always when creating a container.
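A minimal sketch of both options (nginx:latest and myapp:latest are just placeholder image names):

```bash
# Pull policy "always": check the registry and pull a newer image
# (if one exists) right before the container is created
docker run -d --pull always --name web nginx:latest

# Same idea at build time: --pull re-pulls the base image before building
docker build --pull -t myapp:latest .
```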
Yes, you do.
The only mismatch between us is that I want to implement this functionality in the script I provided, not at the run level of the containers, so I need a command that only lists them rather than pulling them directly.
The script will pull them when I instruct it to.
Now, when the digest does not exist, indexing returns an error, and the "&&" operator prevents what follows from running on error.
Only images that come from a registry have a digest, so custom (locally built) images weren't pulled,
which in my case reduced pull requests by more than half of the original number.
As for telling which image needs an update:
I'm now sure that the extension I'm using checks that with 2 network requests, which would just raise the number of requests and work against what I want to achieve.
Thus I can let the "pull" command simply try to pull; if there is no need for pulling, it won't.
I think there is no need for listing outdated containers anymore.
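A minimal sketch of that part of the script, as I described it above:

```bash
# Iterate over all local tagged images; skip <none> dangling entries
docker images --format '{{.Repository}}:{{.Tag}}' | grep -v '<none>' |
while read -r img; do
  # Indexing RepoDigests fails for locally built images (the list is empty),
  # so "&&" skips the pull for anything that did not come from a registry
  docker image inspect --format '{{index .RepoDigests 0}}' "$img" >/dev/null 2>&1 \
    && docker pull "$img"
done
```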
How would this help you list outdated images? The repo digest will be missing when you built an image locally. Otherwise, you will have a list of repo digests of already existing images, and at the next step you attempt to pull the same image you already have, since a digest always points to the same version of the image.
You could use hub-tool to check if there is a newer version of the image if you pulled the image from Docker Hub.
Then compare the "$new_platform_digest" and "$new_manifest_digest" with the current digests of the image. The "manifest digest" is the digest you see when you pulled the image without specifying the platform. The "platform digest" will be there when you pulled the image using the "platform" option. If none of these digests is in the list of RepoDigests of your existing image, the image has a newer version.
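A minimal sketch of the comparison step, assuming you already extracted the two digests from hub-tool's output (the image name and digest values are placeholders):

```bash
image="myorg/myimage:latest"          # placeholder
new_manifest_digest="sha256:aaa..."   # taken from hub-tool output (placeholder)
new_platform_digest="sha256:bbb..."   # taken from hub-tool output (placeholder)

# List the RepoDigests of the local image, one per line
local_digests=$(docker image inspect \
  --format '{{range .RepoDigests}}{{println .}}{{end}}' "$image")

# If neither digest appears locally, a newer version exists on Docker Hub
if ! grep -qF -e "$new_manifest_digest" -e "$new_platform_digest" <<<"$local_digests"; then
  echo "$image has a newer version"
fi
```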
Hub tool is part of Docker Desktop, but you can download it on Linux as well.
It will not work with other registries. If you have images from any registry other than Docker Hub, you will need to use a registry API, and I have no experience with that.
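For reference, the generic Registry v2 HTTP API exposes the manifest digest in a response header. A rough sketch for a registry without auth (most registries require a bearer token first, which is omitted here; the registry host and repo path are placeholders):

```bash
# HEAD the manifest and read the Docker-Content-Digest header
curl -sI \
  -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
  https://registry.example.com/v2/myrepo/manifests/latest |
grep -i '^docker-content-digest'
```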
Yes, and that helped: it pulled fewer than ~16 images out of 67+, skipping the old custom (local) ones.
That's a good custom solution for the Docker Hub CLI;
it could be added to what @meyay mentioned about fast solutions.
For that, I see the same implementation you mentioned within the VS Code Docker extension.
But the problem is that with such a solution I will have at least 2 network round trips before using pull:
no. 1 → for auth
no. 2 → for checking the digest
no. 3 → for pulling
while using the pull command alone takes one trip and only pulls if needed.
So I found that the approach I was considering, using an external tool, was not efficient in reducing resources,
unless I go for enterprise scale or a client that needs that.
Mainly, I was also able to reduce network bandwidth by only pulling images that have digests.
Then I added a check to the script to make sure at least one image was actually pulled before executing the rebuild script for the rest of the local images; that somewhat reduced CPU usage when adding the scripts to cron or a hypervisor, and it took less time to execute.
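A sketch of that check, assuming ./rebuild.sh stands in for the actual rebuild script:

```bash
#!/usr/bin/env bash
pulled=0
while read -r img; do
  # Only registry images have a RepoDigest; skip locally built ones
  docker image inspect --format '{{index .RepoDigests 0}}' "$img" >/dev/null 2>&1 || continue
  # "docker pull" prints "Downloaded newer image" only when it fetched something
  if docker pull "$img" | grep -q 'Downloaded newer image'; then
    pulled=$((pulled + 1))
  fi
done < <(docker images --format '{{.Repository}}:{{.Tag}}' | grep -v '<none>')

# Rebuild the local images only if at least one base image actually changed
if [ "$pulled" -gt 0 ]; then
  ./rebuild.sh   # placeholder for the actual rebuild script
fi
```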