If Docker Hub did the build ten years ago, then Docker Hub would presumably still have that build. I.e., you'd be pulling a ten-year-old build instead of a brand-new one, and a fresh build wouldn't succeed anyway due to broken URLs.
I would imagine most images are built off of pre-built images anyway. If you're building an image off of php:8.1, then that image is, itself, built off of debian:bullseye-slim:
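As a sketch (the app image here is hypothetical; the real php:8.1 Dockerfile lives in the docker-library/php repo and is more involved, but its base really is a Debian slim variant):

```dockerfile
# Hypothetical app image: only the layers below this FROM get built locally;
# everything inside php:8.1 (and debian:bullseye-slim beneath it) is
# pulled pre-built from Docker Hub.
FROM php:8.1
COPY . /var/www/html
```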
But if you're running ./configure, make, make install a bunch of times, that all takes time, and I'd rather the unit tests that continuous integration runs complete as fast as possible instead of taking 30 minutes because the Docker image is being rebuilt over and over and over again.
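For example, installing an extra extension with the official image's docker-php-ext-install helper compiles it from source inside the RUN layer, which is exactly the configure/make cycle I'd rather not pay on every CI run (pdo_mysql here is just an illustrative choice):

```dockerfile
FROM php:8.1
# docker-php-ext-install ships with the official php images; at build
# time it runs the configure / make / make install dance for each
# listed extension, so this RUN layer is where the minutes go.
RUN docker-php-ext-install pdo_mysql
```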
I mean, no rebuilding is needed if you're using an image already on Docker Hub, but if you're not - if you're installing additional extensions and whatnot - then you're going to have to build those additional layers each and every time. Docker does seem to cache layers when I build stuff locally, but how does caching work on Travis CI? https://github.com/travis-ci/travis-ci/issues/5358#issuecomment-248915326 discusses this some, but overall it seems way less ideal than just having the image on Docker Hub. Like, what if a particular image isn't used in CI very often? The cache is most likely not going to keep that image in perpetuity. And what if you wanted to run it locally? Even if Travis CI did have the image cached, that's of little benefit to local use.
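Which is why building once and pushing to Docker Hub appeals to me: CI and local machines both just pull. A rough sketch of that workflow (myuser/php-ci is a hypothetical repository name, and the phpunit invocation is illustrative):

```shell
# Build and push once, or whenever the Dockerfile actually changes:
docker build -t myuser/php-ci:8.1 .
docker push myuser/php-ci:8.1

# CI and local runs then pull the pre-built image instead of rebuilding:
docker pull myuser/php-ci:8.1
docker run --rm -v "$PWD":/app -w /app myuser/php-ci:8.1 vendor/bin/phpunit
```

Pulling the image is a plain download, so it costs the same whether the image was used in CI five minutes ago or five months ago - no cache eviction to worry about.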