Can someone explain to me why the normal Docker process is to build an image from a Dockerfile and then upload it to a repository, instead of just moving the Dockerfile to and from the repository?
Let’s say we have a development laptop and a test server with Docker.
If we build the image locally, that means uploading the full image to the repository and downloading it again on the server — every package installed in the Dockerfile included. This can be very large (e.g. a PyTorch image can easily exceed 500 MB).
Instead of transporting the large image file to and from the server, wouldn't it make more sense to build the image locally just to verify that it works, but otherwise transport only the small Dockerfile and build the image on the server?
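To make the comparison concrete, here is a sketch of the two workflows I mean (registry address, server hostname, and image tag are placeholders):

```shell
# Conventional workflow: build locally, push the whole image,
# then pull it on the test server (transfers every layer).
docker build -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:latest
ssh user@test-server 'docker pull registry.example.com/myapp:latest'

# Alternative I'm asking about: copy only the small Dockerfile
# (plus build context) and build directly on the server.
scp Dockerfile user@test-server:~/myapp/
ssh user@test-server 'cd ~/myapp && docker build -t myapp:latest .'
```

In the second workflow, only the Dockerfile crosses the network between laptop and server, but the server still downloads all the base images and packages itself during the build.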