Docker dealing with images instead of Dockerfiles

Can someone explain to me why the normal Docker process is to build an image from a Dockerfile and then upload it to a repository, instead of just moving the Dockerfile to and from the repository?

Let’s say we have a development laptop and a test server with Docker.

If we build the image, that means uploading and downloading all of the packages referenced in the Dockerfile. Sometimes the result can be very large (e.g. a PyTorch image can easily exceed 500 MB).

Instead of transporting the large image file to and from the server, wouldn't it make more sense to build the image locally just to verify that it works, but mostly transport the small Dockerfile and build the image on the server itself?
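Something like this is what I have in mind (just a sketch; the server name, paths and image tag are made up):

```
# On the laptop: build once to verify the Dockerfile works
docker build -t myapp:latest .

# Copy only the Dockerfile (and any build context) to the test server
scp Dockerfile user@test-server:/srv/myapp/

# On the test server: rebuild the same image from the Dockerfile
docker build -t myapp:latest /srv/myapp/
```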

The Docker Registry does not build images. It hosts them.

You can, however, run automated builds using a CI/CD product (like Jenkins/CloudBees), which then pushes the Docker images to your Docker Registry.
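As a rough sketch, the CI job usually boils down to something like the following (the registry address and image name here are placeholders):

```
# Build the image from the Dockerfile in the checked-out repository
docker build -t registry.example.com/myteam/myapp:1.0 .

# Authenticate against the registry and push the finished image
docker login registry.example.com
docker push registry.example.com/myteam/myapp:1.0
```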

Also, when you build a Docker image that is meant to run on a specific operating system and architecture, you have to build it on a Docker node (engine) whose operating system and architecture match those of the image being built (a sketch of this per-architecture workflow follows the list below). For example:

1. An application that needs to run on Linux amd64: its image must be built on a Docker node running on a Linux amd64 machine.

2. An application that needs to run on Linux on Power: its image must be built on a Docker node running on a Linux on Power machine.

3. An application that needs to run on Linux on Z (s390x): its image must be built on a Docker node running on a Linux on Z (s390x) machine.

4. An application that needs to run on Windows amd64: its image must be built on a Docker node running on a Windows amd64 machine.

5. And so on.
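One common pattern for this (a sketch only; the registry and image names are placeholders, and `docker manifest` may require the experimental CLI to be enabled on older Docker versions) is to build and push an architecture-specific tag on each matching node, then combine them into a single multi-arch tag:

```
# On the Linux amd64 build node
docker build -t registry.example.com/myteam/myapp:1.0-amd64 .
docker push registry.example.com/myteam/myapp:1.0-amd64

# On the Linux on Z (s390x) build node
docker build -t registry.example.com/myteam/myapp:1.0-s390x .
docker push registry.example.com/myteam/myapp:1.0-s390x

# On any node: combine the per-architecture images into one multi-arch tag
docker manifest create registry.example.com/myteam/myapp:1.0 \
    registry.example.com/myteam/myapp:1.0-amd64 \
    registry.example.com/myteam/myapp:1.0-s390x
docker manifest push registry.example.com/myteam/myapp:1.0
```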