More specifically, clone/pull the repo before you run docker build. Don’t try to run any Git commands from within your Dockerfile, and ideally don’t include a Git binary in your image. When there is a change, delete the old container and start a new one with a new image tag.
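Concretely, the manual version of that loop looks something like this. The image and container names (`myapp`) are placeholders, and the `git`/`docker` commands are echoed rather than executed so the sketch runs anywhere; drop the `echo`s for real use:

```shell
#!/bin/sh
set -e

IMAGE=myapp        # hypothetical image name
CONTAINER=myapp    # hypothetical container name
TAG=abc1234        # in real use: TAG=$(git rev-parse --short HEAD)

# Update the source tree on the host -- no git inside the Dockerfile
echo git pull

# Build an image whose tag identifies the commit it was built from
echo docker build -t "$IMAGE:$TAG" .

# Replace the old container outright rather than changing it in place
echo docker rm -f "$CONTAINER"
echo docker run -d --name "$CONTAINER" "$IMAGE:$TAG"
```

Tagging with the commit hash instead of `latest` means two different builds never collide under the same name, which is what makes the delete-and-recreate step safe.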
When you see posts on the forum about setting up a CI/CD system, that’s exactly what they’re automating: this workflow. Docker Hub (among others) offers a service that automatically rebuilds your images when you push to a GitHub repository; most of the cloud-based generic build systems support Docker these days too; and Jenkins seems to be a popular choice if you’d rather not run everything in the cloud.
The workflow I’ve used at multiple employers works like this:
1. I do my work and open a GitHub pull request.
2. An automated build system runs tests against my proposed PR, possibly building a Docker image along the way.
3. One or more coworkers review and approve the PR.
4. I merge the PR into the master branch.
5. An automated build system builds a new Docker image and updates a pre-production system with the new image.
6. After some manual testing and possibly other process steps, that exact same version of the image gets pushed into the production system.
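The pipeline above boils down to: build once, push once, and deploy the same immutable tag to each environment in turn. A sketch, with a hypothetical registry and Docker hosts, commands echoed rather than run:

```shell
#!/bin/sh
set -e

REGISTRY=registry.example.com           # hypothetical registry
IMAGE=$REGISTRY/myapp
TAG=20240101-abc1234                    # e.g. build date + merge commit hash
PREPROD=tcp://preprod.example.com:2376  # hypothetical Docker hosts
PROD=tcp://prod.example.com:2376

# CI builds and pushes exactly one image per merge to master
echo docker build -t "$IMAGE:$TAG" .
echo docker push "$IMAGE:$TAG"

# Pre-production gets it first, for manual testing and sign-off
echo docker -H "$PREPROD" run -d --name myapp "$IMAGE:$TAG"

# Production later receives the exact same tag -- never a rebuild
echo docker -H "$PROD" run -d --name myapp "$IMAGE:$TAG"
```

The important property is that production runs the byte-for-byte identical image that was tested in pre-production, not a fresh build of the same commit.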
I make mistakes all the time. Always running the latest master code is scary, especially in a loosely typed scripting language where a simple typo won’t be caught until some unusual code path executes it and the application crashes. That’s why this sequence has multiple checkpoints, including after code lands on master but before it’s actually in production, where it’s possible to notice an error and roll back.
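Rolling back falls out of the same tagging scheme: since every deploy is a uniquely tagged, immutable image, "roll back" just means redeploying the previous tag. Tags and names here are hypothetical, commands echoed:

```shell
#!/bin/sh
set -e

IMAGE=registry.example.com/myapp  # hypothetical image
GOOD_TAG=20240101-abc1234         # last version known to work

# The broken deploy (say, 20240102-def5678) is simply abandoned:
echo docker rm -f myapp
echo docker run -d --name myapp "$IMAGE:$GOOD_TAG"
```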