Docker images don’t combine like that. An image only has one parent image, and there’s no provision to merge the filesystems of two images.
Your question is a little unclear to me, so I’ll suggest two paths that sound like what you’re saying:
You can build your own “base image” that has the upstream python:3 with the GIS libraries you need added in; then install your actual application on an image based on that.
python:3 --> my_python_gis --> my_flask_app
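A minimal sketch of that chain, assuming GDAL is the GIS dependency you need (the image names and package names are illustrative, not taken from your setup):

```dockerfile
# Dockerfile.gis — builds "my_python_gis", the reusable base layer
FROM python:3
# Assumed GIS system packages; substitute whatever your app actually needs
RUN apt-get update \
 && apt-get install -y --no-install-recommends gdal-bin libgdal-dev \
 && rm -rf /var/lib/apt/lists/*
```

```dockerfile
# Dockerfile — builds "my_flask_app" on top of the GIS base image
FROM my_python_gis
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run", "--host=0.0.0.0"]
```

You'd build the base once with `docker build -t my_python_gis -f Dockerfile.gis .`, then `docker build -t my_flask_app .`; rebuilding the application image reuses the cached GIS layers, so the slow library install doesn't rerun on every code change.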
You could split your application into a “front end”, which has your existing Flask application, and a “back end”, which has the GIS libraries. You’d need to write a second application in the “back end” container (a second Flask app would be fine) and then connect to it from the “front end” container (e.g., using the requests library). If the set of calls into the “back end” application was pretty stable, you could iteratively develop the “front end” application and redeploy only that as it changed.
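To illustrate the shape of that split without pulling in Flask or requests, here is a minimal sketch using only the standard library: a tiny HTTP "back end" standing in for the GIS service, and a "front end" call against it (the `/distance` endpoint and its response are hypothetical):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class GISHandler(BaseHTTPRequestHandler):
    """Stand-in for the GIS-heavy back-end Flask app in its own container."""

    def do_GET(self):
        # Pretend this endpoint performs an expensive GIS computation.
        body = json.dumps({"distance_km": 42.0}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

# Start the "back end" on any free port, in a background thread.
server = HTTPServer(("127.0.0.1", 0), GISHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Front end": call the back end over HTTP, exactly as the Flask app
# would do with the requests library against the other container.
url = f"http://127.0.0.1:{server.server_port}/distance"
with urlopen(url) as resp:
    result = json.load(resp)
print(result["distance_km"])
server.shutdown()
```

In real containers the two sides would talk over the Docker network (e.g. `http://backend:5000/distance`) instead of localhost, but the contract is the same: as long as that HTTP interface stays stable, the front end can be redeployed independently.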
I agree with @dmaze, although it's fair to say there isn't a right or wrong way to do this. Docker attracts a lot of commentary around solution designs that favour decomposing larger applications into micro-services, and on the whole that can be a successful pattern. So in your example you would likely split the back-end and front-end functionality into separate containers, and you might go further than that depending on other implementation details which you haven't shared. For example, you might have a caching layer, a messaging layer, some external APIs that get called, and so on. All of those are candidates for micro-services.

The value proposition here is that it is (in some ways) easier to scale independent components; it creates a more loosely coupled but coherent design, so it's easier to swap out individual parts; and managing the life-cycle of each micro-service separately potentially gives rise to fewer breaking changes, or at least reduces their blast radius. OTOH, highly distributed apps can be MUCH harder to reason about when things go wrong, and can bring temporal dependencies, race conditions and other complex considerations into play. For that reason, some still prefer to use Docker more as a packaging and distribution format for self-contained apps (some might call this a monolith, but that suggests it's wrong, and IMO it isn't; it's just a design choice).

Your idea of 'combining' the functionality would likely turn into a Dockerfile that implements a multi-stage build.
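For reference, a multi-stage build installs or compiles the heavy dependencies in one stage and copies only the results into a slimmer final image. A hedged sketch, assuming your GIS packages can be pre-built as wheels (package names are illustrative; if the wheels link against shared libraries such as GDAL's, the runtime stage would also need those installed):

```dockerfile
# Stage 1: build-time image with compilers and headers
FROM python:3 AS builder
RUN apt-get update \
 && apt-get install -y --no-install-recommends libgdal-dev
COPY requirements.txt .
# Build wheels so the final stage needs no compiler toolchain
RUN pip wheel --wheel-dir /wheels -r requirements.txt

# Stage 2: slim runtime image; only the built wheels carry over
FROM python:3-slim
COPY --from=builder /wheels /wheels
COPY requirements.txt .
RUN pip install --no-index --find-links /wheels -r requirements.txt
WORKDIR /app
COPY . .
CMD ["flask", "run", "--host=0.0.0.0"]
```

Only the final stage becomes the shipped image; the builder stage (with its dev headers and build artifacts) is discarded, which keeps the runtime image small without giving up the ability to compile GIS extensions.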