Configuring a server with CentOS and Docker containers for deep learning

Good morning,
I’m Marco, a student at an Italian university.
In my lab there is a server with dual Xeon CPUs and two NVIDIA GPUs that I have to configure as a deep learning server.
I installed CentOS 7 and configured the NVIDIA drivers, Python 3, cuDNN, TensorFlow, Keras, OpenCV 4.1.1 and all the Python packages for deep learning.
Now I’m trying to use Docker to set up the environment like this:
remote desktop -> connects to -> a Docker container on the server -> which connects locally to -> another container on the server with Python 3, TensorFlow, Keras and Jupyter Notebook.
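For the remote-access part I was also wondering if I could skip the remote desktop and just publish Jupyter’s port, reaching it through an SSH tunnel. A minimal sketch of what I mean (the image tag, container name and user@host are only placeholders, and --gpus assumes Docker 19.03+ with the NVIDIA Container Toolkit; with nvidia-docker2 it would be --runtime=nvidia instead):

# On the server: run a Jupyter container and publish its port
# (docker logs jupyter-gpu0 would show the login token).
docker run -d --name jupyter-gpu0 --gpus '"device=0"' -p 8888:8888 \
    tensorflow/tensorflow:latest-gpu-jupyter

# On my machine: forward the port over SSH, then open http://localhost:8888
ssh -L 8888:localhost:8888 marco@server.example.org

Would something like this be reasonable instead of a remote desktop?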

The most important thing is that the server needs a job queue: a user submits their work to the queue, and when the server finishes the preceding jobs it starts the next one that was submitted.
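I don’t have a real scheduler yet; the only thing I could think of is a small worker script on the host that watches a directory of job scripts and runs them one at a time in a container, pinned to one GPU. Just a rough sketch to show the idea (all paths, the image name and the GPU id are placeholders; maybe a real scheduler like Slurm would be better?):

#!/bin/bash
# Minimal FIFO job runner (sketch): users drop *.sh job scripts into /data/queue,
# the worker runs them oldest-first on GPU 0 and moves them to /data/done.
QUEUE=/data/queue
DONE=/data/done
while true; do
    job=$(ls -1tr "$QUEUE"/*.sh 2>/dev/null | head -n 1)
    if [ -n "$job" ]; then
        # /data is bind-mounted, so the job script path is the same inside the container
        docker run --rm --gpus '"device=0"' -v /data:/data my-dl-image bash "$job"
        mv "$job" "$DONE"/
    else
        sleep 30    # queue empty, check again later
    fi
done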

The user can use a Jupyter notebook to put data on the hard disks (there are 6 hard disks in the server), and each Jupyter notebook should use only one GPU at a time.
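If I understand correctly, restricting a container to one GPU and giving it the data disks should just be docker run flags, something like this (the host mount points and image tags are only examples; --gpus needs Docker 19.03+ and the NVIDIA Container Toolkit, otherwise --runtime=nvidia with NVIDIA_VISIBLE_DEVICES):

# Quick check that a container sees only GPU 0
# (the CUDA image tag is just an example).
docker run --rm --gpus '"device=0"' nvidia/cuda:10.1-base nvidia-smi

# Same idea with my image, with two of the data disks bind-mounted.
docker run --rm --gpus '"device=1"' -v /data1:/data1 -v /data2:/data2 \
    my-dl-image python3 -c "import tensorflow as tf; print(tf.__version__)"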

I created an image with all the Python packages necessary for deep learning, but every time I start a container from this image I have to reinstall everything.
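I suspect this happens because I installed the packages inside a running container instead of building them into the image (or because I create a fresh container with docker run each time instead of restarting the old one with docker start). I guess the fix is to bake the packages into the image with a Dockerfile, roughly like this (the base image tag and package list are just my guess; anything written at runtime would go on a volume):

# Build an image with the packages installed once, at build time.
cat > Dockerfile <<'EOF'
FROM tensorflow/tensorflow:latest-gpu-py3
# pin versions here if needed (e.g. OpenCV 4.1.1)
RUN pip install --no-cache-dir keras opencv-python-headless jupyter
EOF
docker build -t my-dl-image .

# Data and notebooks that must survive restarts live on a volume or bind mount.
docker volume create dl-workspace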

My question is: is it possible to configure these containers?
1- a container that permits remote access from a remote desktop (with Docker or anything else)
2- a container that manages the workload on the GPUs
3- a container that does not lose its Python deep learning package configuration every time I restart it
4- a container with Jupyter that uses container 3

I think the right configuration is:
the remote desktop connects to a container that manages the network connection; this container is connected locally on the server to the Jupyter container, which is connected to the container with the Python deep learning packages; this last container (I think it should be possible to run two of them, one per GPU) is managed by the load-manager container.
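Or maybe the separate “network” container and “package” container are not really needed, and it could simply be one image with everything (packages + Jupyter) and two containers started from it, one per GPU, each publishing its own port? Something like this (image name, ports and host paths are placeholders):

# One Jupyter container per GPU, both from the same image, sharing the data disks.
docker run -d --name jupyter-gpu0 --restart unless-stopped \
    --gpus '"device=0"' -p 8888:8888 \
    -v /data1:/data1 -v /data2:/data2 \
    my-dl-image jupyter notebook --ip 0.0.0.0 --no-browser --allow-root

docker run -d --name jupyter-gpu1 --restart unless-stopped \
    --gpus '"device=1"' -p 8889:8888 \
    -v /data1:/data1 -v /data2:/data2 \
    my-dl-image jupyter notebook --ip 0.0.0.0 --no-browser --allow-root

The queue/load-manager part would then be a separate worker (like the script above) that sends batch jobs to whichever GPU is free. Does this make sense?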

Can anyone please help me reach this configuration? Or, if you have a suggestion for a better configuration, or a tutorial or guide for building one, I would be glad to follow it.

Thanks a lot to everyone.