Hi all,
I have a very large Docker image (15+ GB) that contains all of our development tools, compilers, etc.
I am already applying the usual best practices and optimizations to keep the image as small as possible, but I am limited by the sheer size of some of the dependencies.
Some pieces change often and some don't, so I would like to split the single image up into multiple images and somehow have them share content. The pieces in question are compiler toolchains, libraries, and Python packages.
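The direction I'm picturing is a slow-changing base image with a thin, frequently rebuilt image layered on top, something like the sketch below (the registry and image names are made up):

```dockerfile
# Slow-changing base: compilers, system libs, Python. Rebuilt rarely,
# pushed as e.g. registry.example.com/dev-base:1.0 (made-up name).
FROM ubuntu:22.04
RUN apt-get update \
 && apt-get install -y --no-install-recommends build-essential cmake python3 python3-pip \
 && rm -rf /var/lib/apt/lists/*
```

```dockerfile
# Fast-changing layer on top: rebuilt often, and a pull only fetches
# its own small layers because the base layers are already cached.
FROM registry.example.com/dev-base:1.0
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install --no-cache-dir -r /tmp/requirements.txt
```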
A concrete example: we have a static third-party Python package that is 1.5 GB. I would like to put that in its own image, and the consuming container could somehow just mount that container's storage and avoid the pip install time.
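For that case, what I'm imagining is baking the package into its own image once, then copying its files in with a multi-stage `COPY --from`, so the main build never re-runs pip. A rough sketch, with made-up image names, package name, and paths:

```dockerfile
# Built once and pushed as registry.example.com/big-pkg:1.0 (made-up name), roughly:
#   FROM python:3.11-slim
#   RUN pip install --no-cache-dir --target=/pkgs some-big-package
#
# Main dev image: pull the pre-installed files straight out of that image.
# This is a layer copy at build time, not a pip install.
FROM python:3.11-slim
COPY --from=registry.example.com/big-pkg:1.0 /pkgs /opt/pkgs
ENV PYTHONPATH=/opt/pkgs
```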
I realize that build caching should help, but we have a lot of dynamic environments where caching doesn't, and the storage, speed, and data-transfer costs of managing a 15 GB image add up on top of that. I haven't been able to find an out-of-the-box solution; my best guess is that I need Docker Compose and data volumes, but that is as far as I got.
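Specifically, the furthest I got was a sketch like this: a short-lived "seed" service copies the pre-installed package into a named volume, and the dev container mounts that volume instead of pip installing. All names are placeholders and I don't know if this is the idiomatic way to do it:

```yaml
# docker-compose.yml sketch (image and volume names are made up)
services:
  pkg-seed:
    image: registry.example.com/big-pkg:1.0
    # Copy the pre-installed package into the shared volume, then exit.
    command: sh -c "cp -a /pkgs/. /shared/"
    volumes:
      - pkg-data:/shared

  dev:
    image: registry.example.com/dev-base:1.0
    depends_on:
      pkg-seed:
        condition: service_completed_successfully
    environment:
      # Make the shared packages importable without installing them.
      - PYTHONPATH=/opt/pkgs
    volumes:
      - pkg-data:/opt/pkgs
    command: sleep infinity

volumes:
  pkg-data:
```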
Any suggestions would be appreciated.
-Jason