Your approach is not really Docker-esque. It seems like you need help migrating a conventional bare-metal installation routine to the Docker world.
Two general hints/observations:
1.) Logging:
Your current set of scripts decomposes the installation and uses a generic wrapper to handle logging to files in the OS. In the Docker world, the recommended log target is STDOUT. If you use a CI system or Docker Hub's automated builds to build your images, those will keep the logs of your build for you.
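To make that concrete, here is a minimal sketch (the script name `install-step.sh` is a placeholder for one of your wrapped steps, not something from your repo):

```dockerfile
# Before: the wrapper redirects output into a log file baked into the image
# RUN ./install-step.sh >> /var/log/install.log 2>&1

# After: just let the command write to STDOUT/STDERR;
# the build tool (docker build, CI, Docker Hub) captures it as the build log
RUN ./install-step.sh
```

This also means failed steps abort the build immediately with the error visible in the build output, instead of being buried in a log file inside a half-built image.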
2.) Build procedure:
You do copy your files into the image, but instead of running the consecutive commands inside the Dockerfile, you start a container from your "install image" and then run your commands manually. You might want to move the commands from "add-ons" into one (!) Dockerfile and make it call your Ruby dependency wrapper. Make sure to perform an `apt-get update` prior to your first use of `apt-get install`, and install packages with `--no-install-recommends` to avoid unnecessarily bloating your image. Each RUN declaration creates a layer, so make sure that cleanup (removing a file/folder, clearing the apt cache) or chmod/chown operations happen in the very same layer the file was initially created in; otherwise you might end up with unused files in deeper layers that are merely overridden or hidden by higher layers. Hint: under the hood, each layer is stored as a separate tar, and the image is composed of metadata that links all the used layer tars together. See: https://blog.docker.com/2019/07/intro-guide-to-dockerfile-best-practices/
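Pulling those points together, a refactored Dockerfile could look roughly like this. This is only a sketch: the base image, the package names, and the `bundle install` call stand in for whatever your scripts actually install, since I can't see the full list:

```dockerfile
FROM ubuntu:22.04

# Bring the application files into the image once, up front
COPY . /app
WORKDIR /app

# update, install, and clean up in ONE RUN so the apt lists/cache
# never survive into a layer of the final image
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
        build-essential \
        ruby-full \
 && rm -rf /var/lib/apt/lists/*

# call your Ruby dependency wrapper here instead of doing it
# manually in a running container (bundler assumed as an example)
RUN bundle install
```

Note how the `rm -rf /var/lib/apt/lists/*` sits in the same RUN as the `apt-get update` that created those files; splitting it into its own RUN would only hide the files in a higher layer without shrinking the image.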
Once you refactor your Dockerfile and scripts, I can have another look.