Help requested in building a humongous container

I’m building a rather humongous container to hold accessibility-related software for use on personal computers. It currently contains several dozen packages, plus their dependencies (and I expect it to keep growing).

I have a working set of scripts and a documented procedure for building the container, but I strongly suspect that things could be simplified. Interested parties are therefore invited to review my approach and offer comments, suggestions, etc. Here are some links:


If you share a URL to an SCM repo, e.g. on GitHub or GitLab, I might take a look.

I find it rather confusing to see a bunch of commands instead of the Dockerfile and all the involved scripts.

Sorry, I meant to put in some links to the source code. Here you go…

Your approach is not really Docker-esque :wink: It seems like you need help migrating a conventional bare-metal installation routine to the Docker world.

Two general hints/observations:

1.) Logging:
Your current set of scripts decomposes the installation and uses a generic wrapper to handle logging to files in the OS. In the Docker world, the recommended log target is STDOUT. If you use a CI system to build your images, or Docker Hub's automated builds, those will hold the logs of your build.
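As a rough sketch of what that change looks like (the wrapper name `run_step` is hypothetical, standing in for your generic logging wrapper, not taken from your scripts):

```shell
# Before (bare-metal style): each step's output is redirected to a
# log file somewhere in the OS.
#   run_step() { "$@" >> "/var/log/build/steps.log" 2>&1; }

# After (Docker style): merge stderr into stdout and let it flow;
# `docker build` or your CI system captures it as the build log.
run_step() { "$@" 2>&1; }

run_step echo "installing base packages"   # prints "installing base packages"
```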

2.) Build procedure:
You do copy your files into the image, but instead of calling the consecutive commands inside the Dockerfile, you start a container from your “install image” and then run your commands manually. You might want to move the commands from “add-ons” into one (!) Dockerfile and have it call your ruby dependency wrapper. Make sure to perform an apt-get update prior to your first use of apt-get install, and install packages with --no-install-recommends so as not to unnecessarily bloat your image. Each RUN declaration creates a layer, so make sure that cleanup (removing a file/folder, clearing the apt cache) and chmod/chown operations happen on the very same layer the file was initially created on - otherwise you might end up with unused files in deeper layers that are just overridden or hidden by higher layers.

Hint: under the hood, each layer is stored in a separate tar. The image is composed of metadata that links all the used layer tars together. See: https://blog.docker.com/2019/07/intro-guide-to-dockerfile-best-practices/
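To illustrate the layer point, here is a minimal Dockerfile sketch. The base image, package names, and paths are placeholders, not the actual contents of your “add-ons” scripts:

```dockerfile
FROM debian:bullseye-slim

# update, install, and clear the apt cache in ONE RUN instruction,
# so the cache never lands in a layer of its own
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
        espeak-ng \
        brltty \
 && rm -rf /var/lib/apt/lists/*

# set ownership at COPY time instead of a later "RUN chown",
# which would duplicate the files into a new layer
COPY --chown=root:root setup/ /opt/setup/

# hypothetical entry point for your ruby dependency wrapper
RUN /opt/setup/install-deps.rb
```

If the install were split as `RUN apt-get update` and `RUN apt-get install …` on separate lines, the cached package index would be frozen into its own layer and could go stale; combining them keeps the image small and the cache consistent.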

Once you refactor your Dockerfile and scripts, I can have another look.

I assume you are aware that a Docker container is not a VM, and that a Docker image is supposed to package a single application, its dependencies, and a set of entrypoint scripts to configure the application on container start.

After a closer look at your TOML file, the use cases are unclear: it clearly indicates that you are trying to package the behavior of a full OS into a Docker image. Are you sure that a VM (for instance with Vagrant) is not a better fit for what you are planning to do?

That’s an interesting suggestion. Although Docker provides a lightweight environment, the container and host have to run the same kernel. Because most of my prospective users will be using macOS or Windows, they would be forced to run VirtualBox first. Vagrant looks tasty; I’ll look into using it.

They are forced to install whichever provider your Vagrantfile is tailored for. VirtualBox is not really a terrible choice: the license permits commercial usage (though the Extension Pack requires licensing for commercial usage!), it is supported on many platforms, and since VBox 6 the new graphics drivers allow a fluid experience.

Vagrant instructs VBox to perform the actions; there is no need to interact with VBox directly.

Typical commands:
– vagrant up (create/start vm)
– vagrant halt (stop vm)
– vagrant destroy (delete vm)

If you need a folder to be mapped into the VM, or a host port needs to be mapped into the VM, you simply declare it in the Vagrantfile.
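A minimal Vagrantfile sketch for both declarations (the box name, folder paths, and port numbers are examples only, not taken from the project):

```ruby
# Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "debian/bullseye64"

  # map ./shared on the host to /vagrant_data inside the VM
  config.vm.synced_folder "./shared", "/vagrant_data"

  # forward host port 8080 to guest port 80
  config.vm.network "forwarded_port", guest: 80, host: 8080
end
```

With this in place, `vagrant up` creates the VM with the folder and port mappings already applied.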