I’m trying to find a holistic solution for getting our compilers and toolchains into Docker containers efficiently.
- We have several target platforms for software and firmware. The toolchains range from a few hundred MB to 15 GB for the firmware ones. Different services in our docker-compose.yml files build different parts of each product, with one docker-compose.yml per product that controls the differences between dev and prod environments.
- What I’d really like: being able to push Docker volumes to a registry, or to something docker/docker-compose can automatically pull from when the volume isn’t on the local host. Of course, volumes can’t be pushed.
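For context, here is a stripped-down sketch of what one of these per-product compose files looks like today. Every name, path, and variable in it is hypothetical, just to illustrate the setup:

```yaml
# docker-compose.yml (one per product) -- all names hypothetical
services:
  firmware-builder:
    image: product-firmware-builder:latest   # build logic lives in this image
    volumes:
      # the toolchain must already exist on the local host; this is the pain point
      - /opt/toolchains/arm-gcc:/opt/toolchain:ro
    environment:
      - BUILD_ENV=${BUILD_ENV:-dev}          # controls dev vs prod differences
```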
What won’t work:
- NFS mounts. They are slow; our builds are 50+% faster when the compilers/toolchains are on the local system’s storage.
- Pushing volumes to a registry (Docker doesn’t support this).
Solutions I am contemplating:
1.) Docker volume driver plugin: I have looked around to see if there are any that could help, but I haven’t found one yet. If they exist, please tell me. S3 won’t work for us.
2.) Create a volume and bake it into our production build nodes (AMIs in our case). However, this doesn’t let devs build locally against that volume, the pipeline to produce it gets very cumbersome, and it doesn’t allow for easy debugging or upkeep.
3.) Follow Eran Avidan’s idea and use an image instead of a volume. This still seems the most likely option, but I don’t like it. I have structured my docker-compose.yml files so devs can use their own compilers, or supply a variable so compose uses the image with the compilers instead. docker-compose cannot make this switch without lots of editing, and spinning up these extra images burns CPU.
4.) Bake them into the product’s main image, the one that contains the build logic. This would create huge images, make it cumbersome to update the main image, and prevent devs from using their own compilers if they want to.
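For what it’s worth, if option 3 wins out, the usual shape of the image-as-volume trick relies on Docker’s behavior of copying an image’s contents into an *empty* named volume on first mount, so builds read the toolchain from local storage rather than from the image on every run. A minimal sketch, where the image name and paths are assumptions:

```yaml
# docker-compose.yml fragment -- image name and paths are hypothetical
services:
  toolchain-seed:
    image: registry.example.com/arm-toolchain:1.0  # toolchain lives under /toolchain in this image
    command: ["true"]                              # exits immediately; only exists to seed the volume
    volumes:
      - toolchain:/toolchain   # first mount copies the image's /toolchain into the empty volume
  builder:
    image: product-firmware-builder:latest
    depends_on:
      toolchain-seed:
        condition: service_completed_successfully
    volumes:
      - toolchain:/opt/toolchain:ro
volumes:
  toolchain:
```

The catch is that the copy only happens while the volume is empty, so updating the toolchain means removing and re-seeding the volume (`docker volume rm`), which is part of the upkeep burden mentioned above.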
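Also on option 3: the dev-compilers-vs-image switch doesn’t necessarily require editing the compose file. An override file selected via the `COMPOSE_FILE` environment variable can swap the toolchain source per developer. A sketch, with hypothetical file and path names:

```yaml
# docker-compose.local-toolchain.yml -- a dev opts in with something like:
#   COMPOSE_FILE=docker-compose.yml:docker-compose.local-toolchain.yml docker compose up
services:
  builder:
    volumes:
      # replaces the toolchain volume mount with a bind mount of the dev's own compilers
      - ${LOCAL_TOOLCHAIN:-/usr/local/arm-gcc}:/opt/toolchain:ro
```

This keeps the base docker-compose.yml pointed at the shared image/volume, while devs who want their own compilers set one variable instead of editing YAML.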
Does anyone have any ideas or suggestions I could investigate?