Shared library usage

Hello Team,

I am exploring options for using shared libraries more efficiently. My goal is to avoid duplicating libraries across multiple containers. Instead, I want to place them in a common location and have all containers consume them from there. Additionally, when a library version is updated, I would prefer not to update each individual container but rather update the library in the shared location.

For example, instead of having libxyz.so included separately in each container, I want to store it in a shared volume or a central repository. All containers can then reference this single library, so if a new version of libxyz.so is released, I would only need to update it in the central location, rather than updating every container.

I understand that including all libraries in the same Dockerfile is the most common approach. However, I would like to explore alternative methods. Below, I have listed a few methods I found. Some of these approaches may require restarting or rebuilding the application containers. Could you please review my analysis and let me know if I have made any errors?

  1. Use a base image that includes the library
    Create a base image with the libraries: build a Docker image that includes all the common libraries and dependencies your applications need.
    Use the base image in application images: inherit from it with the FROM instruction in each application Dockerfile.
    Rebuild dependent images: when the base image is updated (e.g. a library version changes), rebuild all dependent images to pick up the change.
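
A minimal sketch of this approach, assuming Debian-based images; "libxyz", "myorg/base-libs" and "myapp" are placeholders rather than real packages or images:

```dockerfile
# Dockerfile.base — common layer that provides the shared libraries
# ("libxyz" stands in for whatever library package you actually need)
FROM debian:bookworm-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends libxyz \
 && rm -rf /var/lib/apt/lists/*

# -------------------------------------------------------------------
# Dockerfile.app — every application image inherits the base image
FROM myorg/base-libs:1.0
COPY myapp /usr/local/bin/myapp
CMD ["/usr/local/bin/myapp"]
```

When the library changes, you rebuild the base image under a new tag (e.g. myorg/base-libs:1.1) and then rebuild each application image from that tag.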

  2. Use volume mounts for dynamic library updates
    Store libraries in a shared host directory and mount it as a volume in the containers.
    Update the library files in the host directory.
    Restart the containers so that running processes pick up the new version.
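
A rough sketch of what that could look like, assuming the libraries live under /opt/shared-libs on the host and the application resolves them through LD_LIBRARY_PATH (paths and image names are made up):

```bash
# Put the shared objects in a host directory (path is just an example)
mkdir -p /opt/shared-libs
cp libxyz.so.1.2.3 /opt/shared-libs/
ln -sf libxyz.so.1.2.3 /opt/shared-libs/libxyz.so

# Bind-mount the directory read-only and point the dynamic linker at it
docker run -d --name app1 \
  -v /opt/shared-libs:/usr/local/lib/shared:ro \
  -e LD_LIBRARY_PATH=/usr/local/lib/shared \
  myorg/app:latest

# After replacing libxyz.so on the host, restart the container so the
# running processes load the new version
docker restart app1
```

Whether the application actually picks up the mounted copy depends on how it was built and linked inside the image, which is part of what makes this approach fragile.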

  3. Shared library container
    Create a dedicated container to host shared libraries.
    Application containers communicate with the library container or access the libraries over a shared volume.
    Use Docker volumes or network-mounted filesystems (like NFS) for sharing the libraries.
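
A sketch of the shared-volume variant; "myorg/libs" is a hypothetical image whose only job is to carry the library files under /libs:

```bash
# Named volume that will hold the shared libraries
docker volume create shared-libs

# One-shot "library container" copies its libraries into the volume
docker run --rm -v shared-libs:/target myorg/libs \
  sh -c 'cp -a /libs/. /target/'

# Application containers mount the same volume read-only
docker run -d --name app1 \
  -v shared-libs:/usr/local/lib/shared:ro \
  -e LD_LIBRARY_PATH=/usr/local/lib/shared \
  myorg/app:latest
```

For NFS, the named volume could instead be created with the local driver's NFS options (docker volume create --driver local --opt type=nfs ...), while the application side stays the same.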

Can someone let me know whether my understanding is correct, and what the practical challenges are with each of these approaches?

I almost stopped reading here. It goes against the concept of containers. Volumes can be used for data, but don’t use them for libraries. You can easily break all your containers by making a mistake with the libraries. Each container should be independent of the others and independently updated.

So the right option is the first one, assuming that all of your containers can be based on the same distribution or distroless base image.


Thank you.
What about option 2?

I started my post with my answer to that: don’t use volumes for libraries.

Maybe option 3, if you meant using a container as an API server: all operations are handled in that container and the other containers send requests to it, but don’t share libraries on volumes.
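
Roughly, that pattern could look like this; the image names, the port, and the LIBXYZ_API_URL variable are all made up for illustration:

```bash
# Only one container bundles libxyz and exposes it behind a small HTTP API;
# the other containers never load the library, they just call the API.
docker network create appnet

docker run -d --name libxyz-api --network appnet myorg/libxyz-api:1.0

docker run -d --name app1 --network appnet \
  -e LIBXYZ_API_URL=http://libxyz-api:8080 \
  myorg/app:latest
```

Updating the library then means rebuilding and redeploying only the libxyz-api container.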


I mentioned option 2 by mistake. I was trying to refer to option 3.

I may need to read further on what you have just mentioned about the API server.
