I have a question about integrating custom code into a Docker container.
We release our software product as a Docker image. The application running in the container needs to integrate at run time with code supplied by each client, so it can talk to the client’s internal systems; these integrations range from authentication to data exchange. We want to achieve the following:
- Keep our Docker image intact, since this has commercial and product-support implications.
- Allow our product the flexibility to integrate with the client’s code at run time.
What is the best way to achieve this? Should we use a Dockerfile COPY instruction (or a bind mount in Docker Compose) to bring in the custom code? Are there any other alternatives? Please share your experience and thoughts.
I have exactly the same problem: we have a container with our “standard platform” and we need to enrich it with customized software that differs from client to client.
I’m afraid I don’t have a solution for you… or for me!
At the moment, I have three possible solutions in mind:
- Inject our custom code into the container using a volume. Our platform uses PHP Symfony, so this is technically possible, but it’s a very poor solution: difficult to maintain and easy to break during platform updates.
- Use a “package”. In PHP, Composer lets you create and deliver software packages; it’s similar to Maven, if you use Java. In this case, I imagine the “standard container” would have to run “composer require customPackage” during boot, in order to load a package that differs for every deployment. Unfortunately, this makes the boot operation longer.
- Use another container, call it the “custom container”. In a microservices manner, we can think of it as another service that exposes an API and communicates with the “standard container” via API, too.
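For what it’s worth, the first option (volume injection) usually comes down to a bind mount in Compose. A minimal sketch, where the service name, image name, and paths are all made up for illustration:

```yaml
# docker-compose.yml sketch; names and paths are hypothetical
services:
  platform:
    image: standard_platform:latest
    volumes:
      # Mount the client's custom Symfony code read-only into the app tree;
      # this is what makes it fragile across platform updates
      - ./client-custom/src:/var/www/app/src/ClientCustom:ro
```

The `:ro` flag at least prevents the platform from mutating the client’s code, but it does nothing to protect against the directory layout changing between platform versions.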
I prefer the third solution, but to make it work I not only have to develop the “custom container” but also modify some aspects of the “standard container”. So, not so easy.
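As an illustration of that third option, the wiring could look like this in Compose. Service names, the environment variable, and the port are hypothetical; the real work is defining the API contract between the two services:

```yaml
# Sketch: the "standard" platform calls the client's "custom" service over HTTP
services:
  standard:
    image: standard_platform:latest
    environment:
      # The standard container only knows a URL, never the client's code
      CUSTOM_API_URL: http://custom:8080
  custom:
    # Built and owned by the client; any language, any stack
    image: client_custom:latest
    expose:
      - "8080"
```

The appeal is that the standard image ships untouched and the client integration is swapped by configuration alone.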
These are my thoughts; I hope they can help you!
Isn’t the solution to this to build new images? (I like proposed option number 3, btw.)
You currently have an image, let’s call it gold_image:latest. For each client you build a new image from a Dockerfile along these lines:

FROM gold_image:latest
COPY custom_code.py /custom_code_dir/
RUN python /custom_code_dir/custom_code.py
Then create the image using:

docker build -f /path/to/Dockerfile -t gold_image/client:latest .
This means you can spin up a replica of your client’s environment quickly, at any time, to replicate and test issues, but you haven’t touched your gold image. When your gold image or your client’s code is updated, you rebuild this image and you have an up-to-date environment.
The content of my Dockerfile example was just to help illustrate what I was trying to say; obviously I don’t know anything about your environment or any of the code. I would suggest that the client’s code is stored in a version-control system like Git and that your Dockerfile retrieves the latest version from there.
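That could look something like the sketch below. The repository URL, the script name, and the assumption that the base image already has git installed are all hypothetical:

```dockerfile
# Hypothetical per-client Dockerfile; URL and paths are placeholders
FROM gold_image:latest
# Fetch the latest client code from version control at build time
RUN git clone --depth 1 https://git.example.com/client/custom-code.git /custom_code_dir
RUN python /custom_code_dir/custom_code.py
```

Pinning a tag or commit instead of cloning the default branch would make the resulting client image reproducible.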
If you’re going to be creating many images, one per client, then I would also suggest running your own container registry.
Your proposal is a smart solution for sure, but I think there are some pros and cons:
Pros:
- When I build a client image (assuming I have a good CI/CD pipeline), I’m sure my images are well formed and ready for production.
- The client code can access all functions and libraries of gold_image, which helps when building the client-specific part.

Cons:
- Every time I build a new gold_image, I have to rebuild all the client images, and possibly version them. Imagine you have 100 clients; it can get complicated.
- The client code will be coupled with your standard code, so you don’t fully follow the “loose coupling” principle. But maybe this depends on the code, and we haven’t discussed a specific code base or architecture.
In my scenario, the first con is very relevant: I have to deploy to 100 clients on-premise, so this is not simple.
This is the main reason why, in my case, I think I can’t follow this solution.
On the other hand, my third solution (the one you say you like) is more flexible for me, but I have one big question: how loosely coupled is my code, really? Maybe I’ll find that some pieces of it are not so easy to separate from the gold_image.
At the moment I’m not fully able to answer this question.
Thanks for the response @pilade.
You mentioned a CI/CD pipeline; do you already have one? For me, I would have my CI/CD application trigger the image rebuilds on detecting that a new gold_image has been published, and I would try to keep my versioning tied to the versioning of gold_image.
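The rebuild fan-out itself is easy to script. A dry-run sketch that only prints the build commands, assuming each client Dockerfile takes its base image as a build arg; the client names, version tag, and directory layout are invented:

```shell
#!/bin/sh
# Print one docker build command per client, pinned to a gold_image tag.
# A real pipeline would read the client list from configuration and
# actually execute the commands instead of echoing them.
GOLD_TAG="1.4.0"

build_cmds() {
  for client in acme globex initech; do
    # Each client image is versioned in lockstep with gold_image
    echo "docker build --build-arg BASE=gold_image:${GOLD_TAG}" \
         "-f clients/${client}/Dockerfile" \
         "-t gold_image/${client}:${GOLD_TAG} clients/${client}"
  done
}

build_cmds
```

Tying the client tag to the gold_image tag, as above, is what keeps “which clients are on which platform version” answerable at a glance.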
I’m not sure these images have any bearing on whether the application is loosely or tightly coupled. It’s just a consistent test environment. My understanding is that the coupling is referring to whether or not changes in the code of one application impact the other application. Whether you deploy your gold image and manually download the client code, or automate the process, does not affect the code architecture.
I think this is probably a case of preference and the old adage “if the only tool you have is a hammer, every problem looks like a nail”. Are you a developer, by any chance? It sounds like the tools you are most familiar with include PHP Symfony, and that your go-to ‘tool’ is to write a custom application. I come from a SysAdmin background, so my go-to is building out the environment using existing tools.
I’m not saying either approach is right or wrong; when it comes to something you have to maintain going forward, something you are familiar with and can easily troubleshoot is the way to go.
Yes, I have a CI/CD pipeline, and yes, I’m a developer…
The last thing you said is the most relevant: the goal has to be an easily maintainable and deployable system.
For that, we have to look at the specific application scenario; it’s not possible to make the choice in an absolute sense.
So I just have to look at my scenario, choose between your solution and my third solution, and implement it!