Same code base (single file) for multiple containers (production, staging, development)

Hello everyone.

I have an issue and I want to know how I can solve it.

I want to have a single source code base and run multiple containers based on that single code base,
e.g. staging, development, production.

I will be making changes on my local machine, and when I am done testing and satisfied, I want to choose which environment to deploy to.

E.g. I only want to deploy to the staging server, not to production.

E.g. if I make changes to my source code files, it should not affect my containers on production.

How can I achieve that with a single source code base for my multiple running containers in different environments?


I am currently trying this out on my local machine.

I have a source file named “index.php”

I have 3 containers (php:7.2-apache).

They all work on the same source file (index.php), which is physically located on my host machine.

I mounted volumes like this in my 3 containers:

./src:/var/www/html

I have no issue with ports and volumes.
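
Roughly, my docker-compose setup looks like this (the service names and host ports are just example values):

version: "3"
services:
  dev:
    image: php:7.2-apache
    ports:
      - "8081:80"
    volumes:
      - ./src:/var/www/html
  staging:
    image: php:7.2-apache
    ports:
      - "8082:80"
    volumes:
      - ./src:/var/www/html
  production:
    image: php:7.2-apache
    ports:
      - "8083:80"
    volumes:
      - ./src:/var/www/html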

My issue is: I want to make changes to my source file, but I don't want the changes to be applied to 1 of the 3 containers,
considering that one of these 3 containers is on production.

How can I achieve that?

Your approach is not stage-aware. What you are doing might be acceptable for development, but not as soon as you leave local development.

Instead of mapping host folders into containers, start creating images: copy a specific version of your code base into the image, build the image and tag it appropriately. Then run your dev, stage and production containers based on the differently tagged images…
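
As a rough sketch (the image name, version tag and host ports are just placeholders I made up), the Dockerfile for your php:7.2-apache setup could look like this:

FROM php:7.2-apache
# bake a specific version of the code into the image instead of bind-mounting it
COPY ./src /var/www/html

Then build, tag and run per environment:

docker build -t myapp:1.0.0 .
docker run -d --name staging -p 8082:80 myapp:1.0.0
# once you are satisfied on staging, run the exact same tested image on production
docker run -d --name production -p 8083:80 myapp:1.0.0

Your local development setup can keep the bind mount; only staging and production run from the baked, tagged images.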

Thanks for your reply.

Even if my code base is 15 GB, for example? I would copy these 15 GB into my image (using a Dockerfile)? Is that good practice? Won't my image be a bit heavy?

I am new to Docker.

15 GB seems like an awful lot?
I have no experience with images of that size. The size kind of looks like there is a lot of potential for optimization and separation of concerns…

I have a few questions.
What if the code size is 1 GB? I mean, that is totally possible with public images and lots of public pages and other content.
Will Docker copy all the source code each time, even if it's just a small textual change somewhere?

Even if you perform the COPY action at the bottom of your Dockerfile, the whole layer will be changed.

I am coming from a Java Enterprise background, where compiled binaries of the code are shipped within the image, but user data/state is typically stored outside the application in a database. The application binaries usually make up a few dozen megabytes.

For me, images are a point-in-time copy of the sources or compiled binaries (and all their runtime dependencies and entrypoint scripts). Though, your situation seems way different and less clear.

1 gigabyte still sounds like a lot… Are you by any chance counting all the metadata, branches and tags into the sum? Typically you want to add the content of a single branch or tag, without any metadata.

Actually we are working with PHP, and we have many resources and pages with complex script libraries,
which include animations, HD images, videos etc. All of that together exceeds 1 GB, so my concern is how I can manage my project with Docker here.

I want to run local, development, staging and production environments.
What is the best possible approach for this scenario?

PHP/websites are unknown territory for me and the picture is still ambiguous. I am afraid the best fit heavily depends on your architecture and use case.

Though, from an architectural perspective it seems odd to mix code and content. Usually, code or compiled binaries (the part that is identical for each customer) belong in the image, while content (which is different per customer) is usually held externally (like in a database, file share, CDN or a simple webserver for static content).
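
Just as a rough sketch of what I mean (the image and folder names are made up), in a compose file that separation could look like this:

services:
  app:
    image: mycompany/myapp:1.4.2   # code only, built and tagged from your Dockerfile
    ports:
      - "8080:80"
  assets:
    image: nginx:alpine            # simple webserver for the heavy static content
    ports:
      - "8081:80"
    volumes:
      - ./assets:/usr/share/nginx/html:ro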

As of now, are we on the same page that Docker will always push all of the code every time, even if it's 1 GB or more?
That would mean it doesn't work like Git, where we push only those files which have changes.

While Git creates the change delta based on a diff per file, Docker images can only leverage the build cache, which re-uses existing parent image layers. If a change occurs in a layer, this layer and all follow-up layers are part of the change delta. You could put 999 MB worth of static content in its own layer (having its own COPY instruction) and 1 MB of changing content in another layer (having another COPY instruction), then just modify files in the 1 MB of changing content - as a result, only the changed 1 MB layer and all layers following it would be the change set.

Does that make sense?

Right, it does make sense. From my understanding of your comment, Docker only copies those files which have changed.
If that's the case, then it's fine.

Well, it depends. Let's assume you have a Dockerfile like this:

FROM {insert baseimage here}
ARG ...
ENV ...
RUN ...
ENTRYPOINT ...
COPY ./static /static
COPY ./dynamic /dynamic

If just a single file is changed in the ./dynamic folder and you build a new image, all image layers up to that point will be re-used, but the image layer with the content of ./dynamic will be replaced with a new layer. If files change in ./static, then the image layer containing the static files will be replaced, and the image layer with the content of ./dynamic as well. If you change the FROM line, then all layers will be re-created.

Every layer after the first change will be replaced. If you design your image to have the “frequently changing” parts at the bottom of your Dockerfile, the “moving” parts will be reduced.
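
For a typical PHP project, that ordering could look roughly like this (the folder names and the installed extension are just assumptions on my side):

FROM php:7.2-apache
# rarely changes: additional PHP extensions / system packages
RUN docker-php-ext-install pdo_mysql
# changes occasionally: large static assets in their own layer
COPY ./assets /var/www/html/assets
# changes most often: the application code, kept last so that a typical
# code change only invalidates this final layer
COPY ./app /var/www/html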