Where do you keep your source code?

I’m working on setting up an application using nginx, nodejs, and mongodb. I’ve got a docker-compose.yml file working to deploy this, but I’ve run into a slight design problem.

My original idea was to have five containers: nginx, node, mongodb, src, and mongodb-data. The src container would hold the code tree, exposed as a volume shared between nginx and node, since it contains both the HTML files for the front end and the JS files for the backend.
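
Roughly, the compose file looks like this (a trimmed-down sketch; the build paths and mount points are illustrative, not my exact setup):

```yaml
src:
  build: ./src        # image that contains only the code tree
  volumes:
    - /app            # exported so other containers can mount it

node:
  build: ./node
  volumes_from:
    - src             # backend JS comes from the src container

nginx:
  build: ./nginx
  ports:
    - "80:80"
  volumes_from:
    - src             # front-end HTML comes from the same tree

mongodb-data:
  image: mongo
  command: "true"     # data-only container, exits immediately
  volumes:
    - /data/db

mongodb:
  image: mongo
  volumes_from:
    - mongodb-data
```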

When there was an update to the code, I was going to rebuild the src image and restart whatever was necessary. However, as you can imagine, since the code directory is a volume (so it can be shared), the new image (with updated source code) doesn't update the volume. And I can't just delete all the volumes, because I don't want to lose the database data volume.
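
Concretely, the update workflow I had in mind was something like this (commands shown for illustration):

```bash
# Rebuild the code-only image and recreate its container...
docker-compose build src
docker-compose up -d src

# ...but the other containers still see the OLD code: Docker copies an
# image's files into a volume only when the volume is first created,
# and docker-compose preserves volumes when it recreates containers.
```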

So my question is, how do people normally handle this?

For this particular project it would be easy to put nodejs code in the node image and anything needed by nginx in the nginx image. But, what about big projects that aren’t easily split apart?

Do you just put the code in the nginx image AND the node image? Do you use your CI tool to manually delete the src code volume? Is there a way to share files between containers that I haven't found yet that doesn't use volumes? I'd rather not have to mount a specific directory from the host machine; I want the source code to be in an image to make it easy to deploy on different servers.

I hope I explained this in a way that makes sense.

Thank you for your responses.


Did you find an answer elsewhere? I’m also interested in this.

Kevin,

Sorry for the late reply. I was on vacation.

I was not able to come up with a good solution for this so…

WARNING: Docker purists please look away!

I ended up running cron, nginx, and php-fpm all in the same container. I know it’s not the “docker way,” but it was the only realistic solution I could come up with.

I’ll have example files on github in the coming days.
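
Until then, the general shape is something like this (a sketch assuming supervisord as the process manager; package names and paths are illustrative):

```dockerfile
FROM debian:stable-slim

# Everything in one image: web server, PHP, cron, and a process manager.
RUN apt-get update && apt-get install -y \
        nginx php-fpm cron supervisor \
    && rm -rf /var/lib/apt/lists/*

# Bake the application code straight into the image -- no shared volume.
COPY src/ /var/www/app/
COPY nginx.conf /etc/nginx/nginx.conf

# supervisord runs as PID 1 and keeps all three services alive.
COPY supervisord.conf /etc/supervisor/conf.d/app.conf
CMD ["/usr/bin/supervisord", "-n"]
```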

Hello there! I too have come across this scenario and wish to share some of what I learned along the way. My philosophy in a docker environment is to minimize hard dependencies between containers and (especially) volumes.

It can get very complex when you decide to make a container per service, plus a volume for the source code repository. For example: a container for nginx, a container for php-fpm, and a shared volume with the source code. This works fine in a small environment, but soon enough, as you grow, you will find that when you introduce more servers to the mix in an orchestrated environment (like Swarm), the hard dependency on a host volume does not scale well. A storage driver might solve this, but that too comes with added complexity depending on how it works, and the added layer of networking could slow your application down considerably.

I tend to prefer “containerizing” a service as a whole (minus the persistence layer, caching, and dynamic assets). I think of it as a black box that can accept and respond to requests: my container has Nginx, PHP-FPM, and the source code baked right into the same image. This lets me version the container along with my app, and it is generally more standalone and easy to swap out (or scale up) with a newer or older version of the same app, without worrying about another dependency being out of sync or started in the wrong order. Above all, performance is also great, because you no longer depend on the host filesystem or the network to serve your source code.
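
As a rough illustration (the registry name and tags are made up), a release then becomes building and shipping a single tagged image:

```bash
# Build one self-contained image per release of the app...
docker build -t registry.example.com/myapp:1.4.2 .
docker push registry.example.com/myapp:1.4.2

# ...and rolling back is just running the previous tag again.
docker run -d -p 80:80 registry.example.com/myapp:1.4.1
```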

The only gotcha I’ve found with this method is that the container usually needs an entrypoint.sh script to bootstrap your application. That means your spawned services run as anything but PID 1, and by default the shell script won’t forward signals to them, which can leave your container stuck in a zombie state even after you try to force it to stop. You can resolve this with a bit of extra bash, handling the signal appropriately with the trap command.
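
For example, something along these lines (a minimal sketch; the exact service commands will vary with your base image):

```bash
#!/bin/bash

# This script runs as PID 1, so it has to catch signals itself;
# otherwise `docker stop` waits out its timeout and SIGKILLs the container.
shutdown() {
  kill -TERM "$nginx_pid" "$fpm_pid" 2>/dev/null
  wait
  exit 0
}
trap shutdown SIGTERM SIGINT

# Launch the services in the background and remember their PIDs.
nginx -g 'daemon off;' &
nginx_pid=$!

php-fpm --nodaemonize &
fpm_pid=$!

# Block here; the trap fires while we wait and shuts everything down.
wait
```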