Why exclude node_modules from Docker volume?

Why do most tutorials for running docker-compose with a Node.js container have you create an anonymous volume for node_modules instead of just bind-mounting it to your host directory along with the rest of your source? Is there anything wrong with having all your node_modules in a volume? Does it slow Docker down? I haven’t experienced this. Note: this is for development purposes.
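For reference, the pattern in question usually looks something like this in a docker-compose.yml (service and image names here are illustrative):

```yaml
services:
  app:
    image: node:18          # example image tag
    working_dir: /app
    volumes:
      - ./:/app             # bind mount: project source from the host
      - /app/node_modules   # anonymous volume: masks node_modules inside the bind mount
    command: npm start
```

The anonymous volume shadows /app/node_modules, so the container keeps the dependencies installed at image build time instead of whatever is (or isn't) in the host's node_modules folder.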

Docker Volumes abstract the real backend system that stores the data. It can be a local bind, nfs, cifs, or whatever Volume Plugin you installed. Especially in a multi-node environment, making the data available on all nodes can become a challenge.

Another thing is the default policy of copying existing data from the volume's target path inside the container (data already present in the image) into the volume, so that both the old and the new data are visible. A bind mount, by contrast, always replaces the whole folder, so the image's original data becomes inaccessible.

Thank you for your response. However, I don’t think you answered my question. You explained what docker volumes are and why they don’t work well in production. Could you explain why in development it’s not common practice to include node_modules in the same volume as all other source code? I’ve been developing with node_modules in my volumes and everything has been fine so far.

If the image requires existing data in the target container folder to be preserved, that would explain why they use named volumes. Moreover, using named volumes makes it possible to illustrate an example that works out of the box for everyone, without having to explain anything about volumes or adjust the host-side part of the mount.

Honestly, for a developer machine, I like neither the named-volume nor the bind-mount approach for making source code available in the container. I would prefer rebuilding the image whenever I want to try changes: if the Dockerfile is written cleverly, you can leverage the build cache to keep build times down. The next best option is a bind mount. For this use case, named volumes don’t make sense to me.
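As a sketch of what "written cleverly" can mean for a Node.js project (file names are the npm defaults; adjust to your setup), the key is to copy the dependency manifests before the rest of the source, so the dependency-install layer stays cached as long as the dependencies don't change:

```dockerfile
FROM node:18

WORKDIR /app

# Copy only the dependency manifests first, so this layer (and the
# npm ci below) stays cached while only source files change.
COPY package.json package-lock.json ./
RUN npm ci

# Copy the rest of the source; editing source invalidates only this layer.
COPY . .

CMD ["npm", "start"]
```

With this layout, a rebuild after a source-only change skips the npm ci step entirely and only re-copies the source.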


Thank you for your response! In the Docker Docs I found this under “Good use cases for bind mounts”:

Sharing source code or build artifacts between a development environment on the Docker host and a container. For instance, you may mount a Maven target/ directory into a container, and each time you build the Maven project on the Docker host, the container gets access to the rebuilt artifacts.

Here is the link: https://docs.docker.com/storage/

In the end, it boils down to a combination of taste and what works best for you. :)

When it comes to Maven:
I ended up rebuilding the image using the latest binaries from the project's target folder, as the outcome is always reliable and reproducible. If you use a bind mount and let the application server trigger a redeployment of a running service, every now and then you will spend time analysing why the deployed application doesn't work as expected.

Agreed! Thanks for your explanation.


I’d also like to mention that, after some experience, I think it’s faster to use a named volume for node_modules when you’re developing inside Linux containers on a Windows machine. In my experience, npm run serve (vue-cli-service serve) took 10x less time to start. Hope this helps!
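For anyone who wants to try this, a named volume for node_modules can be declared like so (service and volume names are illustrative):

```yaml
services:
  app:
    image: node:18
    working_dir: /app
    volumes:
      - ./:/app                           # bind mount for source code
      - node_modules:/app/node_modules    # named volume for dependencies
    command: npm run serve

volumes:
  node_modules:
```

The likely reason for the speedup: on Docker Desktop for Windows, a named volume lives inside the Linux VM's native filesystem, so npm avoids the slow Windows-to-VM file-sharing layer that a bind-mounted node_modules would go through.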