Why does the node_modules folder require a volume in order to be present in the container?

Hello all! I’ve recently spent some time learning Docker and have been attempting to write my own container images. I have a few questions I would appreciate some thoughts on so I can learn more.

Please forgive me if some aspects are obvious; I’m really new to this, but I am very keen to learn!

My interest in Docker containers at the moment lies mostly in the fact that when I work on a project with someone else, the development environment is the same, and introducing new developers to a project can become slightly easier since they wouldn’t have to install any dependencies or run a number of commands whose purpose they may not understand. After reading some of the documentation and watching some videos on how to build a Docker image, I started trying to implement it in one of my projects, which requires Node to work.

It does work fine (the project could be accessed here) but there are some aspects I do not fully understand. For instance, if I add an npm install step to a Dockerfile, it runs when the image is built. Does that mean that the node_modules folder is stored inside the image?

I assumed it does, since npm install seems to run when the image is built. Is this correct? The reason I’m not sure what really happens is that I ended up having to create a volume for node_modules in order to make use of the dependencies, and I’m not quite sure why: the folder existed in the container, but it was empty.
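One way I thought of checking this (a rough sketch, untested; the image tag is just an example and the path is the one from my files below) is to build the image and list the folder in a throwaway container with no volumes attached:

# Build the image from the Dockerfile below and list node_modules
# without mounting any volumes (image tag is only a placeholder)
docker build -t node-app .
docker run --rm node-app ls /var/projects/node-app/node_modules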

I’m leaving my files below for anyone to give feedback; please feel free to access the project. I added these files to the project in case you would like to see them in a more contextualized manner. Is there a better way to create an image for a project that needs Node dependencies? Here are the files:

Dockerfile:

# This image makes use of a Node image running on Linux Alpine
FROM node:15-alpine3.13

# A work directory is required to be used by npm install
WORKDIR /var/projects/node-app

# Copy all project files to the container
# Files in the location of this file are copied to WORKDIR in the container
COPY . .

# Makes sure npm is up to date otherwise install dependencies attempts will fail
RUN npm install -g npm

# Install dependencies
RUN npm install

# The process this container should run
CMD ["npm", "start"]

Initial docker-compose.yml (it made use of a .env file for the path, but I assume that aspect is not relevant):

version: "3"
services:
  proton:
    build: ./ # The path to node-app Dockerfile
    volumes:
      # Allows changes made to project directory to be accessed by the container and persist 
      - ${PROJECT_PATH}:/var/projects/node-app

The final docker-compose file (this works, but why is a volume necessary for node_modules here?):

version: "3"
services:
  proton:
    build: ./ # The path to node-app Dockerfile
    volumes:
      # Allows changes made to project directory to be accessed by the container and persist 
      - .:/var/projects/node-app
      # A volume dedicated to node modules.
      - /var/projects/node-app/node_modules
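For reference, I bring all of this up with the usual Compose command, nothing special:

# Rebuild the image and start the proton service defined above
docker-compose up --build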

Thank you so much for taking the time to read this! :pray:

UPDATE: The struck-through content was the links to the project on GitHub. They were flagged as spam, so I have removed them. Sorry for the inconvenience.


Your Dockerfile is under the protonmail-theme/docker folder; move it to the same folder as your docker-compose file. npm isn’t installing for you because the WORKDIR you are switching to doesn’t exist. Docker doesn’t create one for you and fails silently.

FROM node:15-alpine3.13
COPY . /var/projects/protonmail-themes
WORKDIR /var/projects/protonmail-themes
RUN npm install -g npm && \
    npm install
CMD ["npm", "start"]

Also, you are copying all of your project files, including the docker and docker-compose files, to the container, which is wrong. You should only copy the files necessary to run the application.
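For instance, a minimal .dockerignore could look something like this (the entries are only an illustration; adjust them to the files your project actually has):

# Keep dependencies, VCS data and Docker files out of the build context
node_modules
.git
docker-compose.yml
docker/
*.md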

Hi David, thank you so much for taking the time to share your thoughts. I assume you had a look at the repository which has since been updated and now has a slightly different setup.

The context of the build is set in docker-compose.yml and it is currently working as expected, building the same way as if node.dockerfile were in the same folder as the docker-compose.yml file, like so:

version: "3"
services:
  proton:
    # Building a custom image described in a docker file.
    build:
      # Setting a context and dockerfile paths allows for Dockerfiles to be stored away in a sub-directory.
      context: . # Context of build, this is where the project files are stored.
      dockerfile: ./docker/node.dockerfile # The path to protonmail-themes Dockerfile and name of the dockerfile to be built
    # Setting an image name avoids the same image being built multiple times.
    image: csalmeida/protonmail-themes:latest
    # Specified the name of the container, commented out to avoid name conflicts when running multiple instances of the image.
    # container_name: protonmail_themes
    restart: always
    volumes:
      # Allows changes made to project directory to be accessed by the container and persist.
      - .:/var/projects/protonmail-themes 
      # Adds a volume to store node dependencies.
      - /var/projects/protonmail-themes/node_modules

One way to know this is working: if package.json were not present in the build context, npm install would have nothing to read dependencies from and the install would fail. Since it doesn’t, the build context is indeed the root of the project even though the Dockerfile is stored in a sub-directory. Essentially, there seems to be no issue with this aspect of the configuration.

However, I did give your suggestion of moving the COPY instruction before the WORKDIR instruction a try, just to confirm I understood what you said. You are saying that this order would result in /var/projects/protonmail-themes being created before the working directory is set, thus preventing the node_modules folder from going missing?

I’m not sure I understand completely, but I believe that WORKDIR can come before COPY, and if the directory specified does not exist it will be created. I’ve made the following changes to node.dockerfile per your suggestion (please keep in mind that the node_modules volume was also removed, since I assume it should no longer be required):

# This image makes use of a Node image running on Linux Alpine
FROM node:15-alpine3.13

# Copy all project files to the container
# Files in the location of this file are copied to WORKDIR in the container
COPY . /var/projects/protonmail-themes

# A work directory is required to be used by npm install
WORKDIR /var/projects/protonmail-themes

# Makes sure npm is up to date otherwise install dependencies attempts will fail
# Install dependencies
RUN npm install -g npm && \
    npm install

# The process this container should run
CMD ["npm", "start"]

The image builds successfully but the process fails: node_modules ends up empty, which in turn keeps npm start from running.

But what if the issue is, as you mentioned, the node.dockerfile not being in the same folder as docker-compose.yml? I gave it another go by moving it to the root of the project (the same directory as docker-compose.yml) and renaming it Dockerfile.

Contents of the Dockerfile:

# This image makes use of a Node image running on Linux Alpine
FROM node:15-alpine3.13

# Copy all project files to the container
# Files in the location of this file are copied to WORKDIR in the container
COPY . /var/projects/protonmail-themes

# A work directory is required to be used by npm install
WORKDIR /var/projects/protonmail-themes

# Makes sure npm is up to date otherwise install dependencies attempts will fail
# Install dependencies
RUN npm install -g npm && \
    npm install

# The process this container should run
CMD ["npm", "start"]

The docker-compose.yml, again with no volume for node_modules:

version: "3"
services:
  proton:
    # Building a custom image described in a docker file.
    build: ./
    # Setting an image name avoids the same image being built multiple times.
    image: csalmeida/protonmail-themes:latest
    # Specified the name of the container, commented out to avoid name conflicts when running multiple instances of the image.
    # container_name: protonmail_themes
    restart: always
    volumes:
      # Allows changes made to project directory to be accessed by the container and persist.
      - .:/var/projects/protonmail-themes 

This builds successfully as well, but with the same outcome: an empty node_modules, which is not what I want.

I might be misunderstanding your point, but if changing the order of COPY and WORKDIR was meant to create the project folder and therefore add node_modules correctly, it doesn’t seem to do the trick.

My questions still remain: when npm install is run during an image build, does it add node_modules to the image? If so, why does it end up empty when a container is run, unless a volume is created for it?

I understand that I might be copying unnecessary files over to the container; I will add them to .dockerignore. Thanks for pointing this out!

I guess I’ve never tried otherwise, but for me WORKDIR before COPY works just fine.

While trying to solve your original problem, why not also try to speed up the process? First, only copy package.json and run npm install. This way, as long as package.json does not change, future runs will use the cached layers, speeding things up a lot! Next, copy your own source code. It seems your own code does not need a build step; if it does, then you may also want to use a temporary build image in which to get the full-blown node_modules and do the build, and copy only the build results into a new and much smaller target image.

Code I use for Yarn; it seems you do not have a build step, so you would not need a temporary build image either:

# Temporary (partially cached) build image
FROM node:lts-alpine as build
WORKDIR /app
COPY package.json ./
COPY yarn.lock ./
RUN yarn install

# To take advantage of caching until package.json or yarn.lock changes: only now copy
# all else into the build image, and build.
# See http://bitjudo.com/blog/2014/03/13/building-efficient-dockerfiles-node-dot-js/
COPY . .
RUN yarn build

# Final target image
FROM nginx:stable-alpine
COPY --from=build /app/dist /wherever/is/your/build/result

CMD ...

Though not about Proton, see Building Efficient Dockerfiles - Node.js - bitJudo and Dockerize Vue.js App — Vue.js for examples.
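If you stay on npm rather than Yarn, the same caching idea would look roughly like this (a sketch, untested against your project; it assumes package-lock.json exists and npm start is still your entrypoint):

FROM node:15-alpine3.13
WORKDIR /var/projects/protonmail-themes
# Copy only the manifests first, so this layer stays cached until they change
COPY package.json package-lock.json ./
RUN npm install
# Only now copy the rest of the source; edits here no longer invalidate the install layer
COPY . .
CMD ["npm", "start"]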

Hi Arjan! Absolutely, the Dockerfile does work with no issues on that front; however, the node_modules folder would still be empty without a volume.

Thank you for the suggestions on speeding it up; I will keep them in mind for when I next edit it. :pray:
Additionally, I wasn’t aware that it was possible to have multiple FROM statements in a Dockerfile; this could be useful in the future.

The example files are really useful to learn more as well, appreciate it.

Sorry, should have made that explicit: not for me. The example above works fine without configuring any volumes.

How are you checking if there’s anything in node_modules? Something like the following may help (untested):

FROM node:lts-alpine as build
WORKDIR /app
COPY package.json ./
COPY npm.lock ./
RUN npm install
RUN pwd
RUN ls -la
RUN ls -la node-modules

As an aside, from the documentation, emphasis mine:

WORKDIR

WORKDIR /path/to/workdir

The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn’t exist, it will be created even if it’s not used in any subsequent Dockerfile instruction.

The WORKDIR instruction can be used multiple times in a Dockerfile. If a relative path is provided, it will be relative to the path of the previous WORKDIR instruction.
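So a bare-bones way to see that behaviour for yourself (a sketch; any image would do) is:

FROM node:lts-alpine
# This directory does not exist in the base image; WORKDIR creates it
WORKDIR /does/not/exist/yet
RUN pwd && ls -la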

Oh well, reading your first post reveals a bit:

Yes.

Any chance another container attached to the same volume before the very container you’re showing above did? On that, “How do named volumes work in docker?” on Stack Overflow explains:

In case the volume is empty and both containers have data in the target directory the first container to be run will mount its data into the volume and the other container will see that data (and not its own).

See also Tips for using bind mounts or volumes in the documentation:

Tips for using bind mounts or volumes

If you use either bind mounts or volumes, keep the following in mind:

  • If you mount an empty volume into a directory in the container in which files or directories exist, these files or directories are propagated (copied) into the volume. Similarly, if you start a container and specify a volume which does not already exist, an empty volume is created for you. This is a good way to pre-populate data that another container needs.
  • If you mount a bind mount or non-empty volume into a directory in the container in which some files or directories exist, these files or directories are obscured by the mount, just as if you saved files into /mnt on a Linux host and then mounted a USB drive into /mnt . The contents of /mnt would be obscured by the contents of the USB drive until the USB drive were unmounted. The obscured files are not removed or altered, but are not accessible while the bind mount or volume is mounted.
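That second bullet is probably the interesting one here; you can see it without Compose at all (an untested sketch, using the image name from your compose file):

# node_modules is present inside the image itself...
docker run --rm csalmeida/protonmail-themes:latest ls node_modules
# ...but bind-mounting the host project over the same path hides it,
# since the host folder (with no node_modules) now sits on top of it
docker run --rm -v "$PWD":/var/projects/protonmail-themes csalmeida/protonmail-themes:latest ls node_modules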

Thanks for taking the time to work this out with me, Arjan! Since the thread is growing, I would like to reiterate that the setup I have does work; I just want to know why it works the way it does.

Awesome, I hope to get to that point! I’ve built a Docker image from your snippet and node_modules wasn’t a directory (logs further down). I wonder what I’m doing wrong here. For reference, in previous and current examples I’ve always removed all existing images and containers to make sure a cached version wouldn’t be used.
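For completeness, the clean-up I run between attempts is along these lines (the image name is the one from my compose file):

# Stop and remove the compose containers, then remove the built image
docker-compose down
docker image rm csalmeida/protonmail-themes:latest
# Optionally clear the build cache too
docker builder prune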

The snippet does make sense: the working directory is set to /app and package.json is copied into it. Dependencies install with no issues, and given that /app is the current directory, a node_modules folder should be present at that point; this is where it fails.

Here’s the Dockerfile I used with your snippet (aside from one line, explained in the comment):

FROM node:lts-alpine as build
WORKDIR /app
COPY package.json ./
# Changed this line since I didn't have an npm.lock file
COPY package-lock.json ./
RUN npm install
RUN pwd
RUN ls -la
RUN ls -la node-modules

Prints the following logs:

 => [internal] load build definition from Dockerfile                                                               0.0s
 => => transferring dockerfile: 193B                                                                               0.0s
 => [internal] load .dockerignore                                                                                  0.0s
 => => transferring context: 35B                                                                                   0.0s
 => [internal] load metadata for docker.io/library/node:lts-alpine                                                10.9s
 => [internal] load build context                                                                                  0.0s
 => => transferring context: 118.99kB                                                                              0.0s
 => [1/8] FROM docker.io/library/node:lts-alpine@sha256:f07ead757c93bc5e9e79978075217851d45a5d8e5c48eaf823e7f12d  11.7s
 => => resolve docker.io/library/node:lts-alpine@sha256:f07ead757c93bc5e9e79978075217851d45a5d8e5c48eaf823e7f12d9  0.0s
 => => sha256:8e69714aa82bb6aa059e846100f81e90dc347bad154290bcab99a2a5be12d0f3 6.73kB / 6.73kB                     0.0s
 => => sha256:ddad3d7c1e96adf9153f8921a7c9790f880a390163df453be1566e9ef0d546e0 2.82MB / 2.82MB                     1.0s
 => => sha256:f845e0f7d73a90d440ccbf7aec29d17c9c70b837da76a1dc73a27819dcd5354e 36.12MB / 36.12MB                   9.0s
 => => sha256:47d471c4d8201b56237fe500a80a75a925d339279ce3c7964978ea47488c4948 2.24MB / 2.24MB                     0.9s
 => => sha256:f07ead757c93bc5e9e79978075217851d45a5d8e5c48eaf823e7f12d9bbc1d3c 1.43kB / 1.43kB                     0.0s
 => => sha256:b4cca2f95c701d632ffd39258f9ec9ee9fb13c8cc207f1da02eb990c98395ac1 1.16kB / 1.16kB                     0.0s
 => => sha256:1a88008f9c83ec74fe04a218b59c612d6ce3590244b7897889c013f335cf5b74 279B / 279B                         1.1s
 => => extracting sha256:ddad3d7c1e96adf9153f8921a7c9790f880a390163df453be1566e9ef0d546e0                          0.2s
 => => extracting sha256:f845e0f7d73a90d440ccbf7aec29d17c9c70b837da76a1dc73a27819dcd5354e                          2.0s
 => => extracting sha256:47d471c4d8201b56237fe500a80a75a925d339279ce3c7964978ea47488c4948                          0.1s
 => => extracting sha256:1a88008f9c83ec74fe04a218b59c612d6ce3590244b7897889c013f335cf5b74                          0.0s
 => [2/8] WORKDIR /app                                                                                             0.6s
 => [3/8] COPY package.json ./                                                                                     0.0s
 => [4/8] COPY package-lock.json ./                                                                                0.0s
 => [5/8] RUN npm install                                                                                          8.9s
 => [6/8] RUN pwd                                                                                                  0.5s
 => [7/8] RUN ls -la                                                                                               0.4s
 => ERROR [8/8] RUN ls -la node-modules                                                                            0.4s
------
 > [8/8] RUN ls -la node-modules:
12 0.418 ls: node-modules: No such file or directory

The WORKDIR docs make sense to me, but just to make sure I understand it, considering we have the following snippet:

WORKDIR /projects
COPY book.txt ./

This would result in book.txt being copied from the host machine directory where the Dockerfile is located to /projects in the container, is that right?

That’s cool! So that means that if I save the image, a container run from it should have the dependencies folder already, like it’s frozen in time, as long as I don’t build the image again. Is that right?

I don’t think this is the case, since I have no other containers running; I remove them just to be sure there’s no interference when experimenting.

This is interesting, however, and it could be why node_modules seems to be missing:

If you mount a bind mount or non-empty volume into a directory in the container in which some files or directories exist, these files or directories are obscured by the mount, just as if you saved files into /mnt on a Linux host and then mounted a USB drive into /mnt . The contents of /mnt would be obscured by the contents of the USB drive until the USB drive were unmounted. The obscured files are not removed or altered, but are not accessible while the bind mount or volume is mounted.

I think what might be happening is that node_modules does exist at the correct path, but because a bind mount is set up to share project files between the host and the container, node_modules ends up being obscured. This is the line that does it in docker-compose.yml:

volumes:
  # Allows changes made to project directory to be accessed by the container and persist.
  - .:/var/projects/protonmail-themes 

Removing that line and adding a CMD to list the contents of node_modules and, et voilà, the folder is there with all its contents:

[+] Building 23.8s (10/10) FINISHED
 => [internal] load build definition from node.dockerfile                  0.0s
 => => transferring dockerfile: 575B                                       0.0s
 => [internal] load .dockerignore                                          0.0s
 => => transferring context: 35B                                           0.0s
 => [internal] load metadata for docker.io/library/node:15-alpine3.13      0.9s
 => [1/5] FROM docker.io/library/node:15-alpine3.13@sha256:25c968387d0819  0.0s
 => => resolve docker.io/library/node:15-alpine3.13@sha256:25c968387d0819  0.0s
 => [internal] load build context                                          0.0s
 => => transferring context: 29.87kB                                       0.0s
 => CACHED [2/5] WORKDIR /var/projects/protonmail-themes                   0.0s
 => [3/5] COPY . .                                                         0.2s
 => [4/5] RUN npm install -g npm                                           9.8s
 => [5/5] RUN npm install                                                 11.9s
 => exporting to image                                                     0.8s
 => => exporting layers                                                    0.8s
 => => writing image sha256:9baa44cbb7319f0100f807e522ed3292bff79d003ba82  0.0s
 => => naming to docker.io/csalmeida/protonmail-themes:latest              0.0s
[+] Running 1/1
 ⠿ Container protonmail-themes_proton_1  Created                           0.1s
Attaching to proton_1
proton_1  | .
proton_1  | ..
proton_1  | .bin
proton_1  | .package-lock.json
proton_1  | @mr-hope
proton_1  | @types
proton_1  | ansi-colors
proton_1  | ansi-gray
proton_1  | ansi-regex
proton_1  | ansi-styles
proton_1  | ansi-wrap
proton_1  | anymatch
proton_1  | append-buffer
proton_1  | archy
proton_1  | arr-diff
proton_1  | arr-filter
proton_1  | arr-flatten
proton_1  | arr-map
proton_1  | arr-union
# More folders...

If that is correct, it answers my question and helps a lot, Arjan. Super grateful!

Now this leads to an additional question. In the use case I presented, I have a number of project files I would like to change once the container is up and running, so I’ve set up a bind mount to do just that.

However, those files rely on Node dependencies, and node_modules is created at build time and then obscured when the volume that makes the project files available to the container is mounted. My question then is: is my approach acceptable for my use case? As in, should I declare an additional volume to make sure node_modules is present?

- /var/projects/protonmail-themes/node_modules

Learning a lot so far, thank you, this is very insightful!

Very late reply, but I’ve stumbled upon the exact same issue, where I was mounting the whole project folder into the Docker image after it was built (one of the build steps being npm install), overwriting node_modules in the process.

Mounting an additional volume for node_modules really does keep it present.
So, in my case:

      volumes:
        - ../audioclient:/app
        - /app/node_modules

The first volume mounts the entire project; the second “protects” node_modules from being overwritten by the project files.