Docker Community Forums

Share and learn in the Docker community.

Dockerfiles build and run, but docker-compose won't start them — so the images are building correctly, but I likely goofed on a config

So I have two questions and I’d be very grateful for anyone’s assistance.

First problem: why do my images work as desired when run directly, but not when I expect the docker-compose file to run them? I believe it’s likely a tiny config issue, but in docker-compose any issue is a big issue, and if someone could help me out I’d appreciate it. The files can be found immediately following the second question.

Second problem is more a question of being professional: how would you improve my Dockerfiles and compose file, and why do you suggest what you suggest? I’m just trying to improve the quality, and that’s not an easy thing. Your assistance is greatly appreciated.

Gist link for those who prefer it:
#1 Problem.
location: ./server
Item: Dockerfile1 (server)

FROM node:12.18.4-alpine as build

RUN apk --no-cache add --virtual native-deps \
  g++ gcc libgcc libstdc++ linux-headers autoconf automake make nasm python git && \
  npm install --quiet node-gyp -g

RUN mkdir -p /src/app/node_modules && chown -R node:node /src/app
WORKDIR /src/app

COPY package*.json ./
RUN npm install

COPY . ./

FROM node:12.18.4-alpine
COPY --from=build . ./


CMD [ "npm", "run", "dev" ]

location: ./client
Item: Dockerfile2 (client)

FROM node:12.18.4-alpine

RUN mkdir -p /src/app/node_modules && chown -R node:node /src/app
WORKDIR /src/app

COPY package*.json ./
RUN npm install

COPY . ./


CMD [ "npm", "start" ]

location: ./ (root)
Item: docker-compose.yml

version: '3.8'

services:
  server:
    image: server
    container_name: server
    build:
      context: ./server
      args:
        - NODE_ENV=development
    ports:
      - "5000:5000"
    volumes:
      - .:/src/app
      - /src/app/node_modules
  client:
    image: client
    container_name: client
    build:
      context: ./client
      args:
        - NODE_ENV=development
    ports:
      - "3000:3000"
    volumes:
      - .:/src/app
      - /src/app/node_modules

Step 1: Setup
Define the application dependencies.

Create a directory for the project:

mkdir composetest
cd composetest
Create a file called app.py in your project directory and paste this in:

import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)
In this example, redis is the hostname of the redis container on the application’s network. We use the default port for Redis, 6379.

Handling transient errors

Note the way the get_hit_count function is written. This basic retry loop lets us attempt our request multiple times if the redis service is not available. This is useful at startup while the application comes online, but also makes our application more resilient if the Redis service needs to be restarted anytime during the app’s lifetime. In a cluster, this also helps handling momentary connection drops between nodes.
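The same retry pattern can be exercised without Redis; below is a minimal sketch with a stand-in flaky operation (the `retry` helper and all names here are illustrative, not part of the tutorial):

```python
import time

def retry(op, retries=5, delay=0.0):
    """Call op() until it succeeds or the retry budget is exhausted."""
    while True:
        try:
            return op()
        except ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(delay)

# A stand-in operation that fails twice before succeeding,
# mimicking a service that is still coming online.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("not ready yet")
    return "hit"

print(retry(flaky))  # -> hit (after two retried failures)
```

The loop swallows transient `ConnectionError`s and only re-raises once retries run out, exactly the behavior `get_hit_count` relies on at startup.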

Create another file called requirements.txt in your project directory and paste this in:

flask
redis

Step 2: Create a Dockerfile
In this step, you write a Dockerfile that builds a Docker image. The image contains all the dependencies the Python application requires, including Python itself.

In your project directory, create a file named Dockerfile and paste the following:

FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]
This tells Docker to:

Build an image starting with the Python 3.7 image.
Set the working directory to /code.
Set environment variables used by the flask command.
Install gcc and other dependencies.
Copy requirements.txt and install the Python dependencies.
Add metadata to the image to describe that the container is listening on port 5000.
Copy the current directory . in the project to the workdir . in the image.
Set the default command for the container to flask run.
For more information on how to write Dockerfiles, see the Docker user guide and the Dockerfile reference.

Step 3: Define services in a Compose file
Create a file called docker-compose.yml in your project directory and paste the following:

version: "3.8"
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
This Compose file defines two services: web and redis.

Web service
The web service uses an image that’s built from the Dockerfile in the current directory. It then binds the container and the host machine to the exposed port, 5000. This example service uses the default port for the Flask web server, 5000.

Redis service
The redis service uses a public Redis image pulled from the Docker Hub registry.

Step 4: Build and run your app with Compose
From your project directory, start up your application by running docker-compose up.

$ docker-compose up

Creating network "composetest_default" with the default driver
Creating composetest_web_1 …
Creating composetest_redis_1 …
Creating composetest_web_1
Creating composetest_redis_1 … done
Attaching to composetest_web_1, composetest_redis_1
web_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
redis_1 | 1:C 17 Aug 22:11:10.480 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 17 Aug 22:11:10.480 # Redis version=4.0.1, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:C 17 Aug 22:11:10.480 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
web_1 | * Restarting with stat
redis_1 | 1:M 17 Aug 22:11:10.483 * Running mode=standalone, port=6379.
redis_1 | 1:M 17 Aug 22:11:10.483 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
web_1 | * Debugger is active!
redis_1 | 1:M 17 Aug 22:11:10.483 # Server initialized
redis_1 | 1:M 17 Aug 22:11:10.483 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
web_1 | * Debugger PIN: 330-787-903
redis_1 | 1:M 17 Aug 22:11:10.483 * Ready to accept connections
Compose pulls a Redis image, builds an image for your code, and starts the services you defined. In this case, the code is statically copied into the image at build time.

Enter http://localhost:5000/ in a browser to see the application running.

If you’re using Docker natively on Linux, Docker Desktop for Mac, or Docker Desktop for Windows, then the web app should now be listening on port 5000 on your Docker daemon host. Point your web browser to http://localhost:5000 to find the Hello World message. If this doesn’t resolve, you can also try http://127.0.0.1:5000.

If you’re using Docker Machine on a Mac or Windows, use docker-machine ip MACHINE_VM to get the IP address of your Docker host. Then, open http://MACHINE_VM_IP:5000 in a browser.

You should see a message in your browser saying:

Hello World! I have been seen 1 times.

Refresh the page.

The number should increment.

Hello World! I have been seen 2 times.

Switch to another terminal window, and type docker image ls to list local images.

Listing images at this point should return redis and web.

$ docker image ls

REPOSITORY        TAG         IMAGE ID      CREATED        SIZE
composetest_web   latest      e2c21aa48cc1  4 minutes ago  93.8MB
python            3.7-alpine  84e6077c7ab6  7 days ago     82.5MB
redis             alpine      9d8fa9aa0e5b  3 weeks ago    27.5MB
You can inspect images with docker inspect &lt;tag or id&gt;.

Stop the application, either by running docker-compose down from within your project directory in the second terminal, or by hitting CTRL+C in the original terminal where you started the app.

Lewish, what does your post have to do with my query?

Ignore lewish95: it’s a bot that harasses and confuses users with usually unrelated posts.

Regarding your first question: are you aware of the differences between volumes and bind-mounts? While volumes have a copy mechanism that copies existing data from a container folder back to the volume, bind-mounts mount a host folder on top of a container folder, thus making its original content invisible.

The .:/src/app volume mapping is actually a bind-mount.
I am surprised that /src/app/node_modules is even a valid mount option. You might want to use a named volume here and do a proper volume mapping; do not forget to declare the volume in your compose.yml.
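A minimal sketch of what that named-volume mapping could look like, based on the server service from the question (the volume name server_node_modules is a placeholder):

```yaml
services:
  server:
    build:
      context: ./server
    volumes:
      - ./server:/src/app                          # bind-mount: host source shadows /src/app
      - server_node_modules:/src/app/node_modules  # named volume keeps the image's installed modules

# top-level declaration of the named volume
volumes:
  server_node_modules:
```

The named volume is populated from the container path on first use, so the node_modules installed at build time survive the bind-mount shadowing the rest of /src/app.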

Regarding your second question: you might want to introduce separation of concerns and perform the image builds within a CI/CD pipeline, using docker build ... to explicitly build tagged images, which are pushed to a container image registry. Then use the docker-compose file for pure deployment configuration purposes and reference the previously built images there. Be aware that docker-compose will not pull new images for a mutable tag once an image for that particular tag is present in the local image cache - you will need to pull explicitly to use the most recent images. Docker Swarm, on the other hand, always pulls the most recent image during container deployment.
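That split might look roughly like this; the registry host, image name and tag are placeholders, not values from this thread:

```shell
# CI/CD pipeline: build and push an explicitly tagged image.
docker build -t registry.example.com/myapp/server:1.0.0 ./server
docker push registry.example.com/myapp/server:1.0.0

# Deployment host: pull explicitly so a mutable tag (e.g. "latest")
# is refreshed from the registry, then deploy via compose.
docker-compose pull
docker-compose up -d
```

The compose file on the deployment host would then contain only `image:` references to the pushed tags, with no `build:` sections at all.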

Dockerfile1: since this is a multistage build, the instructions for the “main” stage look about right. ARG, ENV and EXPOSE are cheap directives, thus they can remain at the bottom, even though they are less likely to change. I won’t discuss the build stage as it has no effect on the final image. What is the context of the main stage’s COPY --from=build . ./ operation? What is ‘.’ from the perspective of the main image? Does it really just copy the files you intend to copy and really need? Also, the CMD includes an environment-specific argument… wouldn’t it be better to introduce an environment variable and leverage it in your app (or entrypoint script?) to determine the environment?

Dockerfile2: same questions as for Dockerfile1, except the CMD, which looks fine here - even though no ENV is declared.

Dockerfile1 & Dockerfile2: I know it’s a matter of taste, but don’t you agree that by not adding a VOLUME instruction, the information about which path(s) inside the container are meant to be used with volumes is lost? If a VOLUME instruction is present and no volume is mounted, docker will create an anonymous volume for the folder(s).
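If one wanted to follow that advice, a sketch could look like this (paths taken from the question’s Dockerfiles):

```dockerfile
FROM node:12.18.4-alpine
WORKDIR /src/app
# Documents that this path is meant to be backed by a volume; if none
# is mounted at run time, docker creates an anonymous volume for it.
VOLUME ["/src/app/node_modules"]
```

Note that VOLUME only records intent and triggers anonymous-volume creation; it does not pick a named volume for you.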

Generally: you might want to add image labels to transport details about your image, like the version of the embedded application, the commit id, the build date, and whatever else you find reasonable.
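A sketch of such labels, using the standard org.opencontainers.image.* label keys; the ARG names and values are illustrative, typically injected via --build-arg from a CI pipeline:

```dockerfile
ARG GIT_COMMIT=unknown
ARG BUILD_DATE=unknown
LABEL org.opencontainers.image.version="1.0.0" \
      org.opencontainers.image.revision="${GIT_COMMIT}" \
      org.opencontainers.image.created="${BUILD_DATE}"
```

The labels can later be read back with docker inspect, which is handy for tracing a running container back to a commit.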

Does that make sense?

Hi Metin,

Thank you so much for taking the time to assist me. I really appreciate it. I have solved it; the link below is the same link as above, but I’ve re-included it here for your convenience, and perhaps you’ll have time to give feedback.

I’m still reading and re-reading your feedback!! I am not sure I’ve integrated any of it into my work, but I’ll try. It’s just hard for me to follow easily because I’m still so new; I’ll get there. :slight_smile:


All the best this weekend. If you are in the snow like us, then drive safe.


It’s not much fun to provide feedback on changing configurations.
How about you incorporate everything and keep on optimizing until you hit an impediment :slight_smile:

Also: the way you build the image does not leverage the build cache optimally. Changing parts should be at the bottom. Also, lose your ARG and ENV for the port and use a static port for EXPOSE. The EXPOSE instruction does nothing by itself - it is purely for documentation purposes (and container links, which you don’t use). Whenever an ARG is used, it will lead to a cache miss in your build cache, and new image layers will be created for that instruction and all subsequent instructions.
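Applied to the client Dockerfile from the question, that ordering could look like this (a sketch, not a drop-in replacement):

```dockerfile
FROM node:12.18.4-alpine
WORKDIR /src/app
# Static port: EXPOSE is documentation only, so no ARG/ENV indirection needed.
EXPOSE 3000
# Dependency manifests first: this layer stays cached until package*.json changes.
COPY package*.json ./
RUN npm install
# Source last: it changes most often, so only the layers below are rebuilt.
COPY . ./
CMD ["npm", "start"]
```

With this order, editing application source invalidates only the final COPY layer; npm install is re-run only when the package manifests actually change.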