Container runs, but localhost is not accessible (despite port mapping)

Hello everyone,

While I am quite new to Docker, everything actually works perfectly on my Ubuntu 22.04 Linux machine. I wrote a Dockerfile for a Node.js app and used my Docker Compose file to build the image. The container runs perfectly on my Linux machine, so I pushed the image to my Docker Hub repo.

Now I tried pulling that image on my Windows 11 machine. That works so far: the image gets pulled and I can run a container from it. Node even tells me that the Next.js development server started successfully on localhost:3000, so everything looks exactly like on my Linux machine. But when I open localhost:3000, my browser tells me "Unable to connect".

Why exactly is that the case? Sorry if I left out anything important; as mentioned, I am still quite new to this! 🙂

Hello!

Did you run your Node image with a port-forwarding option?

It would be helpful if you could share the command you used to run the image.

+1 with @limsangwoons. Please post the command you've used. If it's docker run xxxxx, make sure you've used -p 3000:3000 to publish port 3000 of the container to port 3000 on your host.

If you're using Docker Compose, you need to do the same using the ports: entry in your YAML file.
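
It would look like this, for example (my-node-image is just a placeholder for your actual image name):

docker run -p 3000:3000 my-node-image

That -p 3000:3000 flag expresses exactly the same thing as the ports: entry in a Compose file.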

So this is what my Dockerfile looks like:

FROM node
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD npm run dev

And here is my compose.yaml:

version: '3.8'

services:
  frontend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3000:3000
    develop:
      watch:
        - path: ./package.json
          action: rebuild
        - path: ./next.config.js
          action: rebuild
        - path: ./package-lock.json
          action: rebuild
        - path: .
          target: /app
          action: sync

volumes:
  tasked:

For running the container on Windows, I just use docker run -it <imagename>

However: when I run the command with the -p 3000:3000 option, it works. But that's odd, because in my compose file I already specified the port mapping! Why can't I run it without that option? Shouldn't these settings be applied automatically, or have I misunderstood something here?
Or do I need to write that whole compose file again from scratch once I've pulled the image onto my new machine (which happens to be Windows here)?

Hello

You need to use "docker compose up --detach" here, since you have a YAML file.
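
That command has to be run in the directory that contains the Compose file, roughly like this (the project folder name is just an example):

cd my-project
docker compose up --detach

Compose then reads the ports: mapping from the file and publishes port 3000 for you.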

Also make sure that the firewall allows incoming traffic on port 3000.

But I do not have the YAML file on my other machine when I pull the image. The YAML file was used by Docker Compose to create the image; I then put that image on Docker Hub, and on my other machine I pulled it from there and ran a container with it. It only works with -p 3000:3000, but I want it to work automatically. How do I "transport" the YAML file to my new machine as well?

So it listens on localhost inside the container. That shouldn't work anywhere, although I do remember that somewhere, somehow, accessing localhost on the host worked while the process listened on localhost inside the container; later I couldn't reproduce it. On Windows you most likely have Docker Desktop. On Linux, you could have Docker Desktop or Docker CE, but the point is that localhost in the container is not localhost on the host. Port forwarding forwards ports to the container IP, and the process will not listen on that IP if it only listens on localhost.
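
If it is the standard Next.js dev server, you can tell it to bind to all interfaces instead of localhost. A minimal sketch, assuming the usual package.json layout (adjust to your project):

"scripts": {
  "dev": "next dev -H 0.0.0.0 -p 3000"
}

With that, the server also listens on the container's IP, so the published port can actually reach it.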

I actually can't remember any case with Node.js where this kind of connection issue was not caused by a wrong Node.js config. Apparently it is quite common.

By the way, you are using the latest node image. Who knows what version that is on one machine and what it will be on another. Always use specific versions! Different Node.js versions could behave differently.
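
For example, pin a version in the Dockerfile (node:20 is just an example; pick the version your app actually targets):

FROM node:20

or an even narrower tag such as node:20-alpine.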

Oh I see, I think I just spotted my mistake: I thought that when you build an image through Docker Compose and a Dockerfile, these configurations get shipped with the image. But apparently that's not true; it's basically just the image itself. So if I want these settings to apply, I either need to write the Docker Compose file again manually on my new machine, or use the port-mapping option, right?

The Docker image is what you describe in the Dockerfile. And you're right, the YAML file isn't shipped with the image. Everyone has to create their own on their disk.

But if you just have one image (one service) in your YAML file, that file isn't actually required. Everything can be done on the command line using flags, e.g. -v to mount a volume, as sketched below.
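
As a sketch, a single service's port mapping and bind mount could be written like this on the command line (image name and path are placeholders):

docker run -p 3000:3000 -v /path/to/project:/app my-node-image

Here -p replaces the ports: entry and -v replaces a volume entry from the YAML file.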

YAML files are mostly there (in my use case) to configure multiple services, e.g. when you need an application like PHP plus a web server plus a database and so on.

If it's just one image, the YAML file isn't required (but it is much more readable).

It would be unpleasant to pull an image from Docker Hub, start a container, and see that it mounted one of your system folders or published a port to the internet just because the person who built the image set that up, wouldn't it? 🙂 An image has nothing to do with your environment. It is entirely under your control.
