Bad Gateway - docker0 state DOWN

Ok I have a project that I’m currently running in 2 places as I learn how to use this system. The project is currently live on the internet and I’m taking it over. So I’ve got the project running on a desktop server and on Docker Desktop for Windows as I go through the Docker learning curve.

I had never used Docker until a month ago. After 2 weeks of runaround, trying to get a former employee to remember how the project works and all of its dependencies, and then finding out that the code left behind was in a non-working state, I have finally gotten the project to build and all of the associated images in the docker compose to run without exiting/erroring out.

A brief synopsis: the project is a TypeScript website compiled to JavaScript via npm and running on Node.js. It includes a front-end and a back-end project. I then have nginx as my server, MongoDB for storage/data, and Redis, all running. When all is said and done I have 4 containers running: proxy (nginx), redis (redis:alpine), db (mongo), web (js).

As I said, I have it running on an Ubuntu 22.04 server and on Docker Desktop for Windows. In both cases I’m getting 502 Bad Gateway. If I look at the logs of proxy (nginx) I see the following when trying to connect:

2023/08/01 20:10:37 [error] 7#7: *10 connect() failed (111: Connection refused) while connecting to upstream, client: 172.22.0.1, server: 192.168.67.63, request: "GET /favicon.ico HTTP/1.1", upstream: "http://172.22.0.4:3005/favicon.ico", host: "192.168.67.63", referrer: "https://192.168.67.63/"

In addition to that, my docker0 interface says it is DOWN even when my docker containers/images are running:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:f5:98:4d brd ff:ff:ff:ff:ff:ff
    inet 192.168.67.63/20 brd 192.168.79.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::215:5dff:fef5:984d/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:4d:1f:e4:97 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

Based on my understanding of the 502 error message from nginx, it seems the link between eth0 and docker0 is broken, or possibly that no traffic is passing between the two because docker0 is “DOWN”.

Is this a fair interpretation? If so, how can I fix this? If not, what other ideas are out there?

Thanks in advance.

Hi!

Sorry, but I don’t know how to solve your problem! I’m just surprised at how similar my problem is to yours. Not just the error message, but also the numerous containers, the developer being out of the picture, and the source code being in a non-working state.

I’m not sure, but perhaps the discussion about my problem over here might be of use to you? It isn’t the same problem, but you seem to know more about web development than I do – maybe you can sift through it and find something of use?

Good luck to both of us! I’m going to get some rest before I try to think about it again…

Ray

I had the same issue when using Docker on Ubuntu and Docker Desktop on Win10 simultaneously.

Uninstall everything, restart the PC, and install Docker only on Ubuntu or only in WSL. Docker Desktop overrides the setup, or sets up a VLAN with the same MAC address, which causes the error.

That’s good to know. This could be the issue with my Windows setup; however, I have a standalone Dell server which is producing the exact same issue as Docker Desktop.

This was actually very helpful. After posting this, I found a post on another website here that was very similar to the issue you were having. I made a change to my proxy_pass, which was configured as:

proxy_pass web:3005;

“web” is the name of the container for the app. I changed the config to:

proxy_pass http://192.168.67.63:3005;

and now I’m still getting 502 Bad Gateway, but for a different reason than when I opened this thread:

2023/08/02 14:50:24 [error] 7#7: *4 upstream prematurely closed connection while reading response header from upstream, client: 172.23.0.1, server: 192.168.67.63, request: "GET / HTTP/1.1", upstream: "http://192.168.67.63:3005/", host: "192.168.67.63"
172.23.0.1 - - [02/Aug/2023:14:50:24 +0000] "GET / HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36" "-"
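As an aside, I suspect my original directive failed simply because it was missing the scheme; as far as I can tell, nginx requires one in proxy_pass even when the upstream is a container name. If proxy and web really do share the same Compose network, a sketch like this should let me keep the service name (hypothetical, not my exact config):

# assumes proxy and web are both attached to the same Compose network
location / {
    proxy_pass http://web:3005;   # service name resolved by Docker's embedded DNS
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}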

This still seems very related to yours. I will also state that the docker compose I was given does create a network called “appnet”. That was mentioned in your thread so I figured I would mention it. I do have one question about the network: docker compose says it’s created but I don’t see it running in the list of containers. Is there a way to make sure that the network is running, or possibly check its logs too?

Docker compose gives me:

[+] Running 5/5
 ✔ Network appnet  Created                                                                                       0.1s
 ✔ Container redis   Started                                                                                       1.1s
 ✔ Container web     Started                                                                                       1.7s
 ✔ Container db      Started                                                                                       1.4s
 ✔ Container proxy   Started                                                                                       1.8s

and docker ps gives me:

CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS          PORTS                                      NAMES
9b9315d1743a   nginx          "nginx -g 'daemon of…"   About a minute ago   Up 58 seconds   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   proxy
ce6b27ade16a   ase:drive      "node /var/www/drive…"   About a minute ago   Up 58 seconds   0.0.0.0:3005->3005/tcp                     web
bc330b5dcdba   mongo          "docker-entrypoint.s…"   About a minute ago   Up 59 seconds   27017/tcp                                  db
8b09e662f1e9   redis:alpine   "docker-entrypoint.s…"   About a minute ago   Up 59 seconds   6379/tcp                                   redis

So I don’t see the network after it’s created.

I’m going to look more into the network settings, since this does seem to be an internal network issue where the containers have a hard time finding each other or passing data to each other.

Thanks for the link and the insights from your thread. Hopefully we can figure this out together.

Update:

I did find that `docker network inspect` will let me inspect the “appnet” network created by docker compose.
The results of my `docker network inspect appnet` are as follows:

[
    {
        "Name": "appnet",
        "Id": "a3367a7c5e4a842405c8c3773afaf7b8331ec86c33bda1d9bd507091587793e2",
        "Created": "2023-08-02T15:02:41.977782857Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "8b09e662f1e9404514542051e6a19f58ca09d5e207512c814e88c430f10c915e": {
                "Name": "redis",
                "EndpointID": "6b2bd556a962368976c4f567ec2d330e4eaae85826fc1607d6275f6122c57fd8",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "9b9315d1743ae3bef496463945ef5023d264feb1f9a78726751c7a449ff3fe85": {
                "Name": "proxy",
                "EndpointID": "f89d106203bc900e34450f9a208135c65b7806486fa8f69dccf68340f493c5bc",
                "MacAddress": "02:42:ac:12:00:04",
                "IPv4Address": "172.18.0.4/16",
                "IPv6Address": ""
            },
            "bc330b5dcdba04acf39cc61ba3df82aacec7ca15b6fa0fe6bb17f3d2dbbd3c36": {
                "Name": "db",
                "EndpointID": "2199ee7e1b55167311759ed1fa36d43f4492fc925a49b372488f199699c293b6",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "ce6b27ade16a71b4f5065ccc51b9c9aef1e697c0bfc8e422b93fb1cdffa05a84": {
                "Name": "web",
                "EndpointID": "5444cbbade531c9083bfc6a24ea4fd40fe3aca929bdc47c941d6f1c963f9f986",
                "MacAddress": "02:42:ac:12:00:05",
                "IPv4Address": "172.18.0.5/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "appnet",
            "com.docker.compose.project": "main",
            "com.docker.compose.version": "2.19.1"
        }
    }
]

I can see that all my containers are listed on the docker0 subnet, but I don’t see anything about the local eth0 subnet being bridged to docker0. Also, if this is the case and “appnet” is running on docker0, then why is the state of docker0 “DOWN” when I run `ip a`? I have tried to force docker0 to be “UP” but it stays down.

Just for additional information, this is the output of docker network ls while my app is running:

NETWORK ID     NAME       DRIVER    SCOPE
0563841cff08   bridge     bridge    local
a3367a7c5e4a   appnet     bridge    local
fb6be38a9653   host       host      local
ab1edea0a3c5   none       null      local

Ok I believe I’ve made somewhat of a discovery that could explain what I’m seeing. I have 2 elements defined in my docker-compose.yml like this:

    entrypoint: ["node"]
    command: ["/var/www/drive/node_modules/.bin/nodemon", "app.js"]

That is how it came from the git repo that I was given. However, the config running on the live site looks nothing like what I have:

    entrypoint: ["/usr/local/bin/npx", "nodemon", "./bin/server/app.js" ]

Based on my research of these elements, I understand that they are probably the root of my problem because, as you can see, none of those files exist in my installation:

ls /usr/local/bin/npx
ls: cannot access '/usr/local/bin/npx': No such file or directory

ls /var/www/
html

In addition, “node” being the entrypoint seems a little lacking in parameters, and the command to just run “app.js” seems generic, since there could be any number of files named “app.js” and the “app.js” for my project is buried in ${root folder}/web/bin/server/.

I’m convinced these have something to do with my issues, and I’ve tried editing the elements to point to the directories that make more sense for where I have things installed, but I’m still not able to fix it.

Can anyone give me some pointers on what an entrypoint and command would look like for an app running with node?

docker0 is the default Docker bridge and is used only when you run a container using docker run and don’t override the network to use a user-defined network. If no containers are using the default network, docker0 will be down. That’s normal.
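You can see this for yourself (a quick sketch; any small image you already have will do):

# nothing attached to the default bridge -> docker0 reports NO-CARRIER / state DOWN
ip a show docker0

# start a throwaway container on the default bridge...
docker run --rm -d --name bridge-test nginx

# ...and docker0 comes up
ip a show docker0
docker stop bridge-test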

Where is the “Bad gateway” error? I only see “prematurely closed connection”.

A network is not a container. Why would you expect it to be in the list of containers? But you already found the right command, so that’s solved.

That is clearly not docker0, but a user-defined network. If you look at the output of the ip command that listed your interfaces, you can find the IP of docker0, which is in a different subnet. You inspected “appnet”, which has nothing to do with the default Docker bridge.

I wrote about Docker networking here (sorry if I already recommended it in another topic to you): Docker network and network namespaces in practice - DEV Community

I don’t understand the code snippets about the entrypoint. I mean, which code was quoted from what file? It would help a lot if you could share a compose file and maybe a Dockerfile if you build your image. Of course, remove the secret data if there is any.

A part of the topic title is “docker0 state DOWN”, which is completely normal in your case. “Bad gateway” isn’t, but unless you can share more details about what your config looks like and what image you are using, all we can do is guess. You are right that “no such file or directory” indicates the file is not in the container, and if the entrypoint refers to that file, that can make the container fail so that Nginx can’t reach it. But if the container is not even running, then you don’t have to debug the network. Just find out why the entrypoint or command is wrong.
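For a Node app started through nodemon, a minimal sketch could look like the snippet below. This is only a guess based on your paths: it assumes the code is mounted at /var/www/app inside the container and that nodemon is installed in the project’s node_modules.

    working_dir: /var/www/app
    entrypoint: ["node"]
    command: ["node_modules/.bin/nodemon", "bin/server/app.js"]

With working_dir set, the relative paths resolve inside the mounted project, so the command doesn’t depend on where the image put things.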

1 Like

This is good to know

You can see the 502 Bad Gateway error is returned because of that premature closing of the connection: the access log line "GET / HTTP/1.1" 502 559 is part of the same request.

My bad. I misread it and thought they were both on the same 172.17.x.x subnet. I see what you mean by it not being docker0. That makes more sense to me now.

Because when I type docker compose up -d it tells me that the network is created. I don’t necessarily think I should see it in a list of running containers, but I don’t think it should be hidden from me if it’s built by the same command as the containers. Perhaps it’s just my own taste, but if I’m going to build it all together I want to see it all together.

As you said, it’s solved, even if I don’t agree with the method.

The entrypoint snippets came from my docker-compose.yml. I will work on getting the docker compose file posted.

The container is running. Or at least Docker reports that it’s running. I thought up until today that it was a network issue because of the 502 Bad Gateway. However, I’ve switched gears a little as I’ve been investigating, due to this entrypoint which doesn’t seem to make any sense. I have a suspicion that my container is running but not actually loading my app, which would result in no header being sent, the connection just closing, and nginx reporting a bad gateway.

I have attempted to fix this entrypoint and command in the compose file, but I’m just taking a stab, because the 2 versions I have of the docker-compose are so different from each other and I’m not even sure what they are supposed to be accomplishing. The documentation doesn’t explain it in a way that I understand.

Here is my docker-compose.yml:

version: "3.9"

services:

  web:
    env_file:
      - ${root_directory}/web/server.env
    ports:
      - "3005:3005"
    container_name: web
    image: app
    build:
      context: ${root_directory}/web/
    volumes:
      - type: bind
        source: ${root_directory}
        target: /app
      - type: bind
        source: ${root_directory}/web/bin
        target: /var/www/app/bin
      - type: bind
        source: ${root_directory}/web/public
        target: /var/www/app/public
    networks:
      - appnet
    entrypoint: ["node"]
    command: ["/var/www/app/node_modules/.bin/nodemon", "app.js"]

  proxy:
    image: nginx
    container_name: proxy
    volumes:
      - type: bind
        source: ${root_directory}/certs
        target: /etc/nginx/cert
      - type: bind
        source: ${root_directory}/nginx/nginx.conf
        target: /etc/nginx/conf.d/app.conf
      - type: bind
        source: ${root_directory}/nginx/conf.d/app.conf
        target: /etc/nginx/nginx.conf
    ports:
      - "443:443"
      - "80:80"
    entrypoint: ["nginx", "-g", "daemon off;"]
    networks:
      - appnet

  db:
    container_name: db
    networks:
      - appnet
    image: mongo
    env_file:
      - mongo/mongo.env
    volumes:
      - type: bind
        source: ${root_directory}/mongo/mongodbhome
        target: /home/mongodb
      - type: bind
        source: ${root_directory}/mongo/data
        target: /data/db
      - type: bind
        source: ${root_directory}/mongo/init/init.js
        target: /docker-entrypoint-initdb.d/init.js
        read_only: true
    command: ["mongod", "--auth"]

  cache:
    container_name: redis
    image: redis:alpine
    networks:
      - appnet

networks:
  appnet:
    name: appnet
    driver: bridge

My questions about the entrypoint apply to the web service.

To further help, here is my file structure:

project-directory
    |-->web
    |     |-->bin
    |           |-->server
    |                 |-->app.js
    |-->mongo
    |-->nginx
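For completeness: ${root_directory} in the compose file has to be supplied somehow. Compose substitutes variables from the shell environment or from a .env file sitting next to docker-compose.yml, so something along these lines is assumed (the path here is just a placeholder, not my real one):

# .env next to docker-compose.yml
root_directory=/home/me/project-directory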

Hi! So, as I just posted in the other thread, I solved my problem. In the end, I had to use exec (i.e., docker exec -it mycontainer sh; see https://docs.docker.com/engine/reference/commandline/exec/) and look around the files. Removing Nginx, other containers, etc. from the equation, curl from within the container to port 8080 failed. So, indeed, the app was broken.
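In other words, the check was roughly this (container name and port are from my setup, and it assumes curl or wget exists in the image):

# open a shell inside the app container
docker exec -it mycontainer sh

# from inside, probe the app directly, bypassing nginx entirely
curl -v http://localhost:8080/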

It was probably no longer a Docker problem but an app problem. Alas, I had similar problems to yours. Not just no documentation, but several versions of configuration files. Having source code in a Git repository is great, but it doesn’t help if it was checked out in several locations and nobody knew which checkout was the correct one.

But the copy in the Docker container was at least the one that had actually been running (it was running 2 weeks ago). So, I copied it out and rebuilt it on a VM. It took 2 days, but it worked in the end…

I understand your pain… when it’s not an Nginx problem, not a Docker problem, … just the problem of looking after something undocumented and poorly organised when viewed through someone else’s eyes.

If you want to do what I did, then look at the Dockerfile (if you have one; perhaps your app starts another way). Our application was using Ruby on Rails, but its Dockerfile sets up the environment and runs the command that starts it all. Indeed, if you isolate the application from everything else, a simple curl should get a response…

I wish you luck!

Ray

1 Like

Indeed. I didn’t notice it, thanks.

When you create a script to start a service and all its dependencies, you don’t necessarily implement the stop feature or inspection in the script, because that’s the easy part. Docker Compose just lets you define what Docker has to create, and it also removes everything when you run docker compose down. I agree, it would be nice to have a way to list everything that was created by Compose, but that just means there are some subcommands which are not implemented yet. Each subcommand does one thing only. It can change in the future, and we can suggest new changes on the roadmap: Issues · docker/roadmap · GitHub
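That said, since Compose labels everything it creates with the project name, you can already filter for it. A sketch, using the project name “main” from the inspect output earlier in the thread:

# networks created by this compose project
docker network ls --filter label=com.docker.compose.project=main

# the same filter works for containers and volumes
docker ps --filter label=com.docker.compose.project=main
docker volume ls --filter label=com.docker.compose.project=main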

But you had two different entrypoints, and in your shared compose file there is only one. Your compose file seems right, so your suspicion that the issue is caused by the application could be true.

Regarding nginx, have you considered using nginx-proxy instead of manually configuring Nginx? nginx-proxy would automatically detect your containers and configure itself.

https://hub.docker.com/r/nginxproxy/nginx-proxy
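A rough sketch of what that could look like in a compose file (the image tag is real; the hostname and the rest are placeholders, not a tested config):

  proxy:
    image: nginxproxy/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # nginx-proxy watches Docker events here

  web:
    image: app
    environment:
      - VIRTUAL_HOST=app.example.local   # hostname nginx-proxy should route to this container
      - VIRTUAL_PORT=3005                # port the app listens on inside the container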

Yes, that is the confusion that I had. One of them was from the live and working website and the other was from the GitHub repository. But the one from the live and working website didn’t work on my machine either. Anyway, I have actually solved my issue and will be posting the solution soon.

This was entirely my problem as well.

In my case, I thought the app was not loading and the container was not actually running the correct app. However, once I got down into the code, “app.js” was internally set to listen on port 3005:

const { PORT } = APP_CONFIG
// ... other code ...
http.createServer(app).listen(PORT)

PORT was defined by importing from another file:

const {
    BASE_DIR,
    PATH,
    PORT
} = process.env;

export const APP_CONFIG = { BASE_DIR, PATH, PORT};

It seems these environment variables are not being set. If I hardcode PORT to 3005 then the app loads (albeit with other errors related to environment variables), but it loads and runs, so Docker is not the issue; the code is. I was looking at Docker so closely that I was blinded to the app potentially being the issue. Of course, if I had proper documentation on the app, this would not have been a problem.
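For anyone following along, the cleaner fix is probably to supply the missing variables through the env_file the web service already references, rather than hardcoding. A sketch with placeholder values (PATH normally comes from the container’s own environment, so only the app-specific ones need setting):

# web/server.env – hypothetical values
PORT=3005
BASE_DIR=/var/www/app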

I have not dug into the code completely yet, as this discovery happened literally at the end of the day on Friday, but I believe I will just need to debug the code and find the environment variables needed to run the app.

Thanks for everyone’s help.

2 Likes

It makes me wonder how the code [i.e., mine] ran in the first place. But now that it is running and we are looking for a vendor to redo it from scratch, I guess I won’t go into it as I have other priorities.

But you said it well – I/we kept thinking it was an issue with Docker. Having said that, the discussion in both threads helped me troubleshoot the Docker container and isolate the problem to the app.

Thank you for sharing your problem! Good not to be alone with such an overwhelming problem to solve!

Ray

1 Like