Docker Community Forums


Calling another container results in Connection refused

Hey,

so I have the following setup:

version: "3.8"

services:
  # VUE-JS Instance
  client:
    build: client
    restart: always
    logging:
      driver: none
    ports:
      - 80:8080
    volumes:
      - ./client:/app
      - /app/node_modules
    environment:
      - CHOKIDAR_USEPOLLING=true
      - NODE_ENV=development

  # SERVER
  php:
    build: php-fpm
    restart: always
    ports:
      - "9002:9000"
    volumes:
      - ./server:/var/www/:cached
      - ./logs/symfony:/var/www/var/logs:cached

  # WEBSERVER
  nginx:
    build: nginx
    restart: always
    ports:
      - 8080:80
    volumes_from:
      - php
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ./logs/nginx/:/var/log/nginx:cached
    expose:
      - "8080"

My backend is a simple Symfony REST API, my frontend is a Vue application. When I try to call the API from my Vue application with localhost:8080 or 127.0.0.1:8080, I get a “Connection refused” error. When I try to call the API via my server’s public IP, I run into CORS issues.

Because I make those calls in an executed JS file via axios, I’m not able to use the Docker container names. Does anyone have an idea?

I assume you’re trying to use Nginx to serve both the Vue.js application and the Symfony API on port 8080, to avoid all CORS issues? We’ll need to see your Nginx configuration then.

I’d actually expect the Nginx container to run on standard ports, that is: 80 and 443. Apparently you’ve got some server in your Vue.js container running on port 8080, for which you can then remove the ports and add expose: 8080 to make that internal port available to other containers (but not to other non-Docker clients on the host). Same goes for Symfony, replace the ports with expose: 9000. Next, make Nginx use ports: 80:80 to allow the browser to connect to port 80, and make Nginx proxy some /api/* to http://php:9000/* and proxy everything else to http://client:8080/*.
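In compose terms, that would look roughly like the following sketch (service and build names taken from your file; only the relevant keys shown):

```yaml
version: "3.8"

services:
  client:
    build: client
    expose:
      - "8080"   # internal only; other containers reach it as http://client:8080

  php:
    build: php-fpm
    expose:
      - "9000"   # internal only; Nginx reaches it as http://php:9000

  nginx:
    build: nginx
    ports:
      - "80:80"  # the only port published to the host
```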

Or combine Vue.js and Nginx into a single image, which only proxies some /api/* to http://php:9000/* (which still needs expose: 9000), and use Nginx itself to serve the static Vue.js resources; maybe Substitute environment variables in NGINX config from docker-compose - Stack Overflow can help to create such image.
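A multi-stage build for such a combined image could look roughly like this sketch; it assumes a standard Vue CLI project where `npm run build` emits static files into `dist/`, which may differ in your setup:

```dockerfile
# Stage 1: build the static Vue.js assets
FROM node:lts AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve them with Nginx, which also proxies /api/ to the PHP container
FROM nginx:stable
COPY --from=build /app/dist /usr/share/nginx/html
COPY default.conf /etc/nginx/conf.d/default.conf
```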

First, thank you for your detailed reply. This already helped me a lot in terms of understanding. I adjusted the docker-compose.yml as you suggested: I exposed the ports and removed the mappings. My default.conf looks like this:

server {
    listen       80;
    server_name  localhost;
    root /var/www/public;

    # location / {
    #     try_files $uri @rewriteapp;
    # }

    # location @rewriteapp {
    #     rewrite ^(.*)$ /index.php/$1 last;
    # }

    location /api/* {
        proxy_pass http://php:9000/*;
    }

    location / {
        proxy_pass http://client:8080/*;
    }

    # location ~ ^/index\.php(/|$) {
    #     fastcgi_pass php:9000;
    #     fastcgi_split_path_info ^(.+\.php)(/.*)$;
    #     include fastcgi_params;
    #     fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    #     fastcgi_param HTTPS off;
    #     fastcgi_buffers 16 16k;
    #     fastcgi_buffer_size 32k;
    # }

    error_log /var/log/nginx/symfony_error.log;
    access_log /var/log/nginx/symfony_access.log;
}

I left the previous content of the default.conf commented out, so that you can get an idea of what it looked like before I made the changes.

Sadly, right now, I can’t reach any of both containers. Docker doesn’t even recognize any of the requests, no matter which port or url I use.

Feels like I’m missing something…

Sorry, this was not intended to be Nginx syntax. Instead, use something like:

location /api/ {
    proxy_pass http://php:9000/whatever/is/the/Symfony/root/;
}

location / {
    proxy_pass http://client:8080/;
}

Also, Nginx is probably logging errors in its log files.

When you replaced the ports of the first two containers with expose, then those indeed will only be accessible through the Nginx container, which needs ports: 80:80 (using proper YAML syntax).
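That is, quoted to avoid YAML parsing surprises:

```yaml
nginx:
  ports:
    - "80:80"
```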

Alright, so now I’m getting somewhere. At least I get some responses. Just to clear up any misunderstandings:

My current docker-compose.yaml

# VUE-JS Instance
client:
  build: client
  restart: always
  logging:
    driver: none
  volumes:
    - ./client:/app
    - /app/node_modules
  environment:
    - CHOKIDAR_USEPOLLING=true
    - NODE_ENV=development
  expose:
    - '8080'

# SERVER
php:
  build: php-fpm
  restart: always
  volumes:
    - ./server:/var/www/:cached
    - ./logs/symfony:/var/www/var/logs:cached
  expose:
    - '9000'

# WEBSERVER
nginx:
  build: nginx
  restart: always
  ports:
    - 80:80
  volumes_from:
    - php
  volumes:
    - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    - ./logs/nginx/:/var/log/nginx:cached
  depends_on:
    - client

Right now, when I try the / route, I get an Invalid Host header and on the /api/ route I get a 502 Bad Gateway. The logs don’t offer any further information.

I think the Invalid Host header is not affected by server_name in Nginx, so I’d guess that is thrown by whatever server is in the Vue.js container, which may have been configured to, say, only allow the domain localhost, or 127.0.0.1? You could still define ports for that container to see if you get the same error when accessing it directly, without Nginx as an intermediate proxy server?

For the 502 Bad Gateway, I’d assume that is thrown by Nginx itself. So, something is off in your proxy_pass for /api/, I’d guess.

Also, make sure to peek in the logs of the Vue.js and Symfony servers.

…and, Nginx will be making requests for the host name client rather than, say, localhost. So you’ll need to relax the configuration of whatever server is running for Vue.js, or make Nginx set the HTTP Host header to something that this other server knows about (like localhost).

As I said, I’m getting somewhere. Right now, I get some error messages to work with.

So far, the following configuration for my frontend works fine. It is reachable via the / route.

location / {
  proxy_pass http://client:8080/;
  proxy_set_header Host localhost;
}

but via the /api/ route, I’m getting the following error:

2021/05/02 15:03:47 [error] 31#31: *1 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: ** MY OWN IP **, server: localhost, request: "GET /api/ HTTP/1.1", upstream: "http://192.168.144.2:9000/", host: "** MY SERVERS IP**"

When not using Nginx, what would be the URL structure for the Symfony API?

If it’s http://localhost:9000/some/path then the following should work when called using http://localhost/api/some/path through Nginx:

location /api/ {
    proxy_pass http://php:9000/;
}

If it’s http://localhost:9000/api/some/path then make sure to repeat the /api/ part in the target:

location /api/ {
    proxy_pass http://php:9000/api/;
}

For even more complex mappings, you can use ~ to define a regular expression, along with (?<group-name> ...) to capture part of the match for later use:

location ~ /api/(?<target>.+) {
  # Proxy requests for:
  #   /api/some/path/a/b/c
  # ...to:
  #   http://php:9000/something/else/some/path/a/b/c
  proxy_pass http://php:9000/something/else/$target;
}

Also, make sure to test with a full URL that actually works without Nginx. It’s not likely that just calling http://localhost/api/ without any further details in the REST path should return any results? You’ll likely need some http://localhost/api/some/path/a/b/c instead. (Assuming the Symfony REST API even supports HTTP GET.)

I have something like a “debug” route, which is api/user/login. This route just returns a JSON object. Before we made those changes, I was able to reach it without any prefix. I’ve already tried it with /api/api/user/login, but it results in the same error.

I also tried to simply insert my old configuration for the /api/ route, which results in a File not found. But at this point I have no idea which path the php container is listening to, to fix this.

As the PHP server seems to be getting the request and then closing the connection without any error, maybe it’s also expecting a different HTTP Host header?

Sorry, my bad, I even got an error from the log-file:

29#29: *5 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream

I’d temporarily restore the ports mapping for the PHP container, to make that container accessible from your browser (but also leave the expose in place for Nginx), to ensure that the following works: http://localhost:9002/api/user/login.

If that works, then I feel that the Nginx route using the default port 80 should work too, http://localhost/api/user/login, when using:

location /api/ {
  proxy_pass http://php:9000/api/;
  proxy_set_header Host localhost;
}

For the sake of completeness, maybe post your current docker-compose.yaml and Nginx config when the above does not work?

(And aside, once this works, you’ll need some more configuration if you’re also using WebSockets.)
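For WebSockets (for example, the Vue dev server’s hot-reload connection), one would typically add the upgrade headers to the proxying location; a sketch, the exact location depends on your setup:

```nginx
location / {
    proxy_pass http://client:8080/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```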

But, wait: were you initially invoking the Symfony API through Nginx already? So, not using port 9002 to access that PHP container directly?

Also, why do you need volumes_from: php?

Yeah, originally I was invoking the php container via nginx. So every request sent to the nginx container was forwarded to the php container on port 9000. That’s why the php container has a volume with the Symfony folder.

Just to clarify the project structure:

client/
├─ Dockerfile
logs/
nginx/
├─ Dockerfile
php-fpm/
├─ Dockerfile
server/
├─ <symfony-application>

My current docker-compose.yaml

# VUE-JS Instance
client:
  build: client
  restart: always
  logging:
    driver: none
  volumes:
    - ./client:/app
    - /app/node_modules
  environment:
    - CHOKIDAR_USEPOLLING=true
    - NODE_ENV=development
  expose:
    - '8080'

# SERVER
php:
  build: php-fpm
  restart: always
  ports:
    - 9002:9000
  volumes:
    - ./server:/var/www/:cached
    - ./logs/symfony:/var/www/var/logs:cached
  expose:
    - '9000'
  
# WEBSERVER
nginx:
  build: nginx
  restart: always
  ports:
    - 80:80
  volumes_from:
    - php
  volumes:
    - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    - ./logs/nginx/:/var/log/nginx:cached
  depends_on:
    - client

And my default.conf (shortened)

server {
    listen       80;
    server_name  localhost;
    root /var/www/public;

    location /api/ {
        proxy_pass http://php:9000/;
        proxy_set_header Host localhost;
    }

    location / {
        proxy_pass http://client:8080/;
        proxy_set_header Host localhost;
    }

    error_log /var/log/nginx/symfony_error.log;
    access_log /var/log/nginx/symfony_access.log;
}

http://localhost:9002/api/user/login Where should I call it? As the proxy_pass argument?

And it also did some mapping to index.php. I’d expect/want that mapping to be done in whatever server is running in the PHP container.

You mean the Nginx container, right? PHP is not my strong suit, but seeing fastcgi_pass php:9000 I’d very much expect this to be using the network, hence Nginx would not need access to the PHP files, hence would not need some volume mapping?

No, just in your browser. Given the ports: 9002:9000 mapping, the PHP container can be accessed by non-Docker clients. But given your earlier @rewriteapp and fastcgi_param in Nginx, I wonder if that PHP container can be used standalone…

I think your earlier Nginx configuration would map /some/path to index.php/some/path. So, maybe http://localhost:9002/index.php/user/login would work. That’s not nice, of course, but good to know if that indeed works.

If that works, then one could probably use something like the following in Nginx. But I’d try to add that logic to the PHP container instead:

location /api/ {
  proxy_pass http://php:9000/index.php/;
}

Or, when quickly reading Module ngx_http_fastcgi_module and combining that with your earlier config, without ever having used FastCGI myself:

location ~ /api/(?<target>.+) {
    # Proxy requests for:
    #   /api/some/path/a/b/c
    # ...to:
    #   http://php:9000/index.php/some/path/a/b/c
    fastcgi_pass php:9000;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME index.php;
    fastcgi_param PATH_INFO $target;
    fastcgi_param HTTPS off;
    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
}

(Still, I’d rather have the PHP container map the path to index.php.)

Ah, and 4 Docker images, out of which 3 are used in the compose file?

Sadly, I was not able to reach the container via http://localhost:9002/index.php/user/login.

But at least, when I use the config you suggest, the php container recognizes the request:

php_1     | 172.20.0.4 -  02/May/2021:19:43:17 +0200 "GET /api/user/login" 404

But I still get File not found. Still, this seems to be the right direction.

Sorry, my bad, the server folder doesn’t have a Dockerfile.

I wonder If I need to change the root path in my default.conf?

Maybe you need this:

fastcgi_param SCRIPT_FILENAME $document_root/index.php;

Also, maybe include fastcgi_params does not work with fastcgi_param PATH_INFO $target? So maybe remove the first? Really just wild guesses.

Also, I find it a bit weird that this still shows /api in the log, which is not (or should not be) passed on in my example?

(EDIT: removed excessive $ in $document_root$)