Docker Community Forums

Share and learn in the Docker community.

CURL requests between docker containers

Background

I’ve inherited a codebase for a system; for reference, it is a Symfony project.
The eventual plan is to move to a microservices architecture. Each microservice responds only to web requests and uses JSON:API to transfer data.

I’ve separately developed a new microservice on its own; it has been tested and performs well. It is not feasible to take the old system offline, so the approach is to transition service by service, depending on business and infrastructure needs.

There are two separate Docker container sets, one for “the old system” and one for “the microservice” (and its future sibling services). Both have various containers, including php74.
To be clear, I mean there are two separate docker-compose.yaml files, kept deliberately separate, each run with its own docker-compose up.

As I have two web facing container sets, I’ve mapped them as follows:

  • "80:80" “the old system”
  • "8080:80" “the microservice”.

These do not clash on web traffic.
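In docker-compose terms, the two port mappings look roughly like this (the `web` service names are placeholders for whatever each stack actually uses):

```yaml
# docker-compose.yaml for "the old system" (service name assumed)
services:
  web:
    ports:
      - "80:80"     # host port 80 -> container port 80
---
# docker-compose.yaml for "the microservice" (service name assumed)
services:
  web:
    ports:
      - "8080:80"   # host port 8080 -> container port 80
```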

The microservice does plenty, but notably it returns a “hello world, welcome to the API”-style response to an index GET request (let’s call the host microservice.test).

Within “the old system”, I’ve mapped to the new system (I have tried both of the following methods):

    networks:
      default:
        aliases:
          - microservice.test

as well as:

    extra_hosts:
      - "microservice.test:127.0.0.1"
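For completeness, the usual way to let containers from two separate compose files resolve each other directly is a shared external network; a minimal sketch, with the network name `shared-net` and service names assumed:

```yaml
# Created once on the host beforehand:  docker network create shared-net

# In "the microservice" docker-compose.yaml:
services:
  web:
    networks:
      shared-net:
        aliases:
          - microservice.test   # resolvable by any container on shared-net
networks:
  shared-net:
    external: true
---
# In "the old system" docker-compose.yaml:
services:
  php74:
    networks:
      - shared-net
networks:
  shared-net:
    external: true
```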

As discussed, microservice.test responds on my local machine.

If I log into “the old system” php74 container, I can ping microservice.test and this is successful.

If I run a unit test from “the old system” and use cURL to fetch the index route, I get a null response. Note: if I ping or curl google.com, I do get a response.

The next step is to use the provided microservice client… but the debugging step above already shows the issue.

Effectively, when PHP attempts to access the domain microservice.test, it cannot reach it.

Question

How can I ensure that a docker container can access:

  • a domain hosted on a different docker container set (different docker-compose.yaml file)

I would ideally like the web requests to fully leave one container set and then be routed back in, much like a real microservices infrastructure would work (e.g. services on different servers/hosts). Is this possible?
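One way I can imagine making requests genuinely leave the container set and come back via the host is Docker’s `host-gateway` mapping (available since Docker 20.10); a sketch, assuming the old system’s PHP service then calls the microservice on the host’s published port 8080:

```yaml
# In "the old system" docker-compose.yaml (service name assumed)
services:
  php74:
    extra_hosts:
      # host-gateway resolves to the host machine, not the container's own loopback
      - "microservice.test:host-gateway"
```

The request would then need to target http://microservice.test:8080/, since 8080 is the port the microservice publishes on the host.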

Note:

  • This must use web-only traffic
  • The PHP container can ping the microservice.test domain successfully
  • cURL fails to get a response from the microservice.test domain
  • cURL successfully gets a response from external URLs, such as google.com

Thanks in advance.

I spent the day moving this into “the old system’s” Docker setup just to make some software progress. That also doesn’t work.

I’ve been googling this for a few weeks; it seems to be some kind of Docker limitation or bug. Other people seem to have exactly the same situation (and the same head-scratching responses).

To be clear: this does not seem to be an issue with multiple docker-compose.yaml files.
It seems to be that you can’t route out of and back into Docker.

I’m assuming Docker’s networking is trying to be “clever” and is being too clever for its own good.

Given this extra info, does anyone know how to stop this “clever behaviour”?

Thanks