How to Access Postgres Service from Child Docker Container in Gitlab-CI?

Hi,

Hopefully I am posting this in the correct forum. The issue relates to accessing a docker container from within a child docker-compose process running inside a docker:dind instance. I am using a GitLab CI docker executor build with the following architecture.

The official GitLab CI documentation recommends configuring a postgres instance as a service in .gitlab-ci.yml. CI jobs defined in .gitlab-ci.yml are able to connect to the postgres instance via the service name, ‘postgres’.
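
For context, a minimal version of that recommended setup (job name, image tag and credentials here are illustrative) looks something like this:

test:
  services:
    - postgres:latest
  variables:
    POSTGRES_DB: mydb
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: secret
  script:
    # the database is addressable via the service alias 'postgres',
    # e.g. postgresql://runner:secret@postgres:5432/mydb
    - ./run_tests.sh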

The tusd, minio and listener containers are spawned within a docker-compose process, triggered inside the pytest CI job. The listener container writes information back to the postgres database.

The listener container is unable to communicate with the postgres container using the hostname, ‘postgres’. The hostname is unrecognised. How can the listener container communicate with the postgres database instance?

Do I use the IP address of the postgres container or the shared gitlab-runner? If so, how do I determine the IP address?

Update 11/1/2019
Resolved the issue following the advice of @paulxroot. Moved all services into the docker-compose file so that they can communicate with each other. This includes the postgres container etc…
Tests are now invoked via docker-compose run command.

Now able to successfully run tests using the GitLab shared runner…

Kind regards

dcs3spp

Hello dcs3spp,

In docker two services can see each other if they are in the same network.
I did not test it, but I think your dind container is in the same network as postgres and can connect to it via the service name ‘postgres’. The docker containers started on top of the dind container are isolated, because docker-compose creates its own default network for them.

If that’s the case, then one option would be to give the docker containers running on your dind container the ‘host’ network. See: https://docs.docker.com/network/host/
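
For illustration, a compose service could be attached to the host network like this (the service name is just a placeholder):

services:
  listener:
    # shares the network stack of the machine running the docker daemon,
    # bypassing the isolated compose default network
    network_mode: "host"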

Another option would be to not use a dind container at all, but to use a gitlab shell runner and create the docker-compose setup with listener, minio and tusd directly on the real host, without your dind layer. Then you would not specify the postgres service in the gitlab-ci.yml, but instead make sure in the compose file that the postgres container, the listener etc. are in the same network so they can connect. Of course, if you need a fresh and unique instance of postgres for each pipeline run, you would have to make sure it is created together with the listener etc., and therefore you would probably want to include the postgres service in your compose file.
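
A rough sketch of that second option (image names and credentials are placeholders):

version: "3.7"
services:
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: runner
      POSTGRES_PASSWORD: secret
  listener:
    image: my-listener:latest
    # no networks: section needed; compose attaches every service to the
    # same default network, so 'postgres' resolves by service name
    depends_on:
      - postgres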

Hi paulxroot,

Thanks for responding with the two solutions. Much appreciated.

I am following the docker-in-docker executor template from gitlab CI.

The isolated docker-compose network explains why the postgres hostname cannot be resolved from the listener container.

I tried running a docker postgres container as a daemon from within the pytest container, as an additional build step. I used docker inspect to retrieve the IP address of the postgres container, which I then used in the db host URL configuration settings. The result: the database connection timed out…
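
For reference, the inspect command was along these lines (assuming the container is named ‘postgres’):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' postgres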

In the first instance I will try to investigate the first option of using the host network, since it may require the fewest changes.

With option two would this allow the pytest instance to use the postgres service in the compose file?

Kind regards

dcs3spp

Hi,

I don’t know what you mean exactly by ‘the pytest instance’.
All services defined in a compose file can contact each other because they are all part of the network that docker creates by default for each compose project. That means if you add a service postgres to your compose file (just like your other services ‘listener’, ‘minio’ and ‘tusd’), all other services of the compose instance can communicate with the postgres service. This default network is only created if you don’t specify a custom one inside the compose file.
Of course that option would mean the postgres container is also running on your dind container and you don’t specify it in your gitlab-ci.yml. In my eyes this is the preferable option if you want a unique instance of postgres for each pipeline run.
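
For example, any service in the compose file could then reach the database with a connection URL like this (credentials are placeholders):

# 'postgres' is resolved by service name on the compose default network
DATABASE_URL=postgresql://runner:secret@postgres:5432/mydb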

Hi,

Apologies, I will briefly explain the context. The pytest instance is a CI job that installs a python REST API and runs pytest to perform unit and integration testing. The tests make requests to the REST API instance, which in turn reads from and writes to a postgres database backend. The docker-compose containers provide an upload feature. When an upload is complete, the listener container writes metadata back to the postgres database backend. In summary, the postgres database backend is shared between the REST API and the docker-compose process.

The postgres database instance is a docker image pulled from the project’s gitlab container registry for each pipeline run. The docker image sets up the initial state of the database, e.g. loads lookup table scripts etc.
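
For reference, the image relies on the init-script mechanism of the official postgres image, roughly like this (script names are illustrative):

FROM postgres:latest
# any *.sql or *.sh file placed in this directory is executed by the
# official postgres entrypoint when the database is first initialised
COPY ./sql/01_create_tables.sql /docker-entrypoint-initdb.d/
COPY ./sql/02_load_lookups.sql /docker-entrypoint-initdb.d/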

Kind Regards

dcs3spp

Hi paulxroot,

Since posting, I briefly tried a test job, listed below, together with a docker-compose file. In this approach I have created two containers in the CI job:

  1. postgres: Postgres database instance.
  2. restapi: Starts pytest. Some tests spawn the docker-compose services. Both the docker-compose services and restapi write to the same postgres database.

Here all docker-compose services are on the same network as the postgres and pytest containers.

build:
  stage: build
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_HOST: tcp://docker:2375
    SHARED_PATH: ${CI_PROJECT_DIR}/fileserver
    POSTGRES_DB: ${PG_DB}
    POSTGRES_PASSWORD: ${PG_PASSWORD}
    POSTGRES_USER: ${PG_USER}
  services:
    - docker:dind
  script:
    - docker -H $DOCKER_HOST network create -d bridge localnet
    - docker -H $DOCKER_HOST run -d --env POSTGRES_DB=${POSTGRES_DB} --env POSTGRES_USER=${POSTGRES_USER} --env POSTGRES_PASSWORD=${POSTGRES_PASSWORD}  --name postgres registry.gitlab.com/plantoeducate/api-db:latest
    - docker network connect localnet postgres

    - docker -H $DOCKER_HOST build -t restapi:latest --build-arg ARG_CI_JOB_TOKEN=${CI_JOB_TOKEN} --build-arg ARG_CI_REGISTRY=${CI_REGISTRY} --build-arg PG_USER=${PG_USER} --build-arg PG_PASSWORD=${PG_PASSWORD} --build-arg PG_DB=${PG_DB} --build-arg TOKEN=${TOKEN} .
    - docker -H $DOCKER_HOST run -v /var/run/docker.sock:/var/run/docker.sock --network localnet --privileged=true restapi:latest

With this approach the docker-compose services start, but the restapi:latest container times out when trying to access the service name. I think this is because I am binding the volume /var/run/docker.sock.

If I remove the volume binding, the docker-compose services are unable to start when spawned by the restapi container: no docker daemon can be located on http+docker://localhost… I then tried setting the DOCKER_HOST environment variable to tcp://docker:2375 when building the restapi container. However, it was unable to address the docker:dind service.
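
For reference, a quick sanity check of daemon reachability (assuming the default ‘docker’ alias that the runner gives the dind service) would be something like:

# works from the job container, where the runner injects the 'docker' alias;
# containers on a custom bridge network inside dind do not get that alias
# resolved automatically
docker -H tcp://docker:2375 version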

I have listed below the docker-compose file used for the CI build…

version: "3.7"

networks:
  localnet:
    external: true

services:

  minio:
    env_file:
      - ./config/.minio.test.env
    volumes:
      - ./minio-test:/data
    networks:
      - localnet

  tusd:
    env_file:
      - ./config/.tusd.test.env
    volumes:
      - ./certs:/server/certs
    networks:
      - localnet

  uploaded-listener:
    env_file:
      - ./config/.uploaded-listener.test.env
    networks:
      - localnet
Kind Regards

dcs3spp

Hi,

regarding your main question:

“With this approach the docker-compose services start, but the restapi:latest container times out when trying to access the service name”

You could try referencing the service by its container name and see if that works. You can see it with:
docker ps -a
Or you could set the container name inside your compose file with the container_name option (see the Compose file reference in the Docker docs):
container_name: my-container


Design flaw, that could possibly be part of the problem:
You are using the dind container as a service inside your gitlab-ci.yml. It doesn’t make sense as far as I understand what you are trying to do. If you want to use dind with gitlab ci, then use the gitlab docker executor with docker:dind as its image and stop using the docker -H parameter. Because when you use the docker executor with the dind image, your runner will execute everything under the section ‘script’ on your dind container. See: Docker executor | GitLab

At first I wrote the following, but I think this is already working for you: if you really wanted to use docker -H, you would have to make sure the docker daemon accepts commands on its HTTP API by adding that to /etc/docker/daemon.json on the system where the docker daemon is running. See the solution described in the Docker executor docs: https://docs.gitlab.com/runner/executors/docker.html

As far as I understand, you want to call docker-compose inside your restapi container and start the compose containers on your dind container, not on top of the restapi container but “beside” it. Ok well, if this is the only purpose of your restapi container, then call docker-compose directly on your dind container and don’t use that restapi container at all. You can call docker-compose directly on the dind container by using the gitlab docker executor as described above, or (when using the shell executor like you did) with docker exec <dind-container-name> docker-compose <compose options>.
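
That last variant would look something along these lines (container name and compose file path are placeholders):

# run compose directly on the dind container, without the extra restapi layer
docker exec my-dind-container docker-compose -f /path/to/docker-compose.yml up -d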

Best regards

Hi,

Cheers, really useful and helpful advice :) I am a newbie to all this and am experimenting to understand the technology…

Have now removed all -H switches and am using the docker:dind service.

build:
  stage: test
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_HOST: tcp://docker:2375
    SHARED_PATH: ${CI_PROJECT_DIR}/fileserver
    POSTGRES_DB: ${PG_DB}
    POSTGRES_PASSWORD: ${PG_PASSWORD}
    POSTGRES_USER: ${PG_USER}
  services:
    - docker:dind
  script:
    - echo "Creating bridge network..."
    - docker network create -d bridge localnet

    - echo "Creating postgres container..."
    - docker run -d --network localnet --env POSTGRES_DB=${POSTGRES_DB} --env POSTGRES_USER=${POSTGRES_USER} --env POSTGRES_PASSWORD=${POSTGRES_PASSWORD}  --name postgres registry.gitlab.com/plantoeducate/api-db:latest
    
    - cd app-rest-api

    - echo "Building pytest container..."
    - docker build -t restapi:latest --build-arg ARG_CI_JOB_TOKEN=${CI_JOB_TOKEN} --build-arg ARG_CI_REGISTRY=${CI_REGISTRY} --build-arg PG_USER=${PG_USER} --build-arg PG_PASSWORD=${PG_PASSWORD} --build-arg PG_DB=${PG_DB} --build-arg TOKEN=${TOKEN} --build-arg ARG_DOCKER_HOST=${DOCKER_HOST} .

    - echo "Running pytest container..."
    - docker run --network localnet --privileged=true -v /var/run/docker.sock:/var/run/docker.sock restapi:latest

Yes, you are right :) The docker-compose process is triggered from inside the restapi container. I created the restapi image so that I could specify the network when running the container. The same network (external) is specified in the docker-compose file. I wanted to try out addressing the postgres container in the postgres connection URL configuration of the python app contained within the restapi container. I am not sure if it is possible to specify the network that the CI runner script uses in the gitlab-ci.yml. I have read about the network_mode configuration property for a specific runner.
From what I understand so far, I do not think it is possible to specify a custom network for a shared runner.
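
From what I have read, on a self-managed runner that setting would live in the runner’s config.toml, something like this (values illustrative; not configurable on shared runners):

[[runners]]
  executor = "docker"
  [runners.docker]
    # attach job containers to an existing docker network
    network_mode = "localnet"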

The restapi container installs dependencies for the python source code and starts pytest to run unit and functional tests. Tests read and write to the postgres instance. Each upload functional test spawns a docker-compose process via the lovely-pytest-docker docker_services fixture. When an upload is completed, it writes metadata back to the same postgres database.

Will keep on trying and will test using the container names…

Many thanks for the suggestions and ideas. Appreciated :D

You’re welcome. I’m no expert, but I have built a CI pipeline with GitLab and docker, too.

I don’t think your gitlab-ci.yml is working. If you want to have a dind container and execute everything under ‘script’ on it, then instead of ‘services: - docker:dind’ you need:
image: docker:dind
https://docs.gitlab.com/runner/executors/docker.html#the-image-keyword

As already mentioned, everything under the section ‘script’ will be executed on this image. By the way, it’s important to know that your git repo will be automatically cloned into the container; that’s how you get your compose file into it.

Now you can create containers on top of your dind container. I can see no reason to add the dind container to a specific network like you wanted to.
As I mentioned some posts ago, I would just put everything that has to be connected and used together into the same compose project. That is what docker-compose is made for!

That’s why I would put the postgres container and every other container you need for your CI job into your compose file. You can specify dependencies between the containers with ‘depends_on’. Then call docker-compose.
As already stated, that is possible if you always want a new instance of your containers for each CI-pipeline run.
Why do you think this solution is unsuitable for you?
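
For illustration, using the image name from your job above (the build path is my assumption):

services:
  postgres:
    image: registry.gitlab.com/plantoeducate/api-db:latest
  restapi:
    build: ./app-rest-api
    # depends_on controls the start order, so postgres is up before
    # the tests are run
    depends_on:
      - postgres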

Best regards

Thanks paulxroot. I am not sure that I am correctly understanding the difference between services: docker:dind and image: docker:dind.

My current understanding is based on the docker executor, whereby image: is used to specify the docker image used to spawn a container to run the CI script. If I also specify services: docker:dind and set the DOCKER_HOST variable, then any docker commands in the build script will connect to the daemon in the docker:dind container? I have been using a custom image inheriting from python:3.6.7-alpine that has docker, git etc. installed.

If I went down the route of using image: docker:dind, then any other command-line utilities that I wanted to use in the CI script would similarly have to be installed in an inherited image.
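
i.e. something like this (package list illustrative, assuming the alpine-based docker:dind image):

FROM docker:dind
# docker:dind is alpine-based, so packages are added with apk
RUN apk add --no-cache git python3 py3-pip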

It might take some refactoring to move to the everything-under-compose approach… however, I am beginning to think that this might make it easier to get a CI build working…

Thanks again for your patience and advice. Will try out the various options… :D

Kind regards

Simon

Hi,

Just a quick note of thanks again. I did some refactoring and everything is now running within a docker-compose process. Tests are now invoked via the docker-compose run command.
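
The invocation now boils down to something like this (service name illustrative, pytest options omitted):

# docker-compose run also starts the services listed under depends_on,
# and 'postgres' resolves by service name on the compose default network
docker-compose run --rm restapi pytest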

Now able to successfully run tests using the GitLab shared runner…

Thanks again for your patience and advice. Appreciated :D

Kind regards

dcs3spp