
Trouble Networking 3 containers in docker swarm config

Hi Docker Community,

So I’m networking three containers on the same overlay network and I’m having issues with communication between them. I’m running Docker Swarm and setting up a stack with three containers: a Rasa server, a chatbot (opsdroid) container that talks to it, and an ngrok container, because we need to expose the chatbot to an external service for testing. I’m doing all of this through a Docker Compose file.

I’m new to Docker and Docker Swarm, so I don’t know if I’m wiring the three together properly. I could use some help understanding overlay networking better, and in particular how it works when you need to expose a service on the overlay network to the outside world.

I have included my Docker Compose file below.

version: '3.5'

services:
    rasa:
        image: rasa/rasa:2.8.9-full
        container_name: rasa
        ports:
        - "5005:5005"
        volumes:
        - "./models:/home/ubuntu/datascience/chatops/opsdroid_deployment/models"
        - ./rasa_endpoint.yml:/home/ubuntu/datascience/chatops/opsdroid_deployment/rasa_endpoint.yml
        expose: 
         - "5005"
        networks:
          - opsdroid         
        command: >
         run 
         --enable-api 
         -m /home/ubuntu/datascience/chatops/opsdroid_deployment/models/slate_faq_model_v5.tar.gz 
         --endpoints /home/ubuntu/datascience/chatops/opsdroid_deployment/rasa_endpoint.yml 
         -t rasa_secret_token -vv
        restart: always

    opsdroid:
        image: technolutions/opsdroid-image:dev_v3
        env_file:
         - /home/ubuntu/datascience/chatops/opsdroid_deployment/opsdroid_secrets.env
        depends_on:
          - rasa
        links:
          - rasa
        networks:
          - opsdroid
        ports:
        - target: 8080
          published: 8080
          protocol: tcp
        expose:
         - "8080"
        volumes:
         -  opsdroid:/root/.config/opsdroid:ro
         - "/home/ubuntu/datascience/chatops/opsdroid_deployment/skills/__init__.py:/skills/__init__.py:ro"
         - "/home/ubuntu/datascience/chatops/opsdroid_deployment/sec/opsdroid-cert.pem:/sec/opsdroid-cert.pem:ro"
         - "/home/ubuntu/datascience/chatops/opsdroid_deployment/sec/opsdroid-private-key.pem:/sec/opsdroid-private-key.pem:ro"
         - "/home/ubuntu/datascience/chatops/opsdroid_deployment/config/configuration.yaml:/configurations/configuration.yaml:ro"
        configs:
         - source: opsdroid_conf
           target: /root/.config/opsdroid/configuration.yaml
        command: >
         start
         -f /configurations/configuration.yaml
        deploy:
          restart_policy:
            condition: any
            delay: 10s
            window: 60s
    ngrok:
      image: wernight/ngrok:latest
      env_file:
       - /home/ubuntu/datascience/chatops/opsdroid_deployment/opsdroid_secrets.env
      ports:
      - target: 4040
        published: 4040
        protocol: tcp
      expose:
        - "4040"       
      environment:
        NGROK_PROTOCOL: http
        NGROK_PORT: opsdroid:8080
        NGROK_AUTH: ${NGROK_AUTH}
        NGROK_USERNAME: ${NGROK_USER}
        NGROK_HOSTNAME: ${NGROK_HOSTNAME}
        NGROK_REGION: us
      depends_on:
        - opsdroid
      links:
        - opsdroid
      networks:
        - opsdroid
networks:
  opsdroid:
    driver: overlay
    attachable: true

configs:
  opsdroid_conf:
    file: /home/ubuntu/datascience/chatops/opsdroid_deployment/config/configuration.yaml

volumes:
  opsdroid:

Are you sure? Docker Swarm doesn’t support links, but you shouldn’t use links with Docker Compose either. They have been a legacy feature for years now.

If you want to run your containers in a swarm cluster, you need to use docker stack deploy -c docker-compose.yml yourstackname

Then you should see the warning when the stack starts:

Ignoring unsupported options: links

Containers on the same network should see each other without links.
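
For example, on the shared opsdroid overlay network the service name doubles as a DNS name, so the chatbot can reach Rasa simply at rasa:5005. Purely as an illustration (I don’t know your opsdroid configuration.yaml, so the exact keys may differ), the relevant bit would look something like this:

parsers:
  rasanlu:
    # "rasa" is the compose service name; the overlay network's DNS resolves it
    url: http://rasa:5005
    token: rasa_secret_token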


Hi @rimelek ,

Thanks again for your feedback, I appreciate it.
Yes, I am running Docker Swarm, and I have seen that message.
So links and restart are deprecated, and I can take those options out.
Also, it does warn me about exposing ports on my containers, but do you need to do that if you intend to expose the service to the internet?

Thanks again,

-Brian

Links were a necessity in the good old docker run days with the default bridge network, where service discovery is not available.

Each user-defined network has built-in service discovery based on DNS.

You can lose the expose declarations as well; they have no effect. Expose doesn’t do anything by itself, except for linked containers (which you don’t want to use) and -P in docker run to publish all ports. Apart from that it is purely documentational and does exactly nothing. The “ports” element is what actually publishes a container port on a host port. You will still have to take care of bringing “the internet to the host” yourself.
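
To make that concrete, here is roughly what your opsdroid service could be trimmed down to (a sketch based on your compose file, with links and expose dropped; only ports actually publishes anything):

    opsdroid:
        image: technolutions/opsdroid-image:dev_v3
        networks:
          - opsdroid
        ports:
        - target: 8080       # container port
          published: 8080    # bound on the host, on every swarm node
          protocol: tcp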

True for “links”. Regarding restart: it is one of the elements that was moved underneath the “deploy” element in the v3 schema; see: Compose file version 3 reference | Docker Documentation
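
So for your rasa service, restart: always would become something along these lines (a sketch; the delay and window values are just examples, mirroring what you already use for opsdroid):

    rasa:
        image: rasa/rasa:2.8.9-full
        deploy:
          restart_policy:
            condition: any
            delay: 10s
            window: 60s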


Okay cool, thanks @meyay, I appreciate it!
The other thing I’m still looking into is how to properly expose containers that are on an overlay network to the internet. One of my containers connects to a service that requires it to be exposed to the internet, so I’m still trying to understand how to go about that in a Docker Swarm config.

With swarm deployments (= docker stack deploy -c), published ports use the ingress routing mesh, which binds the published port on every swarm node.

Though this is where the responsibility of swarm ends: everything between your public internet IP and the published ingress port is your responsibility.
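
For example, once the stack is up you can check what swarm actually published (assuming you deployed with docker stack deploy -c docker-compose.yml yourstackname, and that curl is available on the machine you test from):

docker service ls                       # the PORTS column should show *:8080->8080/tcp for the opsdroid service
curl http://<any-swarm-node-ip>:8080    # the ingress mesh routes this to the opsdroid task, whichever node it runs on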


Thanks @meyay, I appreciate the feedback!
So once you specify a published port on a container in swarm mode, that port is bound on every swarm node. Cool beans. So if it’s not something within Docker Swarm, perhaps it’s a firewall issue then?

Let me rephrase it: once you publish a container port (referred to as the target port in the compose v3 long syntax), it will be published on all swarm nodes. If your nodes already have a public internet IP (which, for security reasons, I hope they don’t), then they should be accessible from the internet, unless firewalls prevent it.
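
If you suspect a firewall, a quick check on a node itself could look like this (assuming a Linux host; adjust for whatever firewall you actually run, e.g. ufw, firewalld, or a cloud security group):

sudo ss -lnt | grep 8080    # dockerd should be listening on the ingress-published port on every node
sudo ufw status             # if ufw is active, make sure 8080/tcp (and any other published ports) are allowed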


Awesome, thanks @meyay!