Container works when running standalone but not as a service in a swarm

I’m setting up a logstash container to receive and parse syslog messages shipped from a DHCP server. I can verify that these messages are being received on the docker host, but not in the logstash container running on that host. Docker is running in swarm mode with one manager and two workers, and the logstash container was deployed as part of a stack. When running as a standalone container the syslog messages reach the container, but when I deploy it to the swarm they no longer do. The odd thing is that, even though this isn’t working when deployed in the swarm, I can log into the DHCP server and, using echo and nc, successfully send messages to the logstash container running in the swarm.

To put it more simply: sending syslog messages from the remote host to the logstash container doesn’t work when the container is deployed as a service in a swarm (even though test messages sent from the same remote host to the same port are received), but everything works as expected when the same image is run as a standalone container.
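
For reference, the manual test from the DHCP server looks roughly like this (the hostname and message text are just placeholders):

$ echo "<30>test message" | nc -u -w1 docker-host.example.com 514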

I’m sure there’s something simple I’m missing here. Any advice is greatly appreciated.

Also, this is a custom image, and the service is constrained to run only on the manager node, which has the custom image available to it.

Are all of these nodes linux machines?

Is it safe to assume that the purpose of this thread is just to share your experience with others?

Yes, the manager is Ubuntu 18.04; the workers are CentOS 7.something.

Correction…the 2 workers are Red Hat 7.7

No, the purpose is to obtain advice on how to proceed in troubleshooting.

Oh, judging by the details you have provided so far, there is nothing to really work with… which led me to the understanding that you just want to share your experience with us.

Please share the service declaration in the compose.yml you use when deploying the container as a service, and the command you use when running it as a plain container.

Compose file used when deployed as a service:

version: '3.4'
services:
  logstash:
  image: logstash-dhcp-syslog
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
      volumes:
        - /etc/logstash:/usr/share/logstash/pipeline
      ports:
        - "514:514/udp"

Command when run standalone:

$ docker build -t logstash-dhcp-syslog . && docker run -d --net=host --name logstash-dhcp-syslog logstash-dhcp-syslog

Dockerfile used in the build:

FROM logstash:7.6.2
WORKDIR /usr/share/logstash
USER root
COPY logstash.conf pipeline/logstash.conf 
COPY logstash.yml config/logstash.yml
EXPOSE 514/udp
ENTRYPOINT logstash
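
For context, the pipeline mounted from /etc/logstash is essentially just a UDP input listening on 514; roughly along these lines (a simplified sketch, the filter and output details are omitted here):

input {
  udp {
    port => 514
    type => "syslog"
  }
}

output {
  stdout { codec => rubydebug }
}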

Is there a network declaration that you removed from your service? Or are you using the implicitly created default network for the stack?

Your containers simply use different types of networks. With --net=host, the container shares the network namespace of the host and thus behaves like your host would, while your swarm service container uses a port published on a bridge or overlay network.
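
As a side note: if you would rather keep the overlay network and only bypass the routing mesh for this one port, compose file versions 3.2 and up also support host-mode publishing via the long port syntax; a rough sketch (untested here):

    ports:
      - target: 514
        published: 514
        protocol: udp
        mode: host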

Try this declaration please:

version: '3.4'
services:
  logstash:
    image: logstash-dhcp-syslog
    deploy:
    replicas: 1
    placement:
      constraints:
        - node.role == manager
    restart_policy:
      condition: on-failure
  volumes:
    - /etc/logstash:/usr/share/logstash/pipeline
  networks:
    myhost: {}

networks:
  myhost:
    external:
      name: "host"

I added a declaration for a network, which uses the host’s network, and added it to your service declaration.
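
Once redeployed, you can check on the manager that logstash is actually bound to the host’s UDP port, for example with something like:

$ ss -ulnp | grep 514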

I had an indentation typo when I posted my compose file. Notice the volumes and ports sections:

version: '3.4'
services:
  logstash:
    image: logstash-dhcp-syslog
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
    volumes:
      - /etc/logstash:/usr/share/logstash/pipeline
    ports:
      - "514:514/udp"

That said, is what you posted supposed to be:

version: '3.4'
services:
  logstash:
    image: logstash-dhcp-syslog
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
    volumes:
      - /etc/logstash:/usr/share/logstash/pipeline
    networks:
      myhost: {}

networks:
  myhost:
    external:
      name: "host"

With services.logstash.networks instead of services.networks?

Yep, indentation problem. Please try it and report back.

Well, I’ll be… that does it. You are awesome, thank you so much!

Question: I’m deploying back-end services along with this container, and the logstash container needs to be able to reach them. Should I add a second network, attach it to the other back-end containers, and attach both networks to this logstash container?

If you have more than a single manager, you might want to consider switching from a replicated deployment to a global mode deployment.
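
Roughly, the deploy section would then look like this (a sketch; with mode: global there is one task per matching node and no replicas setting):

    deploy:
      mode: global
      placement:
        constraints:
          - node.role == manager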

Please try your idea, and if it doesn’t solve your problem, report back exactly what you did and what the actual outcome was.