
JMX monitoring of multiple tomcat containers in Docker EE

Hi, does anyone have a solution to the problem of JMX monitoring within a Docker EE swarm? Publishing the JMX ports and accessing them via the host node address does not work when the container running Tomcat can be started on any node in the cluster. Tying the container to a node is possible, so that the JMX port is reachable on a known IP address, but this seems to defeat the ability to run any container on any node. Multiple containers running Tomcat and JMX further complicate the issue.

Any input much appreciated.

Thanks

Rod

If JMX is an HTTP-based protocol, just put a reverse proxy in front of it and forward the traffic depending on the domain/host name.

In case JMX is not an HTTP-based protocol, there still might be a chance:
– enable TLS for your remote JMX ports (you will need a self-signed certificate per container, so it may make sense to generate those in the entrypoint script, as sketched below)
– put a load balancer capable of TCP passthrough and SNI in front of it
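
A minimal entrypoint sketch for the first point, assuming the official Tomcat image; the keystore path, password and port 9090 are placeholders, and authentication is left out only to keep it short:

```sh
#!/bin/sh
# Hypothetical entrypoint sketch: generate a per-container self-signed
# certificate, then start Tomcat with JMX remoting over TLS.
set -e

KEYSTORE=/usr/local/tomcat/conf/jmx-keystore.jks
STOREPASS=changeit

# One self-signed key pair per container, CN set to the container hostname.
keytool -genkeypair -alias jmx \
  -keyalg RSA -keysize 2048 -validity 365 \
  -dname "CN=$(hostname)" \
  -keystore "$KEYSTORE" -storepass "$STOREPASS" -keypass "$STOREPASS"

# Pin JMX and its RMI port to 9090 and enable TLS on both.
# Authentication is disabled here only to keep the sketch short.
CATALINA_OPTS="$CATALINA_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9090 \
  -Dcom.sun.management.jmxremote.rmi.port=9090 \
  -Dcom.sun.management.jmxremote.ssl=true \
  -Dcom.sun.management.jmxremote.registry.ssl=true \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Djavax.net.ssl.keyStore=$KEYSTORE \
  -Djavax.net.ssl.keyStorePassword=$STOREPASS"
export CATALINA_OPTS

exec catalina.sh run
```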

SNI is a TLS extension that lets the load balancer see which target domain is addressed without decrypting the TCP traffic. This information can be used to forward the still-encrypted TCP traffic to the target domain/container.

Here is an example of how to configure nginx with TCP passthrough and SNI: https://blog.le-vert.net/?p=224.
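
Roughly what the linked article describes, as a sketch; the host names, upstream IPs and the port are placeholders, and nginx needs to be built with the stream and stream_ssl_preread modules:

```nginx
# Stream (not http) context: route encrypted JMX traffic by SNI,
# without terminating TLS at the proxy.
stream {
    map $ssl_preread_server_name $jmx_backend {
        tomcat-a.example.internal  tomcat_a;
        tomcat-b.example.internal  tomcat_b;
        default                    tomcat_a;
    }

    upstream tomcat_a { server 10.0.1.11:9090; }  # placeholder container IPs
    upstream tomcat_b { server 10.0.1.12:9090; }

    server {
        listen 9090;
        ssl_preread on;          # read the SNI from the ClientHello only
        proxy_pass $jmx_backend; # pass the still-encrypted TCP stream through
    }
}
```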

In both cases, you will need valid name resolution for the domain/host names: either globally valid DNS entries or per-client entries in the local /etc/hosts of each machine.
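
The /etc/hosts variant on the machine running the JMX monitoring tool could look like this (the address and host names are placeholders for the load balancer and the two Tomcat instances from the sketch above):

```
# /etc/hosts on the machine running the JMX monitoring tool
203.0.113.10  tomcat-a.example.internal
203.0.113.10  tomcat-b.example.internal
```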

Thanks for getting back to me, but maybe I didn't explain the issue clearly.

In a non-containerized or virtualized world, JMX is monitored on a fixed port via a fixed IP address endpoint, e.g. servername:9090 or ip:9090. The JMX port is configurable for the JVM, and a JMX monitoring tool connects remotely to it. So far all good: we know which server and JMX port we are connecting to.
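
For context, the classic setup I mean looks roughly like this (host name and port 9090 are placeholders):

```sh
# Classic fixed-endpoint setup: JMX pinned to a known port on a known host.
CATALINA_OPTS="$CATALINA_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9090 \
  -Dcom.sun.management.jmxremote.rmi.port=9090 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.rmi.server.hostname=tomcat01.example.internal"
export CATALINA_OPTS

# The monitoring tool then connects to the fixed endpoint, e.g.:
#   jconsole tomcat01.example.internal:9090
```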

Now, multiple containers exposing JMX ports on the same worker node will each need to be published on a unique port. If those containers are tied to a node, that could work, as I would know which host IP and container I am monitoring.

If I do not force those containers to run on a dedicated node, they can start up on any worker node in the swarm. Then my JMX monitoring tool has “lost” the container, because the unique port assigned to it has moved to an unknown IP endpoint.

I am aware of the situation - that's why I proposed a TCP passthrough load balancer.

Other options:
– if possible, run the JMX monitoring agent in a container attached to the same container network
– run swarm services, publish the JMX ports in host mode (endpoint_mode: dnsrr cannot be combined with ports published through the ingress routing mesh) and declare deploy.endpoint_mode: dnsrr to enforce that the individual container IPs are resolved instead of a service VIP (a sketch covering both options follows)
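
A minimal stack file sketch combining both ideas; the image, network name and port are placeholders, and host-mode publishing is used because dnsrr does not work with ingress-published ports:

```yaml
version: "3.7"
services:
  tomcat:
    image: tomcat:9            # placeholder image
    networks:
      - monitoring
    ports:
      - target: 9090           # JMX port inside the container
        published: 9090
        protocol: tcp
        mode: host             # bypass the routing mesh; required with dnsrr
    deploy:
      mode: global             # one task per node, so the host port stays unambiguous
      endpoint_mode: dnsrr     # DNS returns the container IPs instead of a VIP

networks:
  monitoring:
    driver: overlay
    attachable: true           # lets a standalone monitoring container join this network
```

A monitoring agent attached to the same attachable overlay network can then resolve the service name and get the individual container IPs back instead of a single VIP.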