Exposing Mapped JMX Ports from Multiple Containers

We have not been able to figure out how to expose JMX ports from multiple containers running a similar process when a common JMX port for the process is mapped to different external ports.
To clarify by example …

A simple Java application is configured to expose its JMX port, with no authentication, using the
following Java system properties:

-Dcom.sun.management.jmxremote
-Djava.rmi.server.hostname=%TARGET%              (the IP address of the exposed VM running the process)
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.rmi.port=3968
-Dcom.sun.management.jmxremote.port=3968
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false

This works fine for a single instance of this process running in a container that uses the following
Docker port mapping. We can connect to the port via jconsole and see JMX beans:

  -p 3968:3968
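
For reference, this is roughly what jconsole does under the covers. The sketch below is a minimal
programmatic check of the same connection, assuming the Docker host is reachable as "docker-host"
(a placeholder) on the published port 3968 shown above:

    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class JmxConnectCheck {
        public static void main(String[] args) throws Exception {
            // Same endpoint jconsole uses: the host from java.rmi.server.hostname
            // and the published port ("docker-host" is a placeholder).
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://docker-host:3968/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection conn = connector.getMBeanServerConnection();
                System.out.println("MBeans visible: " + conn.getMBeanCount());
            }
        }
    }

If this prints an MBean count, jconsole should also be able to connect and browse the beans.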

There are lots of posts out there pointing out that the local.only and rmi.port properties
are needed to make JMX work at all (it took a while to figure this out).

However, if we run a second instance of the process, using the following port mapping,
the JMX data are not accessible. The port does respond, but jconsole cannot connect:

 -p 4968:3968

It seems that there must be something else required when the port is mapped to a different
external port, but we have not been able to figure out what. This seems like a pretty serious limitation
if we want to run an identical process multiple times and just map each instance to a different port.

Any ideas on how to solve this?
Has anyone else run into the same issue?

Hi rtview1,

I also ran into a similar situation, though not exactly the same. I am starting multiple containers with the same process, but the JMX ports are assigned in a loop, so the containers look like this:

Container_Name   Mapped JMX ports

Docker1          50001:50001
Docker2          50005:50002
Docker2          50005:50003
Docker2          50005:50004
Docker2          50005:50005

I am starting around 20 containers, but can only access the first 9 or 10 from outside using jconsole.

Appreciate any kind of help.

Yes, this sounds similar, but not exactly the same.
In our case, I can only see JMX data if the JMX port is defined and mapped exactly the same way,
i.e. using exactly the same port. If we use port mapping in Docker, we cannot even connect with jconsole
to the mapped port. It only connects if the mapped port is the same as the internal exposed port.

It is interesting that you are able to see some of the containers’ jmx data.
I wonder what you are doing differently. Perhaps this could be a clue.
Can you share your Dockerfile and docker command line ?

Ok.

Can you please tell me the following things:

What Linux distribution are you using for the Docker host?
What is the time gap between starting two containers?
Did you check that the assigned port is free before using it for the containers?
Can't you pass the JMX_PORT and RMI_PORT as Java opts?

We are using Oracle Linux v. 7 (based on RedHat 7.3)

We’ve started these containers manually at various times, so there doesn’t appear to be any correlation
with the time between starts. The case described above always fails.

Yes, the ports are free (and we've tried different ports).
We are using Java opts to set the options. All of the options
mentioned in the first post, like "-Dcom.sun.management.jmxremote.port=3968",
are defined in JAVA_OPTS inside the container when the containerized Java processes start.

The problem seems pretty clearly related to port mapping by docker.
I have no problem if I assign each java process its own unique JMX port inside the container,
and use a unique one-to-one mapping like “-p 3968:3968” when I launch each container.

The problem only shows up if I assign the same JMX port (e.g. 3968) inside each container and then try to
map it externally, using something like "-p 4168:3968" on one and "-p 4268:3968" on another.
The ports do show as "open" on the Linux host, but I cannot connect to them via jconsole.

If the ports are defined internal to the container using "-Dcom.sun.management.jmxremote.port=4168"
and the external mapping is "-p 4168:4168", then it works just fine. So clearly the ports are all available.
It just appears that the Docker mapping mechanism may not be working correctly for these JMX ports.

Note: mapping for other types of ports works fine! It is only JMX that shows this issue.
We use multiple ports for other purposes and they all map just fine. This seems to be a Docker/JMX issue.

Perhaps someone from Docker could comment on this ?

In my experiments trying to get JMX working in Docker I have found the same issue. You will find that 99.9% of examples and posts where JMX is working in Docker run the container and specifically map the container JMX port to the same port on the host. If you do this, everything works. If, as the poster above found, you leave Docker to NAT the ports or you specifically configure non-matching ports, then things do not work.

From what I can see by debugging the JMX client, this is because the client connects to the server on the NAT’ed address and asks for the URL of the RMI server. The URL returned contains the correct IP address specified by -Djava.rmi.server.hostname but the port is the port configured in the container; as it knows nothing about NAT’ed ports. The client then tries to connect to this URL and gets connection refused.

I cannot see any way around this yet other than writing a custom JMX server or custom client code, which I believe is possible as JMX code tends to be pretty pluggable with all sorts of customisations - I just haven’t done it yet.


Thanks for the comments, Jonathan …
We came to the same conclusion after studying it a bit more.
While it may be possible to work around this issue with some custom JMX code, we have chosen to define the ports inside the container using env variables … that way we can always use a one-to-one mapping. A little annoying, yes, but manageable.
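
For anyone who would rather wire this up in code than through JAVA_OPTS, the sketch below shows one way such an env-driven setup could look: the connector server reads its port from the environment and pins both the RMI registry port and the exported RMI server port to that same value, so each container can bind a unique port and be published one-to-one. The JMX_HOST and JMX_PORT variable names are just illustrative, not anything the posters above actually used.

    import java.lang.management.ManagementFactory;
    import java.rmi.registry.LocateRegistry;
    import java.util.HashMap;
    import javax.management.MBeanServer;
    import javax.management.remote.JMXConnectorServer;
    import javax.management.remote.JMXConnectorServerFactory;
    import javax.management.remote.JMXServiceURL;

    public class EnvDrivenJmxServer {
        public static void main(String[] args) throws Exception {
            // Hypothetical env vars injected by the container entrypoint, e.g.
            // docker run -e JMX_HOST=<host ip> -e JMX_PORT=4168 -p 4168:4168 ...
            String host = System.getenv("JMX_HOST");
            int port = Integer.parseInt(System.getenv("JMX_PORT"));

            // The hostname baked into the RMI stubs handed back to clients.
            System.setProperty("java.rmi.server.hostname", host);

            // One port for both the RMI registry and the exported RMI objects,
            // mirroring what jmxremote.port + jmxremote.rmi.port do above.
            LocateRegistry.createRegistry(port);
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi://" + host + ":" + port
                    + "/jndi/rmi://" + host + ":" + port + "/jmxrmi");
            JMXConnectorServer server = JMXConnectorServerFactory.newJMXConnectorServer(
                    url, new HashMap<String, Object>(), mbs);
            server.start();
            System.out.println("JMX available at " + url);
        }
    }

This does not get around the NAT limitation by itself; it only makes the per-container one-to-one mapping easy to automate.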

Hi,

I must learn to read the poster’s name better - I didn’t realise who you were until you replied :smile:

So as you know I work for Oracle and I’m investigating getting Oracle Coherence running on Docker as we have customers asking about it. After much trial and error I can get most things working. Docker’s networking support being the main issue.

Getting to the point: I can make JMX work if I don't use RMI but use JMXMP instead. It appears that JMXMP is better suited than RMI here, and it only requires a single exposed port, so it works well on Docker. I have had it working with Coherence in about five lines of code, and Coherence makes it easy for me to drop that into my server-side (containerised) processes. See my blog post on it here: TheGridMan.com. Using JMXMP relies on the client (such as your RTView) also having the JMXMP library, but that shouldn't be an issue.
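
For anyone curious what that looks like without Coherence, this is roughly the shape of a JMXMP connector server, assuming the optional JMXMP provider jar (the old jmxremote_optional / OpenDMK library) is on the classpath; port 9001 is just an example:

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.remote.JMXConnectorServer;
    import javax.management.remote.JMXConnectorServerFactory;
    import javax.management.remote.JMXServiceURL;

    public class JmxmpExample {
        public static void main(String[] args) throws Exception {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            // JMXMP uses a single plain TCP socket, so one -p mapping is enough
            // and the published port does not have to match the internal one.
            JMXServiceURL url = new JMXServiceURL("jmxmp", "0.0.0.0", 9001);
            JMXConnectorServer server = JMXConnectorServerFactory.newJMXConnectorServer(
                    url, null, mbs);
            server.start();
            System.out.println("JMXMP connector listening on " + url);
            Thread.currentThread().join(); // keep the demo process alive
        }
    }

The client would then connect to service:jmx:jmxmp://<host>:<mapped port>, and it also needs the JMXMP jar on its classpath.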

Well, you made it easier with the picture !
I’d love to hear more about your work with Coherence and Docker.
We’ve dockerized just about everything here and it is great (mostly).
I’ll touch base with you outside this thread!

You should be aware that JMX over RMI communicates its host and port inside the protocol. So when you map a port to your host, you must set the JMX server's hostname (java.rmi.server.hostname) to the host and its port to the mapped port, and the internal JMX port must be exactly the same as the external one. -p 1099:2099 is not going to work.

I am facing the same problem: binding different external ports to the same service port across containers on the same node.
Can anyone suggest how to resolve this?