Underscore in domain names

Hi,

I ran into an issue yesterday and I’m not sure how much of it is a Docker or Compose issue, or whether I just shot myself in the foot.

My RabbitMQ container name had an underscore in it, and the Java application trying to connect to it was setting up its connection factory by passing it a java.net.URI object created roughly like so: new URI("amqp://guest:guest@" + <the other container’s name, read from an environment variable> + ":5672").

Deep down, java.net.URI parses the received String (something like “amqp://guest:guest@be_rabbitmq:5672”) and fails on the hostname part, which leaves the “host” field of the java.net.URI object null.
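A quick, self-contained sketch of what I mean (the class name is made up for the example):

import java.net.URI;

public class UnderscoreHostDemo {
    public static void main(String[] args) throws Exception {
        // Hostname with an underscore: the string parses, but the host part
        // is rejected, so getHost() comes back null and getPort() is -1.
        URI broken = new URI("amqp://guest:guest@be_rabbitmq:5672");
        System.out.println(broken.getHost());      // null
        System.out.println(broken.getPort());      // -1
        System.out.println(broken.getAuthority()); // guest:guest@be_rabbitmq:5672

        // Same URI with a hyphen instead of the underscore: parses as expected.
        URI ok = new URI("amqp://guest:guest@be-rabbitmq:5672");
        System.out.println(ok.getHost());          // be-rabbitmq
        System.out.println(ok.getPort());          // 5672
    }
}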

I found an interesting answer on whether hostnames and domain names can contain underscores:

For some reason, it never occurred to me that underscores could be dangerous in a hostname. Now that I think of it, I don’t remember ever seeing one anywhere.
Anyway, what is Docker’s or Compose’s stance on this? Are underscores allowed in container -> DNS names?


In much more detail, here’s what happened:

I had to manage a set of services in Docker. They all have to be part of the same docker-compose.yml, but there are two logical groups: backend and frontend. Because some components already had hyphens in their names, I decided to name the containers with the following convention: prefix_project-name, e.g. be_redis, be_rabbitmq, fe_the-service, fe_the-app.

This worked great for all of the backend stack (RabbitMQ, Elasticsearch, Redis, DynamoDB Local, etc.) and most of the front-end stack (Node.js apps, Nginx router).

However, when I added a Java application that needed to connect to RabbitMQ, it wouldn’t work.

I pinpointed the problem to that line: CachingConnectionFactory connectionFactory = new CachingConnectionFactory(new URI(rabbitMQUrl));

rabbitMQUrl value was: amqp://guest:guest@be_rabbitMQ:5672

When logging the state of the connection factory, it said that the host was localhost:
log.debug("DEBUG--- connectionFactory: "+connectionFactory); //output: DEBUG--- connectionFactory: CachingConnectionFactory [channelCacheSize=1, host=localhost, port=5672, active=true ...]

So I edited the code to explicitly set the host and logged again:
connectionFactory.setHost("be_rabbitMQ"); log.debug("DEBUG--- connectionFactory: "+connectionFactory); //output: DEBUG--- connectionFactory: CachingConnectionFactory [channelCacheSize=1, host=be_rabbitMQ, port=5672, active=true ...]

That was very intriguing, and by digging deeper, as said at the beginning of this thread, I found that java.net.URI parses the received String and fails on the hostname part, leaving the “host” field of the java.net.URI object null. The Java RabbitMQ ConnectionFactory protects itself against a null hostname and falls back to localhost when uri.getHost() == null.
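For what it’s worth, something along these lines should sidestep the problem entirely (just a sketch using the CachingConnectionFactory setters instead of the URI constructor; host, port and credentials are the ones from my setup above):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;

public class RabbitConfig {
    public static CachingConnectionFactory connectionFactory() {
        // Set the connection details directly so the underscore hostname
        // never goes through java.net.URI at all.
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
        connectionFactory.setHost("be_rabbitMQ");
        connectionFactory.setPort(5672);
        connectionFactory.setUsername("guest");
        connectionFactory.setPassword("guest");
        return connectionFactory;
    }
}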

Hi,
Actually, I’m having (almost) the same issue with Spark (while running github.com/Logimethods/docker-nats-connector-spark/blob/underscore/compose/docker-compose.yml):

org.apache.spark.SparkException: Job aborted due to stage failure: Task creation failed: java.lang.IllegalArgumentException: Illegal executor location format: executor_spark-slave2.compose_default_0

The Spark code that raises that exception is the following one:

where it is specified that

// We identify hosts on which the block is cached with this prefix. Because this prefix contains
// underscores, which are not legal characters in hostnames, there should be no potential for
// confusion. See RFC 952 and RFC 1123 for information about the format of hostnames.
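Just to illustrate the RFC 1123 rule that comment refers to, here is a rough check of my own (not Spark’s actual parsing code) applied to the names involved:

import java.util.regex.Pattern;

public class HostnameCheck {
    // Rough RFC 1123 label rule: letters, digits and hyphens only,
    // not starting or ending with a hyphen.
    private static final Pattern LABEL =
            Pattern.compile("[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?");

    static boolean isValidHostname(String host) {
        for (String label : host.split("\\.")) {
            if (!LABEL.matcher(label).matches()) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValidHostname("spark-slave2"));                 // true
        System.out.println(isValidHostname("spark-slave2.compose_default")); // false: "_" in a label
    }
}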

Note that I made sure none of my own names uses “_”; the “compose_” prefix is automatically added to the network name (“default”, or the one specified in the docker-compose.yml file).

As with the OP, the code that eventually throws the exception is called from a Java application. I tried to run some Spark Streaming code (different from this one) from the Spark Shell linked to the same kind of Docker-based Spark cluster without any issue.

Also, the code was working perfectly well when Spark was not set to make use of a cluster
(new SparkConf().setAppName("NATS Data Processing").setMaster("spark://spark-master:7077"))
but to run locally instead
(new SparkConf().setAppName("NATS Data Processing").setMaster("local[2]")).


docker version (native Docker on Mac)
Client:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 5604cbe
Built: Wed Apr 27 00:34:20 2016
OS/Arch: darwin/amd64

Server:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 8b63c77
Built: Sat May 28 11:54:55 2016
OS/Arch: linux/amd64

Update: I did try to run the same Spark Streaming code not as a Java application, but from the Spark Shell (running in a dedicated container), with the same outcome (Illegal executor location format: executor_spark-slave2.compose_default_0)…

I’m experiencing the same problems. I use docker-compose on Docker Swarm with an overlay network and try to communicate between the containers. Swarm auto-creates hostnames according to the scheme directory_container_number, e.g. test_haproxy_1, which resolve fine. But lots of tools complain about the use of underscores and just don’t work.
I tried different environments with Docker 1.11.1.

While Docker/Compose can’t do anything about people like me putting underscores in their container names, I think all tools (Compose, Swarm, etc.) should avoid underscores as much as possible. No?


FYI, the “Switch to using hyphens as a separator in hostnames” issue (#229) is still under discussion…
