Hi,
Actually, I’m having (almost) the same issue with Spark (while running github.com/Logimethods/docker-nats-connector-spark/blob/underscore/compose/docker-compose.yml):
```
org.apache.spark.SparkException: Job aborted due to stage failure: Task creation failed: java.lang.IllegalArgumentException: Illegal executor location format: executor_spark-slave2.compose_default_0
```
The Spark code that raises that exception is the following one, where it is specified that:

```scala
// We identify hosts on which the block is cached with this prefix. Because this prefix contains
// underscores, which are not legal characters in hostnames, there should be no potential for
// confusion. See RFC 952 and RFC 1123 for information about the format of hostnames.
```
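To illustrate what seems to be going on (a sketch of my own, not Spark’s actual parsing code): the executor location string has the shape `executor_<host>_<executorId>`, so any extra underscore coming from the hostname breaks it apart into too many pieces. The class and method names below are hypothetical.

```java
// Hypothetical sketch of why an underscore in the hostname breaks the
// "executor_<host>_<executorId>" location format: splitting on "_" then
// yields more parts than the parser expects. NOT Spark's real code.
public class ExecutorLocationCheck {

    // Returns true if the location splits cleanly into exactly three parts:
    // the "executor" prefix, the host, and the executor id.
    static boolean parsesCleanly(String location) {
        String[] parts = location.split("_");
        return parts.length == 3 && parts[0].equals("executor");
    }

    public static void main(String[] args) {
        // A legal hostname (no underscores) parses fine:
        System.out.println(parsesCleanly("executor_spark-slave2_0"));                 // true
        // The Docker Compose network suffix introduces extra underscores:
        System.out.println(parsesCleanly("executor_spark-slave2.compose_default_0")); // false
    }
}
```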
Note that I made sure none of my own names use “_”; the “compose_” prefix is automatically added by Docker Compose to the network name (“default”, or the one specified in the docker-compose.yml file).
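In case it helps others hitting this, one possible workaround (an assumption on my side, not something documented by Spark) would be to pre-create a Docker network with an underscore-free name and point the Compose file at it as an external network, so Compose does not generate the `compose_default` name at all. The network and image names below are placeholders.

```yaml
# Hypothetical docker-compose.yml (v2 format) workaround sketch:
# use a pre-created, underscore-free network instead of the generated
# "<project>_default" one. Create the network first with:
#   docker network create spark-net
version: '2'
services:
  spark-master:
    image: some/spark-image   # placeholder image name
networks:
  default:
    external:
      name: spark-net
```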
As with the OP, the code that eventually throws the exception is called from a Java application. I tried to run some Spark Streaming code (different from this one) from the Spark shell linked to the same kind of Docker-based Spark cluster without issue.
Also, the code was working perfectly well when Spark was set to run locally:

```java
new SparkConf().setAppName("NATS Data Processing").setMaster("local[2]");
```

but it fails when set to make use of the cluster:

```java
new SparkConf().setAppName("NATS Data Processing").setMaster("spark://spark-master:7077");
```
docker version (native Docker on Mac):

```
Client:
 Version:      1.11.1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   5604cbe
 Built:        Wed Apr 27 00:34:20 2016
 OS/Arch:      darwin/amd64

Server:
 Version:      1.11.1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   8b63c77
 Built:        Sat May 28 11:54:55 2016
 OS/Arch:      linux/amd64
```