I’m trying to deploy my Apache Spark cluster across several machines running in Docker Swarm mode, using the docker-compose file as defined here.
I invoke docker stack deploy -c compose-file.yml spark_cluster on my swarm manager machine to deploy the services, but when I then run docker stack ps spark_cluster I get the following output:
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
iy255fvx5ub8 spark_cluster_master.1 sauloricci/docker-spark:latest manager-swarm Running Running 20 seconds ago
mrr6p9dmodh5 \_ spark_cluster_master.1 sauloricci/docker-spark:latest worker2-swarm Shutdown Rejected 35 seconds ago "invalid mount config for type…"
u1daipeekanv \_ spark_cluster_master.1 sauloricci/docker-spark:latest worker2-swarm Shutdown Rejected 40 seconds ago "invalid mount config for type…"
9yup3zxpk4ur \_ spark_cluster_master.1 sauloricci/docker-spark:latest worker2-swarm Shutdown Rejected 45 seconds ago "invalid mount config for type…"
is4dib7wmb61 \_ spark_cluster_master.1 sauloricci/docker-spark:latest worker1-swarm Shutdown Rejected 50 seconds ago "invalid mount config for type…"
y80py4s4hny8 spark_cluster_worker.1 sauloricci/docker-spark:latest manager-swarm Running Running about a minute ago
It seems the swarm only kept the tasks scheduled on my swarm manager node running and rejected the ones placed on my worker nodes.
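If it helps, I suspect it could be related to the volumes section of the service definition. A simplified, hypothetical entry (not my exact file, which is linked above) would look roughly like this:

    version: "3"
    services:
      master:
        image: sauloricci/docker-spark:latest
        volumes:
          # hypothetical bind mount; a relative host path like this would only exist on the manager
          - ./conf/master:/conf

but I’m not sure whether a bind mount like that is actually what the error is complaining about.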
How can I find the logs associated with this scenario? Where should I look for the logs related to this deployment? I’d also like to know what exactly the message in the ERROR column means.
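For what it’s worth, these are the commands I was planning to try next to get more detail (assuming --no-trunc is the right way to see the full error text):

    # show the full, untruncated error message for each task
    docker stack ps --no-trunc spark_cluster

    # inspect the individual service and its logs from the manager
    docker service ps --no-trunc spark_cluster_master
    docker service logs spark_cluster_master

    # on the worker nodes, check the Docker daemon logs (systemd hosts)
    journalctl -u docker.service

but I’m not sure whether that is the right place to look for why the tasks were rejected.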