When I restart an EC2 VM in my Swarm cluster, I expect that VM to be restarted and nothing more to happen.
Instead, a new VM instance is created and launched, and the VM I restarted ends up terminated/deleted.
Sometimes the new instance cannot join the cluster, and after ending up in this mess I have deleted the whole cluster just to create a new clean one.
Docker for AWS
Docker version: 17.05, Stable channel
Sometimes I use 1 manager and 3 workers, and sometimes 3 managers and 1 worker.
VM sizes: t2.micro for the 3 workers and t2.small for the 1 manager
And sometimes:
VM sizes: t2.small for the 1 worker and t2.micro for the 1 manager
Why do I need to restart a VM instance in the cluster?
I don’t know why, but sometimes the cluster becomes very slow: some Docker services stop responding, some services start trying to restart, and everything slows down. Even an SSH shell on the manager becomes very slow.
One day this happened at dawn, and another day I found that all the services had been restarted and the whole cluster had been shaken up.
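When the cluster turns slow like this, a few standard checks on a manager node can narrow down the cause. This is only a hedged sketch of what I run (it assumes SSH access to a manager; the commands are skipped where the Docker CLI is not present):

```shell
# Sketch of swarm health checks; only runs docker where the CLI exists.
if command -v docker >/dev/null 2>&1; then
  docker node ls            # look for nodes marked Down or Unreachable
  docker service ls         # replica counts: e.g. 0/3 means tasks are failing
  docker stats --no-stream  # CPU/memory pressure: t2.micro has 1 vCPU, 1 GiB RAM
fi
echo "diagnostics attempted"
```

On t2-class instances in particular, exhausted CPU credits or memory pressure are plausible reasons for the whole cluster (including the SSH shell) to feel sluggish, so `docker stats` is worth checking first.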