
Why, when I restart an EC2 VM in a Swarm cluster, are new VMs created and launched?

Expected behavior

When I restart an EC2 VM in the Swarm cluster, that VM should simply restart and nothing more should happen.

Actual behavior

When I restart an EC2 VM in the Swarm cluster, a new VM instance is created and launched, and the VM I restarted ends up terminated/deleted.

Sometimes those new VM instances can't join the cluster, and in that mess I end up deleting the whole cluster just to create a new, clean one.

Additional Information

I’m using:
Docker for AWS
Docker Version: 17.05 Stable Channel
Sometimes I use 1 manager and 3 workers, and sometimes 3 managers and 1 worker.
VM size: t2.micro for the 3 workers and t2.small for the 1 manager
And sometimes:
VM size: t2.small for the 1 worker and t2.micro for the 1 manager

Why do I need to restart a VM instance in the cluster?

I don't know why, but sometimes the cluster becomes very slow: some Docker services stop responding, some services keep trying to restart, and everything slows down. Even an SSH shell session on the manager is sluggish.

One day this happened at dawn, and on another day I found that all the services had been restarted and the whole cluster had been shaken up.

This discussion might have more details: https://github.com/docker/for-aws/issues/52#issuecomment-315379119

What's probably happening when you restart instances (which I assume you do from the EC2 instance list) is that the AWS Auto Scaling group (ASG) decides the restarting instance is unhealthy because it stops responding to health checks, so it terminates it and launches a replacement.

You might want to try detaching the node or putting it into standby in the ASG before restarting it.
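
As a rough illustration, here is a minimal sketch of that standby-then-reboot workflow using boto3. The instance ID and Auto Scaling group name below are placeholders, not values from your setup:

```python
# Sketch: put an instance into ASG Standby before rebooting it, so the ASG
# does not treat the reboot as a health-check failure and replace the node.
import boto3

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

INSTANCE_ID = "i-0123456789abcdef0"   # placeholder: the node you want to restart
ASG_NAME = "my-swarm-workers-asg"     # placeholder: the ASG that owns the node

# Enter Standby so the ASG stops health-checking (and replacing) the instance.
autoscaling.enter_standby(
    InstanceIds=[INSTANCE_ID],
    AutoScalingGroupName=ASG_NAME,
    ShouldDecrementDesiredCapacity=True,
)

# Reboot the instance while it is in Standby.
ec2.reboot_instances(InstanceIds=[INSTANCE_ID])

# ...wait for the instance to come back and rejoin the swarm, then
# return it to service in the ASG.
autoscaling.exit_standby(
    InstanceIds=[INSTANCE_ID],
    AutoScalingGroupName=ASG_NAME,
)
```

The same steps can of course be done by hand in the EC2 console (Auto Scaling group → set the instance to Standby, reboot, then exit Standby).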

Alright, I will test that. As for the slow cluster, I now know it's because I'm using t2.micro instances: those instances run on CPU credits, and when my cluster has to work hard the credits run out, so the VMs stay slow for quite a while.
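
If it helps anyone else, here is a small sketch (boto3 again, instance ID is a placeholder) for checking how many CPU credits a burstable t2 instance has left, using the CloudWatch CPUCreditBalance metric:

```python
# Sketch: read the CPUCreditBalance metric for a t2 instance from CloudWatch.
# A balance near zero means the instance is throttled to its baseline CPU.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")
INSTANCE_ID = "i-0123456789abcdef0"   # placeholder

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    StartTime=datetime.utcnow() - timedelta(hours=3),
    EndTime=datetime.utcnow(),
    Period=300,               # 5-minute datapoints
    Statistics=["Average"],
)

# Print the credit balance over the last few hours, oldest first.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```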