Docker was working, and now the service won't start on any of the hosts that were in the swarm

I had a swarm running on 3 hosts. I removed the service so that I could update it using:

docker service rm servicename…

I started the service again from one of the nodes, but got a warning that it didn’t have credentials for the repository, so different nodes might end up running different versions of the image. I immediately removed the service again. Once that happened, the Docker engine service on 2 of the 3 nodes crashed. I’ve rebooted the servers and tried starting the service manually, but it will not stay running. Here’s the pertinent info, along with what I see in the event log:

OS Name: Microsoft Windows Server Datacenter
OS Version: 10.0.17134 N/A Build 17134
Docker version 18.03.1-ee-3, build b9a5c95

Event log:
Windows default isolation mode: process
Loading containers: start.
Restoring existing overlay networks from HNS into docker
Loading containers: done.
Docker daemon [version=18.03.1-ee-3 commit=b9a5c95 graphdriver(s)=windowsfilter]
Listening for connections [module=node node.id=m1iw1gs7f7olccvrt7ka76r6m proto=tcp addr=192.168.214.13:2377]
Listening for local connections [proto=pipe addr=\\.\pipe\control.sock module=node node.id=m1iw1gs7f7olccvrt7ka76r6m]
manager selected by agent for new session: {m1iw1gs7f7olccvrt7ka76r6m 192.168.214.13:2377} [module=node/agent node.id=m1iw1gs7f7olccvrt7ka76r6m]
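(For reference, those entries come from the Windows Application event log; something along these lines pulls the recent docker entries, with the -Newest count being arbitrary:

Get-EventLog -LogName Application -Source docker -Newest 25

)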

After that last entry, the service just stops on both machines. Has anyone seen this before? Is there a way to force it to ignore the swarm it was previously in and just start stand-alone?
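I assume that would mean running something like this on each affected node:

docker swarm leave --force

and then re-initialising the swarm with docker swarm init / docker swarm join, but I’m not sure that helps when the daemon itself won’t stay running.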

We ended up deleting everything under C:\ProgramData\Docker and got the docker service to run again.
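For anyone who finds this later, the reset amounted to roughly the following (my reconstruction, in elevated PowerShell). Be aware that it wipes all images, containers, and swarm state on the node, and some of the windowsfilter layer folders may need ownership/ACL changes before they will delete:

Stop-Service docker
Remove-Item -Recurse -Force C:\ProgramData\Docker
Start-Service docker

After that the swarm has to be rebuilt (docker swarm init on the first manager, docker swarm join-token worker for the join token, docker swarm join on the other nodes) and the service re-created. As for the original warning, running docker login on the node that creates the service and passing --with-registry-auth to docker service create / docker service update should avoid it, since the registry credentials are then forwarded to the agent nodes.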