In our test environment we run a tool in a Swarm cluster that specifically can have only a single replica; it is not cluster-aware in any way. The tool is created as a Docker service with a replica count of one. Whenever the node fails, the container is immediately rescheduled by Swarm onto a different node and is usually usable again within a few seconds. Further replicas would corrupt the database files (it uses some sort of SQLite database).
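As a rough sketch, such a single-replica service could be created like this (service name, image, volume and port are placeholders, not our actual setup):

```shell
# Single replica only: Swarm reschedules the one task on node failure.
# The named volume keeps the SQLite files; without shared storage the
# rescheduled task starts on a fresh volume on the new node.
docker service create \
  --name legacy-tool \
  --replicas 1 \
  --mount type=volume,source=tool-data,target=/var/lib/tool \
  --publish published=8080,target=8080 \
  legacy-tool-image:latest
```

Never scale such a service above one replica; two tasks writing the same SQLite files is exactly the corruption scenario described above.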
Like I already wrote, for Swarm stacks/services the ingress network spans all nodes and routes incoming traffic to the node and container the service is running on, regardless of which node you entered the ingress network on.
The problem that your client application needs to know all nodes and must perform health checks (to not accidentally send traffic to an unhealthy node) can be eliminated by using a load balancer (nginx does layer 4 + 7). As far as I know, it should even be possible to run a keepalived container on all three nodes and let it manage an additional virtual IP. This would at least provide a static entry IP into your Docker Swarm cluster. Once an incoming request reaches the cluster, the ingress network takes care of forwarding it to the target container on the node it is running on.
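A minimal keepalived configuration for one of the three nodes could look roughly like this (interface name, router id, priority and VIP are assumptions; the other two nodes would run the same instance with state BACKUP and lower priorities):

```
# /etc/keepalived/keepalived.conf (sketch)
vrrp_instance swarm_vip {
    state MASTER            # BACKUP on the other two nodes
    interface eth0          # interface that carries the VIP
    virtual_router_id 51
    priority 100            # e.g. 90 / 80 on the other nodes
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24    # the static entry IP for the cluster
    }
}
```

If the node holding the VIP fails, VRRP moves the address to the next-highest priority node, and from there the ingress network takes over as described.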
What you describe as cluster-aware is an active-passive replica set. Clustered software usually allows all nodes to run actively, though most elect a leader and forward write operations to it, or at least serve read operations themselves (Raft, ZooKeeper, Paxos). There are not many consensus algorithms that allow all nodes to write at once ("Blockchain", Egalitarian Paxos, Hedera Hashgraph). That said, if your software does not implement an active-passive replica set itself (e.g. MySQL does), I am confident it can't be run with more than one replica in Docker. If the application does support active-passive replication itself, the only thing a virtual IP does is prevent incoming traffic/events from reaching the passive instance. But if the passive instance still performs tasks and writes into the same filesystem, data corruption is not unlikely. It really depends on the application.
If your application already supports active-active replication or a real cluster mode, then operating it with Docker is feasible (we run ZooKeeper, Kafka and Consul clusters in Docker Swarm).
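To illustrate, one member of a three-node ZooKeeper ensemble in a stack file might look like this (service names, image tag, and placement labels are assumptions; the official zookeeper image reads ZOO_MY_ID and ZOO_SERVERS):

```
# Stack file fragment: one of three ZooKeeper services (zoo1/zoo2/zoo3).
version: "3.8"
services:
  zoo1:
    image: zookeeper:3.8
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
    deploy:
      replicas: 1
      placement:
        constraints: [node.labels.zoo == 1]   # pin each member to its node
```

Each ensemble member is its own one-replica service pinned to a node, so the application-level cluster protocol handles replication while Swarm only handles scheduling and networking.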