Is it possible to implement volume replication and failover for a Postgres database in a Docker Swarm environment?
Is high availability and scalability also achievable?
If so, could you suggest a solution?
Thanks in advance.
Yes, it is possible to implement volume replication and failover for a Docker Postgres database in a Docker Swarm environment, and to achieve high availability and scalability.
Here’s an example setup:
- Create an NFS server: Set up an NFS server on a separate machine or virtual machine. The NFS server will provide a shared file system that is accessible from all nodes in the Swarm cluster.
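On the NFS host, the setup might look like the following sketch (assuming a Debian/Ubuntu machine, an export path of `/srv/nfs/postgres`, and a Swarm subnet of `10.0.0.0/24` — all of these are placeholders to adjust for your environment):

```shell
# Install the NFS server package (Debian/Ubuntu)
sudo apt-get install -y nfs-kernel-server

# Create the directory that will hold the Postgres data
sudo mkdir -p /srv/nfs/postgres

# Export it to the Swarm nodes' subnet (assumed 10.0.0.0/24 here);
# no_root_squash lets the postgres container user own its files
echo "/srv/nfs/postgres 10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)" | sudo tee -a /etc/exports

# Re-export all directories listed in /etc/exports
sudo exportfs -ra
```

The export options shown (`rw,sync,no_subtree_check,no_root_squash`) are common defaults for this kind of share; tighten them to match your security requirements.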
- Create a network: Create a network in Docker Swarm that will be used by the Postgres database containers.
$ docker network create --driver overlay --attachable my-network
- Create a volume: Create a volume in Docker Swarm that will be used to store the Postgres database data. The volume will be backed by the NFS file system.
$ docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=<NFS-SERVER-IP>,rw \
    --opt device=:/path/to/nfs/mount/point \
    my-postgres-data
- Deploy the Postgres database: Deploy the Postgres database to the Swarm cluster, using the volume created in the previous step.
$ docker service create --name my-postgres \
    --network my-network \
    --mount type=volume,source=my-postgres-data,destination=/var/lib/postgresql/data \
    --env POSTGRES_USER=myuser \
    --env POSTGRES_PASSWORD=mypassword \
    --replicas 1 \
    postgres
- Test failover: To test failover, stop or remove a node in the Swarm cluster that is running the Postgres database container. The Swarm manager will automatically start a new instance of the container on another node, and the NFS file system will ensure that the data is available to the new container.
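A controlled way to simulate a node failure is to drain the node instead of powering it off (these are standard Docker CLI commands; `<NODE-ID>` is a placeholder you fill in from the first command's output):

```shell
# Find which node is currently running the Postgres task
docker service ps my-postgres

# Drain that node: Swarm stops its tasks and reschedules them elsewhere
docker node update --availability drain <NODE-ID>

# Confirm the task has been restarted on another node
docker service ps my-postgres

# Return the node to service once the test is done
docker node update --availability active <NODE-ID>
```

Because the data lives on the NFS share rather than on the failed node's local disk, the rescheduled container mounts the same volume and picks up the existing database files.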
For a production environment, I found the article "Easy PostgreSQL Cluster Recipe Using Docker 1.12 and Swarm" quite helpful. Although it is quite old, it should give you a sufficient idea of how to achieve the solution.
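If you prefer `docker stack deploy` over individual `docker service create` commands, the same setup can be sketched as a compose file. This is a sketch, not a tested production config; the network, volume, and NFS options simply mirror the commands above, and `<NFS-SERVER-IP>` remains a placeholder:

```yaml
version: "3.8"

services:
  postgres:
    image: postgres
    networks:
      - my-network
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
    volumes:
      - my-postgres-data:/var/lib/postgresql/data
    deploy:
      replicas: 1

networks:
  my-network:
    driver: overlay
    attachable: true

volumes:
  my-postgres-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=<NFS-SERVER-IP>,rw"
      device: ":/path/to/nfs/mount/point"
```

Deploy it with `docker stack deploy -c docker-compose.yml my-postgres-stack`. Keeping everything in one file makes the network, volume, and service definitions easier to version-control than separate CLI commands.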