Automated/cron backups of Postgres database?

I have a Rails app running in Docker containers: one container for the app itself and one for the Postgres database, which uses a Docker volume for storage. I deploy them on EC2 with an Ansible playbook. I have a database backup script I'd like to run as a cron job twice a day. Here's my script:

#!/bin/bash
# Location to place backups, as seen from inside the throwaway container.
docker_backup_dir="/postgres-backups"

# Directory on the host filesystem where backups will be written.
mounted_backup_volume="/home/vagrant/postgres-backups"

# String to append to the names of the backup files.
backup_date=$(date +%Y-%m-%d__%H%M__UTC%z)

# Number of days to keep copies of your databases.
number_of_days=30

databases=$(docker exec postgres psql -l -t -U postgres | cut -d'|' -f1 | sed -e 's/ //g' -e '/^$/d')
for i in $databases; do
  if [ "$i" != "template0" ] && [ "$i" != "template1" ]; then
    echo "Dumping $i to $docker_backup_dir/${i}_${backup_date}"
    # Note: no -it here. Cron jobs have no TTY, so docker run -t would fail.
    docker run --link postgres:postgres \
      -v "$mounted_backup_volume":"$docker_backup_dir" \
      --rm postgres:9.4 \
      sh -c "exec pg_dump -F c -v -w -U postgres -h \$POSTGRES_PORT_5432_TCP_ADDR ${i} > ${docker_backup_dir}/${i}_${backup_date}"
  fi
done

# Remove backups older than $number_of_days days.
find "$mounted_backup_volume" -type f -mtime +"$number_of_days" -exec rm -f {} \;

Basically, my intent is, for each database, to have it spin up a new container that links to the existing database container, run pg_dump, and then remove the container.

My problem is that my Ansible playbook sets a password for the postgres user (which seemed like a good idea), but I don’t know how to run the backups from a cron script without having to put the password in the script itself, which seems like a bad idea.

Is there a more Docker-like way to go about this?

I'm not an expert in DevOps or the Docker ecosystem, but you can certainly set the required authentication credentials in the environment cron runs with, or store them in a file.
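For Postgres specifically, libpq's password file is the standard way to do "in some file": put the credentials in a 0600-mode .pgpass and pg_dump -w will read it instead of prompting. A minimal sketch, assuming the container layout from the question; the file path and the CHANGE_ME placeholder are illustrative, not from the original setup:

```shell
#!/bin/sh
# Sketch: keep the password in a libpq password file instead of the script.
# The format is hostname:port:database:username:password; "*" matches anything.
pgpass_file="$HOME/.pgpass-backup"
printf '*:5432:*:postgres:CHANGE_ME\n' > "$pgpass_file"   # placeholder password
chmod 600 "$pgpass_file"   # libpq silently ignores the file unless it is 0600

# Then mount it read-only into the throwaway backup container and let
# pg_dump -w pick it up (commented out here; needs the running setup):
# docker run --rm --link postgres:postgres \
#   -v "$pgpass_file":/root/.pgpass:ro \
#   -v /home/vagrant/postgres-backups:/postgres-backups \
#   postgres:9.4 \
#   sh -c 'exec pg_dump -F c -w -U postgres -h "$POSTGRES_PORT_5432_TCP_ADDR" mydb > /postgres-backups/mydb.dump'
```

Your Ansible playbook can template this file once at deploy time (e.g. from a vault), so the cron script itself never contains the password.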

How are you getting the credentials (password) into the PostgreSQL container in the first place?

If it's through environment variables (e.g. docker run -e), you're already exposing the credentials to anyone with Docker API access (which should only be root / sudoers); just check the output of docker inspect dbcontainer and you'll see the environment variables in cleartext. So I'd suggest you simply ensure the file where the password is stored (the cron script, etc.) is readable and writable only by the superuser and/or the docker group. Then just invoke your container like you mentioned.
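To make that concrete, here is a sketch of both halves: what the Docker API already reveals, and locking down the file that holds the password. The script path is hypothetical; on a real host it would likely be root-owned:

```shell
#!/bin/sh
# Anyone with Docker API access can read container env vars in cleartext
# (commented out: needs the running container):
#   docker inspect --format '{{.Config.Env}}' postgres

# Mitigation: restrict the file that embeds or reads the password.
backup_script="$HOME/pg-backup.sh"   # hypothetical location
touch "$backup_script"
chmod 700 "$backup_script"   # owner-only: other users cannot read the password
# Or restrict to root plus the docker group instead:
# sudo chown root:docker "$backup_script" && sudo chmod 750 "$backup_script"
```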

You could also just docker exec into the existing container and run the dump there. But anyone who can invoke docker commands likely has access to that password already, so protect Docker access first and foremost. In the future this may get easier with some kind of built-in secrets management for Docker, but today you are mostly limited to the existing Unix/Linux admin security model.
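A sketch of the docker exec variant, assuming the container is named postgres and a hypothetical database myapp_production; pg_dump then connects over the container's local socket, which the official image's pg_hba.conf typically trusts, so no password is needed at all:

```shell
#!/bin/sh
# Sketch: dump from the existing container instead of a throwaway one.
backup_dir="$HOME/postgres-backups"   # the question uses /home/vagrant/postgres-backups
mkdir -p "$backup_dir"

# Commented out: needs the running container.
# docker exec postgres pg_dump -F c -U postgres myapp_production \
#   > "$backup_dir/myapp_production_$(date +%F).dump"
```

Note the absence of -t on docker exec: allocating a TTY would mangle the binary custom-format dump passing through stdout.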