I have a bunch of Docker containers set up, and the Postgres container always shows these warnings, whether running on macOS or Linux:
`2021-11-25 15:18:55.777 UTC [32] WARNING: could not open statistics file "pg_stat_tmp/global.stat": Operation not permitted`
The database is bound to a local folder, not to a Docker volume. The issue most likely comes down to a permissions problem, but I'm not sure how best to resolve it.
On Linux, the pg_stat_tmp folder has 700 permissions and is owned by systemd-coredump; the files in that directory have 600 permissions and the same owner and group.
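(If I understand Docker's UID mapping right, "systemd-coredump" here may just be how the host renders the container's numeric postgres UID, which is 999 in the official image and happens to collide with that system user on Ubuntu. The container name `pg` and the host path below are placeholders for my setup.)

```shell
# Host side: numeric owner/group and mode of the stats dir
# (adjust the path to wherever the data folder is bound)
stat -c '%u:%g %a %n' ./pgdata/pg_stat_tmp

# Container side: UID of the postgres user (999 in the official image)
docker exec pg id postgres   # "pg" = container name, a placeholder
```

If both report 999, the ownership itself is fine and the warning is coming from something else.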
Please use code blocks as you did on Stack Overflow. I edited your post.
I tried on Linux (Ubuntu 18.04) and macOS Monterey (M1). I could reproduce the "Operation not permitted" message when I mounted the same data folder into multiple Postgres containers. At first it showed me a different message than the one you mentioned; then, after I deleted the second instance, I got:
`dockertest-postgres_butterfly-1 | 2021-11-27 14:44:16.024 EST [33] LOG: could not rename temporary statistics file "pg_stat_tmp/global.tmp" to "pg_stat_tmp/global.stat": No such file or directory`
`dockertest-postgres_butterfly-1 | 2021-11-27 14:48:16.073 EST [32] WARNING: could not open statistics file "pg_stat_tmp/global.stat": Operation not permitted`
It could be a coincidence, but so far I couldn't trigger this message any other way. Have you tried something similar?
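One thing that might be worth trying (just a sketch; the container name, host path, and image tag are assumptions): since the statistics files are throwaway runtime data, you could overlay pg_stat_tmp with a tmpfs so those writes never touch the bind-mounted folder at all:

```shell
# Keep the data directory bind-mounted, but put pg_stat_tmp on a tmpfs
# so the stats files never go through the host file-sharing layer.
docker run -d --name pg \
  -e POSTGRES_PASSWORD=example \
  -v "$PWD/pgdata":/var/lib/postgresql/data \
  --tmpfs /var/lib/postgresql/data/pg_stat_tmp \
  postgres:13
```

If I remember correctly, PostgreSQL up to version 14 also has a `stats_temp_directory` setting that can be pointed at something like `/tmp` for the same effect.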
Thanks for the reply. On my Mac system (Docker Desktop 4.2 & Catalina) I don't even have a global.tmp file. The startup log shows:
`2021-11-25 15:06:55.017 UTC [1] LOG: database system is ready to accept connections`
`2021-11-25 15:26:56.084 UTC [55] WARNING: could not open statistics file "pg_stat_tmp/global.stat": Operation not permitted`
`2021-11-25 15:59:57.546 UTC [88] WARNING: could not open statistics file "pg_stat_tmp/global.stat": Operation not permitted`
`2021-11-25 16:07:57.718 UTC [96] WARNING: could not open statistics file "pg_stat_tmp/global.stat": Operation not permitted`
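Something like this makes it easy to compare the stats directory inside the container and on the host (`pg` and the host path are placeholders for my actual names):

```shell
# Inside the container: the live stats directory
docker exec pg ls -la /var/lib/postgresql/data/pg_stat_tmp

# On the host: the same directory through the bind mount
ls -la ./pgdata/pg_stat_tmp   # adjust to the host path you bind-mount
```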
My DB index folders are bound only once in my docker-compose.yml, but they are also shared with the host OS (Catalina), and with Ubuntu Linux in production. It does not seem to be causing any operational issues yet, but I would like to figure out how best to deal with it. As I mentioned, I could use a volume for the DB instead of a bound folder, but I would have to reconfigure things, and the volume would have to be persistent. I also have scripts that perform normal DB backups to a file using the MySQL or Postgres tools.
This seems to work well, particularly for the OrthancStorageButterFly, because the files are easily accessible from the host system, and it is also easy to back them up to a network storage device or cloud storage.
That might give some performance improvement and might actually take care of what appears to be a permissions issue with Postgres, but it makes it more difficult to examine the contents of the folder/volume and more difficult to back up. If I wanted to use volumes, how can I back up the volumes to network storage or a cloud service? Thanks.
I am not a database specialist, but I wouldn't back up the filesystem in the case of a database. There are tools like pg_basebackup (see PostgreSQL: Documentation: 10: pg_basebackup) to safely back up the data; it does not matter whether you use Docker or not. You can then copy the result to a backup server if you want.
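For example, with the official image, something along these lines should work (container name, user, and paths are assumptions; treat it as a sketch):

```shell
# Take a physical base backup inside the container, then copy it out.
docker exec pg pg_basebackup -U postgres -D /tmp/basebackup -F tar -z -P
docker cp pg:/tmp/basebackup "./basebackup_$(date +%F)"
```

For simple per-database logical dumps, `pg_dump` can be run the same way. Either result is an ordinary file or directory that can be shipped to network or cloud storage like any other backup artifact.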