Data directory "/var/lib/postgresql/data/pgdata" has wrong ownership

Workaround: add the environment variable PGDATA: /tmp to your docker-compose file.

Here’s mine:

version: '2'

services:
  postgres:
    image: postgres:latest
    #tty: true
    ports:
      - "5432:5432"
    volumes:
      - f:/data/docker/postgresql/data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: docker
      POSTGRES_DB: db
      PGDATA: /tmp

@walkandride Here, we are mounting /var/lib/postgresql/data but using /tmp as the PGDATA directory. This doesn’t share the data directory with the host system.

volumes:
  - f:/data/docker/postgresql/data:/var/lib/postgresql/data
environment:
  PGDATA: /tmp

Is there some workaround to deal with this? I am using the postgres:9.4 tag; is this issue common on Windows across all versions?


Hi Vishal,
In your example, “/var/lib/postgresql/data” should be the directory on your host [Windows] system. I don’t think you have this directory in Windows.

In my example, I have a drive F: and map a host directory on this drive “f:/data/docker/postgresql/data” to the postgres data directory /var/lib/postgresql/data. Setting the PGDATA environment variable resolves the fsync errors that I received.

Hope this helps.

  • John

Thank you for the quick response. In the snippet I quoted from your post, PGDATA points to “/tmp”, which means Postgres will use /tmp as the data directory inside the container. I assume Postgres then doesn’t use the “/var/lib/postgresql/data” directory as its data directory.

In that case, I am not sure that mapping the “/var/lib/postgresql/data” container directory to some directory on the host system helps at all. Doing this certainly resolves all the errors, but the “f:/data/docker/postgresql/data” directory on the host doesn’t reflect the up-to-date Postgres data, because that data now resides in /tmp inside the container.

As I am still learning docker, am I missing some point here?

Thank you in advance.


When you’re using Docker for Windows to volume-mount a Windows drive into a Linux container, that volume is done using a CIFS/Samba network share from the Windows host. For lots of reasons, it’s highly unlikely that Linux Postgres will work correctly when trying to write data to a filesystem backed by NTFS shared with Samba.

Instead, I recommend using a persistent (but local to the Linux VM) named volume as detailed here: Trying to get Postgres to work on persistent windows mount - two issues


Hi Vishal,
I’m still learning Docker too. I verified that you are correct. After I did a “docker-compose down”, I no longer had my data.

The response by Michael above provided the correct solution. I updated my docker compose file to be:

version: '2'

# docker volume create --name data -d local

services:
  postgres:
    restart: always
    container_name: postgres_db
    image: postgres:latest
    #tty: true
    ports:
      - "5432:5432"
    volumes:
       - data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: docker
      POSTGRES_DB: db
      #PGDATA: /tmp

volumes:
  data:
    external: true

After creating the “data” volume, I ran docker-compose up. Everything initialized fine (no errors). I then created a database and loaded it with data. After a docker-compose down followed by a docker-compose up, the data was still there.

The one thing I am unsure of is where on disk this data is stored; docker volume inspect does not provide any meaningful information.
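For what it’s worth, this is roughly what docker volume inspect prints for a named volume (fields abbreviated). On Docker for Windows, the Mountpoint is a path inside the Linux VM that Docker runs, not a path on the Windows host, which is why you can’t find it from Windows:

```shell
docker volume inspect data
# Output looks roughly like:
# [
#     {
#         "Driver": "local",
#         "Mountpoint": "/var/lib/docker/volumes/data/_data",
#         "Name": "data",
#         ...
#     }
# ]
```

That Mountpoint only exists inside the VM’s filesystem, so the data is persistent across docker-compose down/up but is not directly browsable from the Windows side.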

I had the exact same issue with postgres:9.4; my manifest file looks quite similar:

version: '2'
services:
  postgres:
    image: "postgres:9.4"
    ports:
     - "65432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
    driver: local

When I run docker-compose up, I get the following errors:

postgres_1  | FATAL:  "/var/lib/postgresql/data" is not a valid data directory
postgres_1  | DETAIL:  File "/var/lib/postgresql/data/PG_VERSION" does not contain valid data.
postgres_1  | HINT:  You might need to initdb.

I can confirm that this issue also affects versions 9.5 and 9.6. The only version where it is finally fixed is version 10.

So if you cannot use version 10, here is my fix:

version: '2'
services:
  postgres:
    build:
      context: ./
      dockerfile: Dockerfile.postgres
    ports:
     - "65432:5432"
    environment:
      - PGDATA=/var/lib/postgresql/data
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
    driver: local

and the content of Dockerfile.postgres is:

FROM postgres:9.4

RUN mkdir -p "$PGDATA" && chmod 700 "$PGDATA"

The gist is to explicitly specify PGDATA and have docker-compose use my custom Dockerfile.postgres to build the image. The RUN instruction is self-explanatory: it creates the data directory and changes its mode to 700. And voilà, it works.


One solution to this problem was to create a user:

environment:
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: mypass
  POSTGRES_DB: db
  PGDATA: /tmp
Even so, I received errors when running both creation and migration. When I checked docker logs <db_id>, I saw that the authentication problem continued. So, by passing the “-A” option to both the db:create and db:migrate commands, I was successful.

docker-compose run app bundle exec db:create -A

docker-compose run app bundle exec db:migrate -A

With this, I did not have to run chmod in version 3 of the compose file.

db:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: docker
      POSTGRES_DB: db
      PGDATA: /tmp
    volumes:
    - ./postgres:/var/lib/postgresql/data
    ports:
    - 5432

This is my Dockerfile:

FROM microsoft/windowsservercore:10.0.14393.1944 AS download

SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

ENV PG_VERSION 10.1-3

RUN Invoke-WebRequest $('https://get.enterprisedb.com/postgresql/postgresql-{0}-windows-x64-binaries.zip' -f $env:PG_VERSION) -OutFile 'postgres.zip' -UseBasicParsing ; \
    Expand-Archive postgres.zip -DestinationPath C:\ ; \
    Remove-Item postgres.zip

RUN Invoke-WebRequest 'http://download.microsoft.com/download/0/5/6/056DCDA9-D667-4E27-8001-8A0C6971D6B1/vcredist_x64.exe' -OutFile vcredist_x64.exe ; \
    Start-Process vcredist_x64.exe -ArgumentList '/install', '/passive', '/norestart' -NoNewWindow -Wait ; \
    Remove-Item vcredist_x64.exe

FROM microsoft/nanoserver:10.0.14393.1944

COPY --from=download /pgsql /pgsql
COPY --from=download /windows/system32/msvcp120.dll /pgsql/bin/msvcp120.dll
COPY --from=download /windows/system32/msvcr120.dll /pgsql/bin/msvcr120.dll

RUN setx /M PATH "C:\pgsql\bin;%PATH%"

EXPOSE 5432
CMD ["postgres"]
I run it with:

docker run -d -p 5435:5432 --name postgres postgres

It creates the container but does not start it. If anybody knows why, please reply.

docker logs postgres

is not showing any logs either.
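Not from the thread, but a couple of standard Docker CLI commands that usually surface why a container exits immediately (using the container name from the run command above):

```shell
# List the container even if it has already exited
docker ps -a --filter name=postgres

# Show the recorded exit code and any error message Docker captured
docker inspect postgres --format '{{.State.ExitCode}} {{.State.Error}}'
```

A non-zero exit code with empty logs often means the entrypoint binary itself failed to launch (for example a missing DLL on Windows images), which docker inspect will sometimes report even when docker logs is empty.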

Dude Thank you, saved my life!!

Unfortunately this is a problem with Docker for Windows; I have the same problem with MySQL and Percona too.

Here is an issue on GitHub for MySQL that confirms the same problem.

I also personally have the same problem with a big PHP project: files are randomly not accessible.

It seems to me that, to this date, this problem has not yet been solved!

The solution I found to avoid losing database data was to create a docker volume and add it to the postgres service settings.

My docker-compose.yml:

version: '3'
services:
  postgres:
    container_name: postgres
    restart: always
    build:
      context: ./postgres
      dockerfile: Dockerfile
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASS}
    networks:
      - code-network
networks:
  code-network:
    driver: bridge
volumes:
  pgdata:
    external: true

Before mounting the containers, you must create the volume manually:

docker volume create --name=pgdata

This worked. How do you pass init files to the postgres server in this case?
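Per the official postgres image documentation, *.sql and *.sh files placed in /docker-entrypoint-initdb.d are executed when the data directory is first initialized. A sketch, assuming your init scripts live in a local ./initdb folder:

```yaml
services:
  postgres:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
      # Scripts here run only on first initialization of the data directory
      - ./initdb:/docker-entrypoint-initdb.d
volumes:
  pgdata:
    external: true
```

Note that the scripts only run against an empty data directory, so if the pgdata volume already contains a database, they will be skipped until you recreate the volume.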

Thank you very much. you saved the day…

Works like a charm! Saves my day.

Thank you so so much!

awesome!!!
just works!!!

I have this problem on Linux with Laradock, after moving the original host data path to another disk.
To solve it, add the environment variable PGDATA: /tmp, then build, up, and bash into the container. Inside the container, go to /var/lib/postgresql/ and create a data2 folder. Copy everything from data to data2 and run chown -R postgres:postgres data2/.
Exit and stop the container. Then remove PGDATA: /tmp and change docker-compose.yml like this:

    postgres:
      volumes:
        - ${DATA_PATH_HOST}/postgres:/var/lib/postgresql/data2

After that, run build and up again.
Everything then works fine without losing data.
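The in-container steps described above, as a rough shell sketch (the container name laradock_postgres_1 is an assumption; substitute whatever docker ps shows for your setup):

```shell
# With PGDATA=/tmp set, open a shell in the running container
docker exec -it laradock_postgres_1 bash

# Inside the container: duplicate the data directory and fix ownership
cd /var/lib/postgresql
mkdir data2
cp -a data/. data2/          # -a preserves permissions and hidden files
chown -R postgres:postgres data2/
exit
```

After exiting, stop the container, remove the PGDATA override, and point the volume mount at /var/lib/postgresql/data2 as shown in the compose snippet.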

Specifying the --user flag seems to map the folder with the correct uid and gid.
Check this out, it works here:

  database:
    image: 'postgres:13.2'
    user: 999:999
    volumes:
      - ./volumes/database-data:/var/lib/postgresql/data/pgdata
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata

    # Permissions bug on macos: the volume is mounted as root the first instant, then it's mapped to 999. If we
    # start with postgres directly it complains, if we start it after bash it's fine. It needs to wait at least 2
    # seconds for some f* reason
    entrypoint: bash -lc "sleep 2; /usr/lib/postgresql/13/bin/postgres"

As an addition to @activecod3’s post, I think it’s useful to be aware of what the Docker Hub description of the image says regarding --user (when using docker run) or user: (when using docker compose).