Hang on postgres data import with mounted volumes

This problem is manifesting with Postgres, but it seems to be related to Docker for Mac and/or writing to mounted host volumes:

Expected behavior

The following commands should result in an imported postgres DB.

$ docker-compose up db
$ docker-compose exec db bash
root@dbhost:/# psql -U postgres my_dev_db < /home/my_db.dmp
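
A minimal non-interactive variant (assuming your docker-compose version supports the -T flag on exec, which disables the pseudo-TTY so stdin redirection from the host works) would be to pipe the dump straight in:

$ docker-compose exec -T db psql -U postgres my_dev_db < ./db-export/my_db.dmp

Either form should end with a fully imported database.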

My environment includes a host volume mounted as the Postgres PGDATA directory.
I’m mounting the host directory ~/my-db-pgdata as PGDATA (i.e. /var/lib/postgresql/data).
(In addition, the exported DB file lives at ./db-export/my_db.dmp on the host, although that mount doesn’t seem to cause any problem.)
docker-compose.yml:

db:
  image: mdillon/postgis:9.5
  hostname: dbhost
  volumes:
    - ./db-export:/home
    - ~/my-db-pgdata:/var/lib/postgresql/data
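
As a sanity check once the service is up (a rough sketch, nothing Postgres-specific), you can confirm the host directory really is mounted where PGDATA expects it:

$ docker-compose exec db ls /var/lib/postgresql/data
$ docker-compose exec db mount | grep postgresql

The contents of ~/my-db-pgdata on the host and of /var/lib/postgresql/data in the container should match.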

Actual behavior

Midway through the import it hangs. The last output I see from psql is

COPY 655740
 setval
--------
 655740
(1 row)

So far I’ve seen it hang for at least 45 minutes; I’m not sure yet whether it will ever complete, and I’ve killed a couple of previous attempts.
It hangs after importing about 1.2 GB, although I don’t know whether that’s significant. I have plenty of space available on the host disk.
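
One thing worth checking while it hangs (a hedged suggestion, since the host’s free disk space isn’t the whole story) is how much memory the Moby VM itself has left, which any running container can report because /proc/meminfo inside a container reflects the whole VM:

$ docker-compose exec db grep -E 'MemTotal|MemAvailable' /proc/meminfo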

Does xhyve have a limit on how much data it can write to a host volume, and am I hitting it?

I’ve verified that the DB import completes successfully when using Docker Toolbox on the same machine with a VirtualBox VM. An important distinction, if it matters, is that in that case I am not mounting a host volume as the PGDATA directory; instead it uses /var/lib/postgresql/data inside the container.

In fact that is the main appeal of Docker for Mac for this particular case: I want my postgres data to persist in a host volume so that I can remove all Docker containers without losing the data. With the VirtualBox VM the postgres container does not have permission to write to the host volume, but the Docker for Mac xhyve VM can.

Information

$ pinata diagnose -u
OS X: version 10.11.5 (build: 15F34)
Docker.app: version v1.11.1-beta13.1
Running diagnostic tests:
[OK]      Moby booted
[OK]      driver.amd64-linux
[OK]      vmnetd
[OK]      osxfs
[OK]      db
[OK]      slirp
[OK]      menubar
[OK]      environment
[OK]      Docker
[OK]      VT-x
Docker logs are being collected into /tmp/20160601-173053.tar.gz
Most specific failure is: No error was detected
Your unique id is: 6DC4D23A-C120-4BAA-A11F-00EE029D7766
Please quote this in all correspondence.
  • Please see above for how to reproduce. I cannot provide the database .dmp file.

  • host distribution and version: OS X 10.11.5

$ docker info
Containers: 1
 Running: 1
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 1.11.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 26
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge null host
Kernel Version: 4.4.11-moby
Operating System: Alpine Linux v3.3
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.955 GiB
Name: moby
ID: ...
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): true
 File Descriptors: 26
 Goroutines: 60
 System Time: 2016-06-01T21:35:05.32240794Z
 EventsListeners: 3
No Proxy: *.local, 169.254/16
Username: skarger
Registry: https://index.docker.io/v1/

SOLVED: The problem was that Docker did not have enough memory available. On my Docker for Mac installation it defaults to 2 GB, out of the 4 GB total available on my computer.

Bumping Docker’s memory setting up to 4 GB was enough to complete my DB import.
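
To confirm the new allocation took effect, the same docker info output shown above reports the VM’s total memory (it read 1.955 GiB before the bump):

$ docker info | grep 'Total Memory'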

It would be nice if there were a way to know when Docker is hitting its memory limit. Currently, whether you get any helpful information depends on what application is running inside the container: if it logs when it runs out of memory, you’re lucky. In my case it just hung.
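
The closest workaround I can suggest (a hedged suggestion rather than a proper fix) is to watch the container’s memory usage against its limit while the import runs:

$ docker stats $(docker-compose ps -q db)

If the MEM USAGE / LIMIT column sits pinned at the limit while the import stalls, that’s a decent hint you’re memory-bound.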

This sounds almost exactly like the issue I ran into with MariaDB/Percona: Docker Mac hard-locked during mariadb/percona db import

I’m already allocating 6 GB to Docker, though, so I doubt it’s RAM-related, especially since I have a MariaDB VM with only 4 GB allocated that can handle the same import.