Docker for Mac hard-locked during MariaDB/Percona DB import

Expected behavior

The MariaDB/Percona instance should be able to import an arbitrarily large DB dump file successfully.

Actual behavior

The MariaDB/Percona import hangs, taking the container and all communication with Docker down with it.

Information

Starting a MariaDB or Percona container with:

docker run --name mariadb \
  -p 3306:3306 \
  -v /Users/alex/Documents/Docker/mariadb/mysql.cnf:/etc/mysql/conf.d \
  -v /Users/alex/Documents/Docker/mariadb/data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=PW \
  -d mariadb:latest

Then trying to import a compressed SQL dump into it via:

pv content.sql.gz | gunzip | mysql -h 127.0.0.1 -u USER -p

The content.sql.gz file is about 500 MB (compressed) and represents a large, but not enormous, DB. At some point during the import everything hangs (I’ve reproduced this several times; it hangs at a different percentage on each run):
101MiB 0:11:17 [ 0 B/s] [==============> ] 20% ETA 0:45:04

docker ps, docker logs, etc. all fail to return. Stopping Docker and restarting it restores the Docker environment, but the container remains fubar.
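
After the restart, the wedged container can at least be removed and recreated (container name as in the run command above):

docker rm -f mariadb   # then re-run the docker run command above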

% pinata diagnose -u

OS X: version 10.11.4 (build: 15E65)
Docker.app: version v1.11.1-beta12
Running diagnostic tests:
[OK] Moby booted
[OK] driver.amd64-linux
[OK] vmnetd
[OK] osxfs
[OK] db
[OK] slirp
[OK] menubar
[OK] environment
[OK] Docker
[OK] VT-x
Error echo "00000003.0000f3a6" | nc -U /Users/alex/Library/Containers/com.docker.docker/Data/@connect > /tmp/20160523-161224/diagnostics.tar: timeout after 30.00s
Docker logs are being collected into /tmp/20160523-161224.tar.gz
Most specific failure is: No error was detected
Your unique id is: 5C53FB17-8701-469B-9FBE-A1B54CC56DC5
Please quote this in all correspondence.

Steps to reproduce the behavior

See description.


I’m also experiencing this. Any ideas on how to import the SQL?

I haven’t been able to find a workaround. Ultimately I had to give up on storing the DB data on the host filesystem and instead let the container manage its own storage. Not optimal from my perspective, but at least it works.
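
Concretely, that means using a Docker-managed named volume for /var/lib/mysql instead of the host path (the volume name here is just an example); a named volume lives inside the Docker VM rather than going through osxfs:

docker volume create mariadb-data
docker run --name mariadb \
  -p 3306:3306 \
  -v /Users/alex/Documents/Docker/mariadb/mysql.cnf:/etc/mysql/conf.d \
  -v mariadb-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=PW \
  -d mariadb:latest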

I’m also experiencing this. It only happens when trying to store the data in a volume, and it appears to happen while importing a large table, though that may just be coincidence.

I’d love to hear if someone has a solution.

An example dataset that reproduces this issue would be extremely useful in helping us to fix it. Is there any way you could share your data set or easily create one (or share a script to create one) that causes this problem? We’d really like to resolve this issue but don’t have the human bandwidth to attempt a reproduction. Anything you could do to help us reproduce the problem would be extremely appreciated.

Thanks,

David

Thank you for the reply, David. I can provide a dataset.
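
For example, a throwaway generator along these lines (the database name, table name, and row counts are just placeholders) produces a dump in roughly the right size range without having to ship real data:

#!/bin/sh
# Throwaway repro generator: one InnoDB table, ~4 million short rows, written as
# multi-row INSERTs and gzipped. The payload is repetitive, so the .gz stays small,
# but the uncompressed SQL (and the resulting InnoDB files) end up at a few hundred MB.
{
  echo "CREATE DATABASE IF NOT EXISTS repro;"
  echo "USE repro;"
  echo "CREATE TABLE filler (id INT AUTO_INCREMENT PRIMARY KEY, payload VARCHAR(255)) ENGINE=InnoDB;"
  i=0
  while [ "$i" -lt 40000 ]; do
    printf 'INSERT INTO filler (payload) VALUES '
    j=0
    while [ "$j" -lt 100 ]; do
      printf "('row %d-%d xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')" "$i" "$j"
      j=$((j+1))
      [ "$j" -lt 100 ] && printf ','
    done
    printf ';\n'
    i=$((i+1))
  done
} | gzip > repro.sql.gz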

I also noticed that if I do the import without using volumes, it completes in the expected amount of time. If I then attempt to cp the files out of the container to the host, the larger InnoDB files (>~2 GB) copy very slowly, at around 2–3 MB/s, and the com.docker.hyperkit process uses almost all available CPU. It seems like this could be related.

I haven’t yet investigated to see if slow copying of large files is a known issue with Docker for Mac.

It appears this issue is related to file transfer speed, which has already been discussed here: File access in mounted volumes extremely slow, CPU bound

As discussed in File access in mounted volumes extremely slow, CPU bound, I’d still really appreciate a reproduction. One which generates a representative database from a script, so we don’t have to transfer gigs over the net, would be lovely. In particular, with large sequential reads of large blocks you should see performance around 250 MB/s, which should transfer your 2 GB file in around 8 s. If you see performance significantly worse than that for this use case, either the database software is doing something suboptimal or there is a problem with osxfs that may be addressable in the nearer term than “just make everything go faster”.
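
A rough way to check the raw sequential numbers, independent of MySQL, is to time a 2 GB write and then a 2 GB read through the same mount (the host path below is just the one from the run command earlier in the thread):

time docker run --rm -v /Users/alex/Documents/Docker/mariadb/data:/data alpine \
  dd if=/dev/zero of=/data/ddtest bs=1M count=2048

time docker run --rm -v /Users/alex/Documents/Docker/mariadb/data:/data alpine \
  dd if=/data/ddtest of=/dev/null bs=1M

rm /Users/alex/Documents/Docker/mariadb/data/ddtest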

Thanks,

David

Solved!
I copied the "dump.sql" file into the container and then imported it.

docker cp dump.sql CONTAINER_ID:/root/dump.sql
docker exec -it CONTAINER_ID bash

then inside the container:

cd /root
mysql -u USER -p DATABASE

and then inside the mysql shell:

SOURCE dump.sql;

Nice! Unfortunately that’s more of a workaround than a fix. I don’t want to have to copy my DB dump into the container each time I want to do an import.
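
A variant that would skip the copy step is to pipe the dump straight into mysql via docker exec (untested here; the placeholders match the commands above, and it presumably still needs /var/lib/mysql to not be an osxfs bind mount to avoid the hang):

pv content.sql.gz | gunzip | docker exec -i CONTAINER_ID mysql -u USER -pPASSWORD DATABASE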

I had a similar issue and tried this workaround, but Docker crashed while copying the big dump file.
I then left the file in a shared volume, launched the mysql import from inside the container, and it worked:

Inside the container

cd /shared_volume_where_the_dump_is
mysql -u USER -p DATABASE
SOURCE dump.sql