Docker Community Forums

Share and learn in the Docker community.

Problem allocating disk space within a container through Docker Desktop for Windows

Hi everyone,

I have a problem with a Docker image that I instantiate with the “Docker Desktop” tool on Windows.

I have two images:

  • one for a database server with MariaDB;
  • the other for a PHP 7 application server.

The problem seems to involve the host's shared disk, and more specifically the space that my programs try to preallocate.

Concretely, when I create files and write data into them, I have no issues. Everything works:

cat /dev/random > /myshare/test.bin

However, when I try to start the MariaDB server, I get the following errors:
2019-03-11 6:34:23 0 [ERROR] InnoDB: preallocating 12582912 bytes for file ./ibdata1 failed with error 95
2019-03-11 6:34:23 0 [ERROR] InnoDB: Could not set the file size of './ibdata1'. Probably out of disk space
2019-03-11 6:34:23 0 [ERROR] InnoDB: Database creation was aborted with error Generic error. You may need to delete the ibdata1 file before trying to start up again.
2019-03-11 6:34:24 0 [ERROR] Plugin 'InnoDB' init function returned error.
2019-03-11 6:34:24 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2019-03-11 6:34:24 0 [ERROR] Unknown/unsupported storage engine: InnoDB
2019-03-11 6:34:24 0 [ERROR] Aborting

The share with the host is of the CIFS type, and when I run “df -h”, I can see that I have free space.

Regarding the Apache/PHP application server, I also had a problem creating the UNIX socket in a “run” folder on my CIFS share. I fixed this by moving the socket to the container's internal “/var/run” folder instead.
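For reference, a minimal sketch of that kind of fix, assuming php-fpm (the pool file path and socket name here are illustrative, not from the original post). Keeping the socket on the container's own filesystem avoids CIFS, which cannot host UNIX sockets:

```ini
; e.g. /usr/local/etc/php-fpm.d/www.conf (illustrative path)
; Listen on a socket inside the container, not on the CIFS share:
listen = /var/run/php-fpm.sock
listen.owner = www-data
listen.group = www-data
```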

I don’t understand, can you help me? Have you ever experienced this problem?

It seems to me that I don't have a disk-space problem, and the Unix permissions on the folders are set correctly (I even tested with 777 and still hit the problem).

Finally, I would like to point out that I don't have this problem when I instantiate my image on a GNU/Linux server.

Thanks a lot,

Is there an issue with the disk where Hyper-V stores the data (Docker Desktop runs in Hyper-V)? You can find this location if you open the Hyper-V Manager, right-click on your host, then ‘Hyper-V Settings’, ‘Virtual Hard Disks’. In my case, everything goes to a folder on drive E:.

…of course you find all this in the settings of Docker Desktop too (under ‘Advanced’).
Next thing to check: Is the max. disk image size too small? Default seems to be 59.6 GB.

Thanks for your reply.

I forgot to specify:
I work on macOS, and to test the deployment of my image on a Windows workstation, I virtualized Windows 10 with VMware.

Otherwise, on Windows 10 the Docker Desktop client does use Hyper-V to create my Docker machine. Looking at the hard disk settings, it is a VHDX file, i.e. a dynamically sized disk. At the moment it is at 388 MB. I see no indicator of the maximum size… :frowning:

Thank you for your help!

I mentioned that Docker Desktop runs in Hyper-V, so you are trying to run one virtualization technology inside another. It may be possible that this works (unlike running Hyper-V on AWS), but quite surely not out of the box. Is there anything special about your containers that you need to test them like that?

Yes, I know, but I have no other choice right now. My Docker images simply run a PHP web server and a database. Ultimately the goal is to deploy these images on Windows workstations for development and on a server for production. There is nothing special about them, except that I expose ports and externalize data folders. My Windows virtual machine under VMware is nothing special either: 2 vCPUs, 4 GB of RAM and an 80 GB disk.
Finally, the host machine runs macOS.

One thing to consider: if you mount folders from the Windows host into a container, this is done via a CIFS mount, and you lose a big part of the features of a Linux file system. I do a lot of development this way, but for databases I always use named volumes. Program code mounted from Windows will have the wrong file permissions (0777 or 0755), and if your application watches for changed files, it will never receive a change event. Apart from that, there should be no difference from macOS or Linux.
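To illustrate the named-volume approach (the volume name, image tag and password here are just examples, not a prescription): the volume's data lives on the Linux VM's own filesystem, so InnoDB gets the preallocation and locking semantics that CIFS cannot provide.

```shell
# Create a named volume; its data is stored inside the Linux VM (ext4),
# not on the Windows host's CIFS share:
docker volume create dbdata

# Use it as MariaDB's data directory instead of a bind mount from Windows:
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=example \
  -v dbdata:/var/lib/mysql \
  mariadb:10.3
```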


Good evening,
Thank you for your answers; with your comments I have made some progress. I switched the mount points from classic bind mounts to named volumes. There was indeed a permissions problem (umask) on files created through CIFS. However, named volumes do not let me externalize the data to the Windows host machine: the data ends up inside the Hyper-V VM and its disk. That's a pity, because it would have been useful on a development workstation… I'm continuing my investigations.

Thank you.