Docker container root disk default size (AWS ECS)


I have just updated Docker from 1.9.1 to 1.11.2 and noticed that the default size of the OS disk changed. On 1.9.1 I get 100 GB (even if the actual disk is smaller); on 1.11.2 it is set to 10 GB.

Did the default behaviour change? I am running this on AWS ECS, but it does not look like it’s a change they made.

Is this about the standard AWS ECS AMI? AMI version 2015.09.d changed the disk layout, and there’s a comment to this effect at the very bottom of that page. In current versions of the AMI you get an 8 GB root disk, plus a 22 GB EBS disk that is set up for raw-partition devicemapper LVM storage, which is generally considered a better storage setup than the loopback-file devicemapper storage. When you create the instance you can supply your own larger EBS volume and mount it on /dev/xvdcz, or that page also documents how to attach more disk to an existing instance.
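A quick way to confirm which storage backend is in use is `docker info`. A minimal check, assuming the raw-partition LVM setup described above (the sample output below is illustrative, not taken from a real instance):

```shell
# On a real ECS instance you would run:
#   docker info | grep -E 'Storage Driver|Pool Name|Data file|Data Space'
# With raw-partition devicemapper storage there is no loopback "Data file"
# line, and the pool name matches the LVM thin pool. Illustrative output:
sample='Storage Driver: devicemapper
 Pool Name: docker-docker--pool
 Data Space Total: 23.35 GB
 Data Space Available: 23.19 GB'

echo "$sample" | grep 'Pool Name'
```

Loopback-file storage would instead show a `Data file: /dev/loop0` line in that output.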

@dmaze thanks for the reply.

I am using the attached disk on /dev/xvdcz. In both setups I have a 100 GB disk attached. On Docker 1.9.1 the container sees all 100 GB; on 1.11.2, only 10 GB. In both cases, docker info reports that 100 GB of disk space is available.

So, it sounds like, yes, the newer AMI has a smaller root disk, but because the AMI changed to use the attached disk for storage, the smaller root disk doesn’t have much effect on Docker and it can see all of the available space.

But it seems that in my case, once my Docker container has used up 10 GB, it cannot use any more of the disk space that is available. Is there a way to increase the size of the default disk storage?

Have you read that page? The default on current AMIs is 22 GB for the attached EBS disk, and that link explains how to add more.
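For reference, the size of that disk is fixed at launch time. A sketch of supplying a larger /dev/xvdcz via a block-device mapping when starting the container instance (the AMI ID, instance type, and sizes here are placeholders, not real values):

```shell
# Hypothetical launch command -- substitute your own AMI ID, instance type,
# and volume size. The mapping overrides the AMI's default 22 GB volume
# on /dev/xvdcz, which backs Docker's devicemapper thin pool.
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type m4.large \
    --block-device-mappings '[
        {"DeviceName": "/dev/xvdcz",
         "Ebs": {"VolumeSize": 100, "VolumeType": "gp2", "DeleteOnTermination": true}}
    ]'
```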

Thank you for your reply. I did read your link before, and I have already attached a larger volume, 100 GB, on /dev/xvdcz. My problem is that when I run a container, inside the container it still only sees the default 10 GB, and every time the container hits 10 GB it cannot get more storage.
I have been looking at Increase container volume (disk) size to change the default setting, but it still does not work.

Am I missing something in between?

@wangyx2005, as far as I can see, you are not missing anything. Changing to the AWS ECS Optimized Image with Docker 1.11.1 results in the default container root volume size dropping from 100 GB (on 1.9.1) to 10 GB. I also do not see a way to increase the root volume size for containers, since Amazon disables the --storage-opt size=XX option for Docker.

Your best hope is to keep only the guest OS on the root mount (fingers crossed you won't need to install more than 10 GB) and keep all of your computation and data in /mnt. Extend the host VM root volume to something like 200 GB, and then map an "empty" volume inside the container on /mnt using "host": {}; all of the remaining capacity on your host will be available inside the container.

Here is a snippet from my task definition showing both a read-only volume and the empty volume:
$ aws ecs describe-task-definition --task-definition mytask
...
"volumes": [
    {
        "host": {
            "sourcePath": "/mnt/read_only"
        },
        "name": "read_only_data"
    },
    {
        "host": {},
        "name": "scratch"
    }
],
...
"mountPoints": [
    {
        "sourceVolume": "read_only_data",
        "readOnly": true,
        "containerPath": "/mnt/ro_data"
    },
    {
        "sourceVolume": "scratch",
        "containerPath": "/mnt"
    }
],
...

Does anyone know the exact docker run command that AWS uses in this case with "host": {}?
I am having some trouble with directory permissions that only happens in the containers that AWS launches for me. When I run the container myself with docker run -it imageName or docker run -it -v /mnt imageName I don’t have any issues.

Hi @wangyx2005

Did you ever solve the issue of increasing the disk size inside the container? I am running into the exact same issue as yours. I need at least 50-60 GB of space inside the container, which Docker is not able to provide even though the host machine shows it has plenty more. I can’t use EFS, as it is too slow for frequent reads/writes.

[ec2-user@ip-10-10-8-217 ~]$ sudo lsblk
xvda                            202:0     0    8G 0 disk
└─xvda1                         202:1     0    8G 0 part /
xvdcz                           202:26368 0   70G 0 disk
└─xvdcz1                        202:26369 0   70G 0 part
  ├─docker-docker--pool_tmeta   253:0     0   72M 0 lvm
  │ └─docker-docker--pool       253:2     0 69.2G 0 lvm
  │   ├─docker-202:1-263237-0ceca1135062e9e3bdc166fda4e5825b39fcecc835d7e04f1c25f3fdf223b547
  │   │                         253:3     0   10G 0 dm
  │   └─docker-202:1-263237-187930bc9c0e776f4ffbf9c0582b760f985af8971f6f2677521fe49391d8df80
  │                             253:4     0   10G 0 dm
  └─docker-docker--pool_tdata   253:1     0 69.2G 0 lvm
    └─docker-docker--pool       253:2     0 69.2G 0 lvm
      ├─docker-202:1-263237-0ceca1135062e9e3bdc166fda4e5825b39fcecc835d7e04f1c25f3fdf223b547
      │                         253:3     0   10G 0 dm
      └─docker-202:1-263237-187930bc9c0e776f4ffbf9c0582b760f985af8971f6f2677521fe49391d8df80
                                253:4     0   10G 0 dm

I do see this on the host machine, and when I try to run the following:
[ec2-user@ip-10-10-8-217 ~]$ sudo mount /dev/xvdcz ./imdata2
mount: /dev/xvdcz is already mounted or /home/ec2-user/imdata2 busy

How can I use the EBS disk inside a container?

Any help is appreciated, Thx in advance.
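For what it's worth, the mount failure above is expected: /dev/xvdcz1 is an LVM physical volume backing the Docker thin pool, not a filesystem, so it cannot be mounted directly. One workaround is to attach a separate EBS volume, mount it on the host, and bind-mount it into the container. A sketch, where the device name /dev/xvdf, the mount point /data, and the image name are assumptions to adapt to your setup:

```shell
# Assumes a second EBS volume attached at /dev/xvdf (adjust for your instance).
sudo mkfs -t ext4 /dev/xvdf     # one-time: create a filesystem on it
sudo mkdir -p /data
sudo mount /dev/xvdf /data      # make it available on the host

# Bind-mount the host directory into the container; writes to /data inside
# the container land on the EBS volume and are not limited by the 10 GB
# dm.basesize cap on the container's root filesystem.
docker run -it -v /data:/data myimage
```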

If somebody else needs to increase the default base device size, it worked for me to edit the file /etc/sysconfig/docker-storage, adding --storage-opt dm.basesize=100G next to the other parameters.
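As a concrete sketch of that change (the DOCKER_STORAGE_OPTIONS line below is illustrative of the ECS-optimized AMI; check the contents of your own file before editing):

```shell
# Working on a local copy here for illustration; on the instance, edit
# /etc/sysconfig/docker-storage itself as root.
cat > docker-storage <<'EOF'
DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper --storage-opt dm.thinpooldev=/dev/mapper/docker-docker--pool"
EOF

# Append dm.basesize inside the quoted option string:
sed -i 's|"$| --storage-opt dm.basesize=100G"|' docker-storage
cat docker-storage
```

On the real host, follow up with `sudo service docker restart`. Note that the new base size only applies to images pulled and containers created after the restart; existing ones keep the old 10 GB limit.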