I am running Kubernetes in Docker Desktop for Mac (v2.3.0.3 (45519), with Kubernetes v1.16.5).
I have declared a PersistentVolume with a capacity of 15Gi, but when the volume is mounted in a Pod it shows up as only 3.0 GB. Docker Desktop reports that I have used only 25 GB of the 60 GB of disk allocated to it, so it is not running out of space.
Where does this 3.0 GB volume size come from, and how can I make it match the 15Gi I requested in the PV?
Thanks!
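For reference, this is how I'd double-check what the cluster itself reports for the volume and the claim (plain kubectl; resource names match the manifests below):

# Capacity and binding as the cluster sees them
kubectl get pv backend-pv-volume
kubectl get pvc backend-pv-claim -n coding-dev

# Events and the bound volume, in case the claim bound to something else
kubectl describe pvc backend-pv-claim -n coding-dev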
PV & PVC:
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: backend-pv-volume
  namespace: coding-dev
  labels:
    type: local
    app: backend
spec:
  storageClassName: manual
  capacity:
    storage: 15Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/coding-data-dev"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: backend-pv-claim
  namespace: coding-dev
  labels:
    app: backend
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 15Gi
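Since hostPath on Docker Desktop resolves inside the embedded Linux VM rather than on the Mac itself, one way to see which filesystem actually backs /mnt/coding-data-dev is to open a shell in that VM; the nsenter image below is just one commonly used way to do that, not something specific to my setup:

# Shell into the Docker Desktop VM (any privileged --pid=host container works)
docker run --rm -it --privileged --pid=host justincormack/nsenter1

# Then, inside the VM, check which filesystem /mnt lives on
df -h /mnt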
Pod (for testing):
---
apiVersion: v1
kind: Pod
metadata:
  name: load-data-pod
  namespace: coding-dev
spec:
  containers:
    - name: load-data-pod
      image: pgardella/awscli
      command: ["/bin/sh", "-c", "--"]
      args: ["while true; do sleep 30; done;"]
      volumeMounts:
        - mountPath: /app/data
          name: data-dir
  restartPolicy: Never
  volumes:
    - name: data-dir
      persistentVolumeClaim:
        claimName: backend-pv-claim
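I apply the Pod and open a shell in it in the usual way (the file name is just whatever the manifest above is saved as):

# Create the test Pod and get a shell to look at the mounted volume
kubectl apply -f load-data-pod.yaml
kubectl exec -it load-data-pod -n coding-dev -- /bin/sh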
Inside the Pod:
root@load-data-pod:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          59G   25G   31G  45% /
tmpfs            64M     0   64M   0% /dev
tmpfs           3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/vda1        59G   25G   31G  45% /etc/hosts
overlay         3.0G  3.0G     0 100% /app/data
shm              64M     0   64M   0% /dev/shm
tmpfs           3.0G   12K  3.0G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs           3.0G     0  3.0G   0% /proc/acpi
tmpfs           3.0G     0  3.0G   0% /sys/firmware
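If it helps, I can also post the mount entry for the volume from the same shell, e.g.:

# Show what /app/data is actually mounted from
mount | grep /app/data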