3 min read | by Jordi Prats
On an AWS EKS cluster, at the time of this writing, you cannot resize volumes provisioned with the default gp2 StorageClass. This is because the default StorageClass has allowVolumeExpansion set to false, which prevents volume expansion:
$ kubectl get sc
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  78d
Before fixing this, we can check that the allowVolumeExpansion setting is not even present in the default definition:
$ kubectl get sc gp2 -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
(...)
name: gp2
parameters:
fsType: ext4
type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
To modify this StorageClass we can use kubectl patch as follows:
$ kubectl patch sc gp2 -p '{"allowVolumeExpansion": true}'
storageclass.storage.k8s.io/gp2 patched
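If you prefer a declarative approach, the same change can be expressed as a manifest and applied with kubectl apply. This is just a sketch based on the StorageClass definition shown above, with the metadata trimmed to the relevant fields:

```shell
# Declarative alternative (sketch): apply a manifest that sets allowVolumeExpansion.
# Most StorageClass fields are immutable, but allowVolumeExpansion can be updated,
# so re-applying the object with the same values elsewhere works for this field.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
parameters:
  fsType: ext4
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
```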
This setting tells the Kubernetes CSI driver that the underlying volume (an EBS volume) can be resized. Using kubectl get sc we can check that ALLOWVOLUMEEXPANSION changed to true:
$ kubectl get sc
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   true                   78d
We can see the current size of the volume using kubectl get pv, or by checking the mounted filesystem size on the pod using df:
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   REASON   AGE
pvc-a1448f38-5f28-492e-a09c-8a900b9fb43e   35Gi       RWO            Delete           Bound    pet2cattle/pet2cattle-static   gp2                     9d6h
$ kubectl exec -it pet2cattle-79979695b-7rmg6 -- df -hP
Filesystem      Size  Used  Avail  Use%  Mounted on
overlay          20G   11G   9.5G   53%  /
tmpfs            64M     0    64M    0%  /dev
tmpfs           3.9G     0   3.9G    0%  /sys/fs/cgroup
/dev/xvda1       20G   11G   9.5G   53%  /tmp
shm              64M     0    64M    0%  /dev/shm
/dev/xvdbg       35G  1.3G    34G    4%  /opt/pet2cattle/static
tmpfs           3.9G   12K   3.9G    1%  /run/secrets/kubernetes.io/serviceaccount
tmpfs           3.9G     0   3.9G    0%  /proc/acpi
tmpfs           3.9G     0   3.9G    0%  /proc/scsi
tmpfs           3.9G     0   3.9G    0%  /sys/firmware
Bear in mind that matching a pod's volumeMounts to the Linux mounts visible inside the container can be a complex task.
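To help with that mapping, kubectl's jsonpath output can list each volume of the pod together with the PVC backing it (pod and namespace names taken from the example above):

```shell
# For each volume in the pod spec, print its name and, when present,
# the PersistentVolumeClaim that backs it (empty for non-PVC volumes).
kubectl get pod pet2cattle-79979695b-7rmg6 -n pet2cattle \
  -o jsonpath='{range .spec.volumes[*]}{.name}{"\t"}{.persistentVolumeClaim.claimName}{"\n"}{end}'
```

From there, the volume name can be matched against the volumeMounts section of each container to find the mount path.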
Finally, to resize the volume we can change the storage request on the PVC to the new size we want:
$ kubectl get pvc pet2cattle-data -o yaml | sed 's/storage: 35Gi/storage: 40Gi/g' | kubectl apply -f -
persistentvolumeclaim/pet2cattle-data configured
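The same change can also be done without piping through sed, using kubectl patch on the PVC directly (a sketch using the same PVC name as above):

```shell
# Merge-patch the PVC's storage request; the CSI driver then grows the EBS volume.
kubectl patch pvc pet2cattle-data -n pet2cattle \
  -p '{"spec":{"resources":{"requests":{"storage":"40Gi"}}}}'
```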
It can take a while for the CSI driver to apply the changes, but eventually we will be able to see that the volume has been resized:
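While the resize is in progress, we can follow it through the PVC's events and conditions; a FileSystemResizePending condition, for instance, means the EBS volume has already been grown on the AWS side but the filesystem inside still needs to be expanded:

```shell
# Watch the PVC capacity change, then inspect events and conditions for details
kubectl get pvc pet2cattle-data -n pet2cattle -w
kubectl describe pvc pet2cattle-data -n pet2cattle
```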
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   REASON   AGE
pvc-a1448f38-5f28-492e-a09c-8a900b9fb43e   40Gi       RWO            Delete           Bound    pet2cattle/pet2cattle-static   gp2                     9d6h
$ kubectl exec -it pet2cattle-79979695b-7rmg6 -- df -hP
Filesystem      Size  Used  Avail  Use%  Mounted on
overlay          20G   11G   9.5G   53%  /
tmpfs            64M     0    64M    0%  /dev
tmpfs           3.9G     0   3.9G    0%  /sys/fs/cgroup
/dev/xvda1       20G   11G   9.5G   53%  /tmp
shm              64M     0    64M    0%  /dev/shm
/dev/xvdbg       40G  1.3G    38G    4%  /opt/pet2cattle/static
tmpfs           3.9G   12K   3.9G    1%  /run/secrets/kubernetes.io/serviceaccount
tmpfs           3.9G     0   3.9G    0%  /proc/acpi
tmpfs           3.9G     0   3.9G    0%  /proc/scsi
tmpfs           3.9G     0   3.9G    0%  /sys/firmware
Posted on 10/05/2021