How Kubernetes hides away the volumeMounts complexity

4 min read | by Jordi Prats

If we try to compare volumeMounts with the actual mounts we have on a pod using, for example, df, the result can be quite confusing due to the usage of the overlay filesystem.

Let's consider the volumeMounts section of a deploy:

$ kubectl get deploy pet2cattle -o yaml
(...)
          volumeMounts:
          - mountPath: /opt/pet2cattle/conf
            name: config
          - mountPath: /opt/pet2cattle/data
            name: pet2cattle
            subPath: data
          - mountPath: /opt/pet2cattle/lib
            name: pet2cattle
            subPath: lib
          - mountPath: /tmp
            name: tmp-dir
(...)
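For context, these mounts reference a volumes section that is not shown above. Here is a sketch of what it could look like: the PersistentVolumeClaim behind pet2cattle is mentioned later in the post, while the ConfigMap behind config, the emptyDir behind tmp-dir and the object names are assumptions for illustration:

      volumes:
      - name: config
        configMap:
          name: pet2cattle-config   # assumed ConfigMap name
      - name: pet2cattle
        persistentVolumeClaim:
          claimName: pet2cattle     # assumed claim name
      - name: tmp-dir
        emptyDir: {}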

And compare it with the filesystem we see on the pod:

$ kubectl exec pet2cattle-8475d6697-jbmsm -- df -hP
Filesystem      Size  Used Avail Use% Mounted on
overlay         100G  9.7G   91G  10% /
tmpfs            64M     0   64M   0% /dev
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/xvda1      100G  9.7G   91G  10% /tmp
shm              64M     0   64M   0% /dev/shm
/dev/xvdcu       20G  2.5G   18G  13% /opt/pet2cattle/lib
tmpfs           3.9G   12K  3.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs           3.9G     0  3.9G   0% /proc/acpi
tmpfs           3.9G     0  3.9G   0% /proc/scsi
tmpfs           3.9G     0  3.9G   0% /sys/firmware
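Part of the confusion is that df typically omits duplicate entries for the same device, so bind mounts (which is what subPath mounts are under the hood) can vanish from its output. To see every mount we can read /proc/mounts from inside the pod instead (output omitted here since it varies per node and container runtime, and assuming grep is available in the image):

$ kubectl exec pet2cattle-8475d6697-jbmsm -- grep pet2cattle /proc/mounts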

It's obvious that volumeMounts are not mapped directly to a filesystem or a mount. What is especially misleading here is the /opt/pet2cattle/lib mount: data and lib use the exact same volume (pet2cattle, via two subPaths), yet only lib shows up as a dedicated mount. Let's check whether the changes are persistent (since we are using a PersistentVolumeClaim) by creating a file in the data directory:

$ kubectl exec -it pet2cattle-8475d6697-jbmsm -- touch /opt/pet2cattle/data/jordi

And then deleting the pod and waiting for the deployment controller to spawn another pod:

$ kubectl delete pod pet2cattle-8475d6697-jbmsm
pod "pet2cattle-8475d6697-jbmsm" deleted
$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
pet2cattle-8475d6697-2sk4m   1/1     Running   0          8m44s

If we check the data directory, we will see that the file is still there:

$ kubectl exec -it pet2cattle-8475d6697-2sk4m -- ls /opt/pet2cattle/data/
votacions  jordi  ticketing
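The file survived because the pet2cattle volume is backed by a PersistentVolumeClaim, whose lifecycle is independent of any single pod. We can double-check that the claim is still bound after the pod deletion (the claim name here is an assumption, since the volumes section isn't shown above):

$ kubectl get pvc pet2cattle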

Another test we can run is to create a 1GB file and then check the disk usage. From a shell inside the pod, the current disk usage is:

$ cd /opt/pet2cattle/data/
$ df -hP
Filesystem      Size  Used Avail Use% Mounted on
overlay         100G  9.7G   91G  10% /
tmpfs            64M     0   64M   0% /dev
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/xvda1      100G  9.7G   91G  10% /tmp
shm              64M     0   64M   0% /dev/shm
/dev/xvdcu       20G  2.5G   18G  13% /opt/pet2cattle/lib
tmpfs           3.9G   12K  3.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs           3.9G     0  3.9G   0% /proc/acpi
tmpfs           3.9G     0  3.9G   0% /proc/scsi
tmpfs           3.9G     0  3.9G   0% /sys/firmware

So we can create the file using dd:

$ dd if=/dev/zero of=/opt/pet2cattle/data/test bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.01133 s, 534 MB/s
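Before rechecking the disk usage, we can confirm the file landed where we expect, still from the same in-pod shell:

$ ls -lh /opt/pet2cattle/data/test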

And recheck the disk usage:

$ df -hP
Filesystem      Size  Used Avail Use% Mounted on
overlay         100G  9.7G   91G  10% /
tmpfs            64M     0   64M   0% /dev
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/xvda1      100G  9.7G   91G  10% /tmp
shm              64M     0   64M   0% /dev/shm
/dev/xvdcu       20G  3.5G   17G  18% /opt/pet2cattle/lib
tmpfs           3.9G   12K  3.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs           3.9G     0  3.9G   0% /proc/acpi
tmpfs           3.9G     0  3.9G   0% /proc/scsi
tmpfs           3.9G     0  3.9G   0% /sys/firmware

So, the deployment is honoring the volumeMounts: the 1GB we wrote to /opt/pet2cattle/data shows up on /dev/xvdcu, which df lists under /opt/pet2cattle/lib (its usage went from 2.5G to 3.5G), confirming that both subPath mounts come from the same volume. The usage of the overlay filesystem, together with the way df reports bind mounts, hides away the mount complexity.
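A more direct way to see through this is findmnt which, unlike df, lists bind mounts explicitly and shows the bound subdirectory between brackets in the SOURCE column. Assuming findmnt is available in the container image, we would expect both mountpoints to resolve to the same device, something like /dev/xvdcu[/data] and /dev/xvdcu[/lib]:

$ kubectl exec pet2cattle-8475d6697-2sk4m -- findmnt -o TARGET,SOURCE /opt/pet2cattle/data
$ kubectl exec pet2cattle-8475d6697-2sk4m -- findmnt -o TARGET,SOURCE /opt/pet2cattle/lib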

If we check how the overlay is mounted we won't be able to make much sense out of it, but at least we will see how it is achieved:

$ mount
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/THAPSGDHSJ4P5FST7LLRVBMZMP:/var/lib/docker/overlay2/l/KZCZRMQQG67LZEZ3NLMMGMQTRF:/var/lib/docker/overlay2/l/2BKNYQEDA77RCNI4BUGTXX3RIE:/var/lib/docker/overlay2/l/XR6L2C3YVCAKAE4RDWTLTXOAGU:/var/lib/docker/overlay2/l/G7C6D7VG34SU3BMXBYQ4XH3UA6:/var/lib/docker/overlay2/l/T6TFJZPQXVFGW5HCKR3DM2ASAQ:/var/lib/docker/overlay2/l/HVYYO3QZP4N4QLDDLSCES2YOT2:/var/lib/docker/overlay2/l/BMMEAEMPIXAZZZ6CYGUPOHBJCP:/var/lib/docker/overlay2/l/4KR5ZOBJ4X7UF2RHXFSMKBULDX:/var/lib/docker/overlay2/l/E5KWT5RZZ43JEP67AYYVN4COZB,upperdir=/var/lib/docker/overlay2/40c529bb1108c39b29f49083b39cf7520effc400bba3a58957b739240e8ac783/diff,workdir=/var/lib/docker/overlay2/40c529bb1108c39b29f49083b39cf7520effc400bba3a58957b739240e8ac783/work)
(...)
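Stripped of the long docker-generated paths, the idea is simple: overlay stacks one or more read-only lowerdir layers (the image layers) under a writable upperdir (the container layer) and presents the merged view at the mountpoint. As a minimal sketch, assuming a Linux host with root access and with demo paths made up for the occasion, we can reproduce the mechanism by hand:

$ mkdir -p /tmp/overlay-demo/{lower,upper,work,merged}
$ echo "from the image" > /tmp/overlay-demo/lower/file
$ sudo mount -t overlay overlay -o lowerdir=/tmp/overlay-demo/lower,upperdir=/tmp/overlay-demo/upper,workdir=/tmp/overlay-demo/work /tmp/overlay-demo/merged
$ echo "container write" > /tmp/overlay-demo/merged/newfile
$ ls /tmp/overlay-demo/upper
newfile

Writes never touch the lower layers: anything we change under merged ends up in upperdir, which is exactly how a container can "modify" a read-only image.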

Posted on 13/04/2021