Install longhorn on a K3S cluster

3 min read | by Jordi Prats

Longhorn is highly available persistent storage for Kubernetes. It implements distributed block storage using containers and microservices: it creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on different nodes. It might sound intimidating, but it's very straightforward to install.

On each node we will need to install some dependencies:

  • open-iscsi: for the Longhorn DaemonSet
  • jq: for the install script

So, on Ubuntu we just need to run:

apt-get install open-iscsi jq -y
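Installing the package doesn't guarantee the iSCSI daemon is actually running, so a quick sanity check doesn't hurt. A sketch (iscsid is the service name on Ubuntu; it may differ on other distributions):

```shell
# Make sure the iSCSI daemon is enabled and running (Ubuntu service name)
sudo systemctl enable --now iscsid

# Verify both dependencies are resolvable on the PATH
for bin in iscsiadm jq; do
  command -v "$bin" >/dev/null || echo "missing: $bin"
done
```

If the loop prints nothing, both binaries are in place.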

Once ready, we can run Longhorn's environment check script to make sure everything is in place:

# curl | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2794  100  2794    0     0  39914      0 --:--:-- --:--:-- --:--:-- 39914
daemonset.apps/longhorn-environment-check created
waiting for pods to become ready (0/1)
waiting for pods to become ready (0/1)
all pods ready (1/1)

  MountPropagation is enabled!

cleaning up...
daemonset.apps "longhorn-environment-check" deleted
clean up complete

If that looks good we can proceed to create the values.yaml. It's safe to go with the defaults, so we just need to enable the ingress if we want to expose its web interface:

ingress:
  enabled: true
  host: longhorn.local

We can install the helm chart as follows:

helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace -f values.yaml

We'll have to wait a while for Kubernetes to create all the required pods:

$ kubectl get pods -n longhorn-system
NAME                                        READY   STATUS    RESTARTS   AGE
csi-attacher-75588bff58-6njjt               1/1     Running   0          16m
csi-attacher-75588bff58-v5fqv               1/1     Running   0          16m
csi-attacher-75588bff58-xb5pp               1/1     Running   0          16m
csi-provisioner-669c8cc698-dt9wd            1/1     Running   0          16m
csi-provisioner-669c8cc698-gmtvm            1/1     Running   0          16m
csi-provisioner-669c8cc698-rlx4b            1/1     Running   0          16m
csi-resizer-5c88bfd4cf-98sbp                1/1     Running   0          16m
csi-resizer-5c88bfd4cf-jltt5                1/1     Running   0          16m
csi-resizer-5c88bfd4cf-mgwhj                1/1     Running   0          16m
csi-snapshotter-69f8bc8dcf-6kbrn            1/1     Running   0          16m
csi-snapshotter-69f8bc8dcf-rr2xj            1/1     Running   0          16m
csi-snapshotter-69f8bc8dcf-zhx2w            1/1     Running   0          16m
engine-image-ei-d4c780c6-77mcf              1/1     Running   0          17m
engine-image-ei-d4c780c6-phgbg              1/1     Running   0          17m
engine-image-ei-d4c780c6-z7bzd              1/1     Running   0          17m
instance-manager-e-1b82056b                 1/1     Running   0          17m
instance-manager-e-45737e61                 1/1     Running   0          16m
instance-manager-e-9a755162                 1/1     Running   0          17m
instance-manager-r-6b4a827c                 1/1     Running   0          8m45s
instance-manager-r-758bd997                 1/1     Running   0          6m48s
longhorn-csi-plugin-nwvjn                   2/2     Running   0          16m
longhorn-csi-plugin-wbs5h                   2/2     Running   0          16m
longhorn-csi-plugin-wmqxm                   2/2     Running   0          16m
longhorn-driver-deployer-75f68555c9-kqp5q   1/1     Running   0          17m
longhorn-manager-d4455                      1/1     Running   0          17m
longhorn-manager-dk7tl                      1/1     Running   0          17m
longhorn-manager-knxsk                      1/1     Running   0          17m
longhorn-ui-75ccbd4695-nftq7                1/1     Running   0          17m
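Rather than re-running kubectl get pods, we can block until every pod in the namespace reports Ready. A sketch (the 10-minute timeout is an arbitrary choice):

```shell
# Wait for all Longhorn pods to become Ready, giving up after 10 minutes
kubectl -n longhorn-system wait pod --all \
  --for=condition=Ready --timeout=600s
```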

Once ready we can check that there's a longhorn StorageClass available:

$ kubectl get sc
NAME                   RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   Delete          WaitForFirstConsumer   false                  9d
longhorn               Delete          Immediate              true                   3d

To start using it we can either set it as the default StorageClass or reference it explicitly in the PVC:

$ kubectl get pvc -n minio
NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data0-minio-pet2cattle-ss-0-0   Bound    pvc-6137ddaf-6f07-491e-b217-a33328d2d4a3   20Gi       RWO            longhorn       6m7s
data1-minio-pet2cattle-ss-0-0   Bound    pvc-c1638ade-5b0e-4023-b981-333f95d6888f   20Gi       RWO            longhorn       6m7s
data2-minio-pet2cattle-ss-0-0   Bound    pvc-0e288e7d-2e22-454f-96c0-c7122df87c44   20Gi       RWO            longhorn       6m7s
data3-minio-pet2cattle-ss-0-0   Bound    pvc-adad20f8-1fb4-4d9d-8c5a-d36e2a5a5c97   20Gi       RWO            longhorn       6m7s
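Both options can be sketched as follows: the annotation below is the standard Kubernetes way to mark a default StorageClass, and the PVC's name and size are just examples:

```shell
# Option 1: mark longhorn as the default StorageClass
kubectl patch storageclass longhorn \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

# Option 2: reference it explicitly from a PVC (hypothetical name and size)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 20Gi
EOF
```

If we go with option 1, we may also want to remove the default annotation from local-path so only one StorageClass is marked as default.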

Posted on 13/12/2021