3 min read | by Jordi Prats
If we have a K3S Kubernetes cluster that we want to back up, we can use k3s etcd-snapshot, but that's only going to back up the information related to Pods and other Kubernetes objects: it won't back up data that lives outside of etcd, such as the contents of disks (PersistentVolumes, emptyDirs, ...) or their state.
Having clarified that we are only going to back up some of the data, let's take a look at how to do it.
Let's assume we have this K3S cluster with a demo Pod that we are going to use to check whether it has been restored:
# kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
default       demo                                      0/1     Pending   0          23s
kube-system   coredns-96cc4f57d-btbhl                   0/1     Pending   0          93m
kube-system   local-path-provisioner-84bb864455-kjtnf   0/1     Pending   0          93m
kube-system   metrics-server-ff9dbcb6c-9d2lb            0/1     Pending   0          93m
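The demo Pod itself is not particularly important; something as simple as the following would do (the image here is just an example, any Pod will work):
# kubectl run demo --image=nginx --restart=Never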
We are going to use an S3 bucket to store the backups; to do so we need to specify the bucket and the path where we want the snapshot stored:
# k3s etcd-snapshot --s3 --s3-bucket=k3s-backup --etcd-s3-folder=backups --etcd-s3-region=us-west-2
INFO[0000] Managed etcd cluster bootstrap already complete and initialized
INFO[0000] Applying CRD addons.k3s.cattle.io
INFO[0000] Applying CRD helmcharts.helm.cattle.io
INFO[0000] Applying CRD helmchartconfigs.helm.cattle.io
INFO[0000] Saving etcd snapshot to /var/lib/rancher/k3s/server/db/snapshots/on-demand-i-012758202ee027822-1651769574
{"level":"info","ts":"2022-05-05T16:52:54.773Z","caller":"snapshot/v3_snapshot.go:65","msg":"created temporary db file","path":"/var/lib/rancher/k3s/server/db/snapshots/on-demand-i-012758202ee027822-1651769574.part"}
{"level":"info","ts":"2022-05-05T16:52:54.776Z","logger":"client","caller":"v3@v3.5.4-k3s1/maintenance.go:211","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":"2022-05-05T16:52:54.777Z","caller":"snapshot/v3_snapshot.go:73","msg":"fetching snapshot","endpoint":"https://127.0.0.1:2379"}
{"level":"info","ts":"2022-05-05T16:52:54.799Z","logger":"client","caller":"v3@v3.5.4-k3s1/maintenance.go:219","msg":"completed snapshot read; closing"}
{"level":"info","ts":"2022-05-05T16:52:54.804Z","caller":"snapshot/v3_snapshot.go:88","msg":"fetched snapshot","endpoint":"https://127.0.0.1:2379","size":"1.5 MB","took":"now"}
{"level":"info","ts":"2022-05-05T16:52:54.804Z","caller":"snapshot/v3_snapshot.go:97","msg":"saved","path":"/var/lib/rancher/k3s/server/db/snapshots/on-demand-i-012758202ee027822-1651769574"}
INFO[0000] Saving etcd snapshot on-demand-i-012758202ee027822-1651769574 to S3
INFO[0000] Checking if S3 bucket k3s-backup exists
INFO[0000] S3 bucket k3s-backup exists
INFO[0000] Uploading snapshot /var/lib/rancher/k3s/server/db/snapshots/on-demand-i-012758202ee027822-1651769574 to S3
INFO[0000] S3 upload complete for on-demand-i-012758202ee027822-1651769574
INFO[0000] Reconciling etcd snapshot data in k3s-etcd-snapshots ConfigMap
INFO[0000] Reconciliation of snapshot data in k3s-etcd-snapshots ConfigMap complete
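The command above assumes the node can already reach the bucket (for example through an instance profile); if that's not the case, we can also pass the S3 credentials explicitly with the etcd-s3 key options (placeholder values):
# k3s etcd-snapshot --s3 --s3-bucket=k3s-backup --etcd-s3-folder=backups --etcd-s3-region=us-west-2 \
    --etcd-s3-access-key=<access-key> --etcd-s3-secret-key=<secret-key>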
We can check with k3s etcd-snapshot ls that the backup has been completed:
# k3s etcd-snapshot ls
Name                                        Location                                                                                     Size      Created
on-demand-i-012758202ee027822-1651769574   file:///var/lib/rancher/k3s/server/db/snapshots/on-demand-i-012758202ee027822-1651769574    1499168   2022-05-05T16:52:54Z
Even though we can see a local file, it has also been uploaded to the S3 bucket:
$ aws s3 ls s3://k3s-backup --recursive
2022-05-05 18:52:55 1499168 backups/on-demand-i-012758202ee027822-1651769574
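Since the log above mentions that the snapshot data is reconciled into the k3s-etcd-snapshots ConfigMap, we can also check it from within the cluster:
# kubectl -n kube-system get configmap k3s-etcd-snapshots -o yaml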
To install a new K3S server on a different machine and restore from the backup we have on the S3 bucket, we have to use the --cluster-reset and --cluster-reset-restore-path options to specify the snapshot, together with the --etcd-s3 options that point to the bucket we want to restore from:
# curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server" sh -s - \
--etcd-s3 " \
--etcd-s3-bucket k3s-backup \
--etcd-s3-folder backups \
--etcd-s3-region us-west-2 \
--cluster-reset \
--cluster-reset-restore-path=backups/on-demand-i-012758202ee027822-1651769574
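As a side note, if we were restoring on the same server where k3s is already installed, the documented alternative is to stop the service and run the server binary directly with the same cluster-reset and etcd-s3 options, for example:
# systemctl stop k3s
# k3s server --cluster-reset \
    --etcd-s3 --etcd-s3-bucket=k3s-backup --etcd-s3-folder=backups --etcd-s3-region=us-west-2 \
    --cluster-reset-restore-path=backups/on-demand-i-012758202ee027822-1651769574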
As soon as the first start completes, k3s will require us to remove the cluster-reset options from its systemd unit file; to do so we can use the following sed command:
# sed -e '/--cluster-reset/d' -i /etc/systemd/system/k3s.service
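This works because the install script writes each extra argument on its own line of the ExecStart in /etc/systemd/system/k3s.service, so the relevant part of the unit file looks roughly like this (excerpt, exact quoting can vary between versions):
ExecStart=/usr/local/bin/k3s \
    server \
        '--etcd-s3' \
        ...
        '--cluster-reset' \
        '--cluster-reset-restore-path=backups/on-demand-i-012758202ee027822-1651769574' \
Deleting every line that contains --cluster-reset therefore removes exactly those two options.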
After this we just need to reload its unit file and restart the service:
# systemctl daemon-reload
# systemctl restart k3s
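Before checking the workloads we can make sure the service came up cleanly, for example with:
# systemctl status k3s
# journalctl -u k3s -f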
Once the k3s service starts, we will be able to see that the Pod has been restored:
# kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
default       demo                                      0/1     Pending   0          2m48s
kube-system   coredns-96cc4f57d-btbhl                   0/1     Pending   0          95m
kube-system   local-path-provisioner-84bb864455-kjtnf   0/1     Pending   0          95m
kube-system   metrics-server-ff9dbcb6c-9d2lb            0/1     Pending   0          95m
Posted on 06/05/2022