3 min read
If we have a K3S Kubernetes cluster that we want to create a backup of, we can use k3s etcd-snapshot, but that is only going to back up the information related to Pods and other Kubernetes objects; it won't back up data that resides outside of the cluster, such as disks (PersistentVolumes, emptyDirs, ...), or even their state.
Having clarified that we are only going to back up part of the data, let's take a look at how to do it.
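As a minimal sketch of the idea (assuming a k3s version that ships the etcd-snapshot subcommand, the embedded etcd datastore, and the default data directory), an on-demand snapshot can be taken from a server node:

# Take an on-demand snapshot; the name "manual-backup" is an arbitrary placeholder
sudo k3s etcd-snapshot save --name manual-backup

# By default snapshots end up under the k3s data directory
sudo ls -lh /var/lib/rancher/k3s/server/db/snapshots/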
06/05/2022
Read more...

4 min read
While trying to deploy Pods we might notice in the Events section that the Pod cannot be scheduled due to a volume node affinity conflict:
$ kubectl describe pod website-365-flask-ampa2-ha-member-1 -n website-365
Name: website-365-flask-ampa2-ha-member-1
Namespace: website-365
Priority: 0
Node: <none>
Labels: (...)
Annotations: (...)
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/website-365-flask-ampa2-ha-member
Init Containers:
(...)
Containers:
(...)
Conditions:
Type Status
PodScheduled False
Volumes:
volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: volume-website-365-flask-ampa2-ha-member-1
ReadOnly: false
(...)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NotTriggerScaleUp 31m (x20835 over 7d19h) cluster-autoscaler pod didn't trigger scale-up: 2 node(s) had taint {pti/role: system}, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict
Normal NotTriggerScaleUp 95s (x46144 over 7d19h) cluster-autoscaler pod didn't trigger scale-up: 1 node(s) had volume node affinity conflict, 2 node(s) had taint {pti/role: system}, that the pod didn't tolerate
Warning FailedScheduling 64s (x2401 over 43h) default-scheduler 0/4 nodes are available: 2 node(s) had taint {pti/role: system}, that the pod didn't tolerate, 2 node(s) had volume node affinity conflict.
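This conflict typically means that the PersistentVolume backing the claim is pinned through its nodeAffinity (for example to a specific availability zone) to nodes where the Pod cannot run. As a sketch of how to confirm it, reusing the claim name from the output above:

# Find the PersistentVolume bound to the claim shown in the Pod's Volumes section
kubectl -n website-365 get pvc volume-website-365-flask-ampa2-ha-member-1 \
  -o jsonpath='{.spec.volumeName}{"\n"}'

# Inspect the node affinity the PV was provisioned with
# (replace <pv-name> with the value returned by the previous command)
kubectl get pv <pv-name> -o jsonpath='{.spec.nodeAffinity}{"\n"}'

# Compare it against the labels of the nodes that are actually available
kubectl get nodes --show-labels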
27/04/2022
Read more...

3 min read
Kubernetes, by default, registers all the Pods and Services under the cluster.local DNS zone. At some point we might want to take a look at this zone, but zone transfers are restricted by default:
dnstools# dig axfr cluster.local
; <<>> DiG 9.11.3 <<>> axfr cluster.local
;; global options: +cmd
; Transfer failed.
But if we are using CoreDNS, we can configure it to temporarily allow zone transfers so that we can take a look at it.
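A rough sketch of that idea (the exact stanza depends on the CoreDNS version; recent releases use the dedicated transfer plugin, while older ones accepted a transfer option inside the kubernetes plugin block):

# Edit the stock CoreDNS Corefile (assumes the usual kube-system/coredns ConfigMap)
kubectl -n kube-system edit configmap coredns

# Inside the cluster.local server block, allow transfers to any client, e.g.:
#   transfer {
#       to *
#   }
# (on older CoreDNS versions the equivalent was "transfer to *" inside the
#  kubernetes plugin block)

# Restart CoreDNS so it reloads the Corefile, then retry the transfer
kubectl -n kube-system rollout restart deployment coredns
dig axfr cluster.local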
25/04/2022
Read more...

2 min read
Storing the Terraform state in an S3 bucket with DynamoDB for locking has become the de facto standard for sharing state across an organization. Nevertheless, there are interesting alternatives: we can use a Kubernetes Secret.
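For reference, a minimal sketch of that alternative using Terraform's kubernetes backend, which keeps the state in a Secret (the suffix and kubeconfig path below are placeholders):

# Write a backend configuration that stores the state in a Kubernetes Secret
# (the Secret ends up named tfstate-<workspace>-<secret_suffix>)
cat > backend.tf <<'EOF'
terraform {
  backend "kubernetes" {
    secret_suffix = "example"          # placeholder suffix
    config_path   = "~/.kube/config"   # kubeconfig used to reach the cluster
  }
}
EOF

terraform init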
19/04/2022
Read more...

5 min read
Using an external metrics provider (Kubernetes 1.10+), we can use a HorizontalPodAutoscaler to automatically scale applications based on any metric collected by Prometheus. Let's take a look at how to configure it.
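As a taste of what that looks like, a hedged sketch of an HPA driven by an external metric (it assumes an adapter such as prometheus-adapter already exposes a metric named http_requests_per_second, and a Deployment called myapp; both names are placeholders):

# Hypothetical HPA scaling "myapp" on a Prometheus-backed external metric
# (autoscaling/v2 on recent clusters; older clusters use autoscaling/v2beta2)
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: "100"
EOF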
05/04/2022
Read more...