3 min read
In the same way we can use git blame to identify who modified a specific line and when, with kubectl blame we can do the same for Kubernetes objects.
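A minimal taste of the workflow, assuming the blame plugin is installed through krew (the install route and the resource used here are just an example):

$ kubectl krew install blame
$ kubectl blame deployment nginx -n default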
13/05/2022
Read more...
2 min read
One of the drawbacks of installing k3s on an EC2 instance versus using EKS is that we lose the AWS integration, so we cannot use AWS load balancers by default. Thanks to the AWS cloud provider, we can overcome this limitation.
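As a rough sketch of the approach (the k3s flags below are the usual way to hand cloud controller duties to an external provider; the Helm repository and chart name are an assumption to verify against the cloud-provider-aws project, and the instance still needs suitable IAM permissions and a hostname matching its EC2 private DNS name):

# Install k3s without its embedded cloud controller and mark the kubelet as using an external provider
$ curl -sfL https://get.k3s.io | sh -s - server --disable-cloud-controller --kubelet-arg="cloud-provider=external"

# Then deploy the AWS cloud controller manager, for example with Helm
$ helm repo add aws-cloud-controller-manager https://kubernetes.github.io/cloud-provider-aws
$ helm install aws-cloud-controller-manager aws-cloud-controller-manager/aws-cloud-controller-manager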
10/05/2022
Read more...
3 min read
minikube is a great tool for testing: for some activities we might need SSH access to the Kubernetes nodes, and minikube even provides a command for it, so we don't have to break a sweat.
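For instance, with minikube's default node naming (minikube-m02 is just the usual name of the second node in a multi-node profile):

# SSH into the primary node
$ minikube ssh

# SSH into a specific node of a multi-node cluster
$ minikube ssh -n minikube-m02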
09/05/2022
Read more...
3 min read
If we have a K3s Kubernetes cluster that we want to back up, we can use k3s etcd-snapshot, but that's only going to back up the information related to Pods and other Kubernetes objects; it won't back up data that resides outside of the cluster, such as disks (PersistentVolumes, emptyDirs, ...) or their state.
Having clarified that we are only going to back up some of the data, let's take a look at how to do it.
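A minimal sketch of the commands involved, assuming the cluster runs the embedded etcd datastore (on a default single-node SQLite install etcd-snapshot is not available) and the default snapshot directory:

# Take an on-demand snapshot, stored under /var/lib/rancher/k3s/server/db/snapshots/ by default
$ k3s etcd-snapshot save --name pre-upgrade

# List the snapshots we have available
$ k3s etcd-snapshot ls

# Restore by resetting the cluster to the state captured in a given snapshot file
$ k3s server --cluster-reset --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/pre-upgrade-<node>-<timestamp>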
06/05/2022
Read more...
4 min read
While trying to deploy Pods, we might notice in the Events section that the Pod cannot be scheduled due to a volume node affinity conflict:
$ kubectl describe pod website-365-flask-ampa2-ha-member-1 -n website-365
Name: website-365-flask-ampa2-ha-member-1
Namespace: website-365
Priority: 0
Node: <none>
Labels: (...)
Annotations: (...)
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/website-365-flask-ampa2-ha-member
Init Containers:
(...)
Containers:
(...)
Conditions:
Type Status
PodScheduled False
Volumes:
volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: volume-website-365-flask-ampa2-ha-member-1
ReadOnly: false
(...)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NotTriggerScaleUp 31m (x20835 over 7d19h) cluster-autoscaler pod didn't trigger scale-up: 2 node(s) had taint {pti/role: system}, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict
Normal NotTriggerScaleUp 95s (x46144 over 7d19h) cluster-autoscaler pod didn't trigger scale-up: 1 node(s) had volume node affinity conflict, 2 node(s) had taint {pti/role: system}, that the pod didn't tolerate
Warning FailedScheduling 64s (x2401 over 43h) default-scheduler 0/4 nodes are available: 2 node(s) had taint {pti/role: system}, that the pod didn't tolerate, 2 node(s) had volume node affinity conflict.
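A useful first check is to compare the node affinity recorded on the PersistentVolume with the topology labels of the nodes, since this conflict usually means the volume is pinned to a zone where no schedulable node is available (<pv-name> below is a placeholder for the volume bound to the claim):

# Find the PV bound to the claim and inspect its node affinity
$ kubectl get pvc volume-website-365-flask-ampa2-ha-member-1 -n website-365 -o jsonpath='{.spec.volumeName}'
$ kubectl get pv <pv-name> -o yaml | grep -A 10 nodeAffinity

# Compare with the zones the nodes actually live in
$ kubectl get nodes -L topology.kubernetes.io/zone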
27/04/2022
Read more...