2 min read
To collect access logs, it might be more convenient to enable them at the load balancer level rather than having to aggregate logs from all the backend services. If we are using an AWS ALB we can configure it to push its logs to an S3 bucket.
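For example, assuming the bucket already exists with a policy that allows the ALB to write to it, enabling the logs could look roughly like this from the AWS CLI (the ARN, bucket name and prefix are placeholders):

# enable access logging on an existing ALB; the attribute keys are the
# standard elbv2 access log settings
$ aws elbv2 modify-load-balancer-attributes \
    --load-balancer-arn <alb-arn> \
    --attributes Key=access_logs.s3.enabled,Value=true \
                 Key=access_logs.s3.bucket,Value=my-alb-logs \
                 Key=access_logs.s3.prefix,Value=my-alb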
29/04/2022
Read more...
4 min read
While trying to deploy Pods we might notice in the Events section that the Pod cannot be scheduled due to a volume node affinity conflict:
$ kubectl describe pod website-365-flask-ampa2-ha-member-1 -n website-365
Name: website-365-flask-ampa2-ha-member-1
Namespace: website-365
Priority: 0
Node: <none>
Labels: (...)
Annotations: (...)
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/website-365-flask-ampa2-ha-member
Init Containers:
(...)
Containers:
(...)
Conditions:
Type Status
PodScheduled False
Volumes:
volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: volume-website-365-flask-ampa2-ha-member-1
ReadOnly: false
(...)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NotTriggerScaleUp 31m (x20835 over 7d19h) cluster-autoscaler pod didn't trigger scale-up: 2 node(s) had taint {pti/role: system}, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict
Normal NotTriggerScaleUp 95s (x46144 over 7d19h) cluster-autoscaler pod didn't trigger scale-up: 1 node(s) had volume node affinity conflict, 2 node(s) had taint {pti/role: system}, that the pod didn't tolerate
Warning FailedScheduling 64s (x2401 over 43h) default-scheduler 0/4 nodes are available: 2 node(s) had taint {pti/role: system}, that the pod didn't tolerate, 2 node(s) had volume node affinity conflict.
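This usually means a PersistentVolume (an EBS-backed one, for example) is pinned to one availability zone while the schedulable nodes live in another. A quick way to check, reusing the names from the output above:

# which PV backs the claim?
$ kubectl get pvc volume-website-365-flask-ampa2-ha-member-1 -n website-365 \
    -o jsonpath='{.spec.volumeName}'
# which zone is that PV pinned to?
$ kubectl get pv <pv-name> -o jsonpath='{.spec.nodeAffinity}'
# which zones are the nodes actually in?
$ kubectl get nodes -L topology.kubernetes.io/zone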
27/04/2022
Read more...
5 min read
When we change the source location of any terraform module we need to run terraform init again so it picks up the right version:
$ terraform plan
Acquiring state lock. This may take a few moments...
Releasing state lock. This may take a few moments...
╷
│ Error: Module source has changed
│
│ on main.tf line 17, in module "terraform-module":
│ 17: source = "git::ssh://git@github.com/pet2cattle/terraform-module.git?ref=1.0.2"
│
│ The source address was changed since this module was installed. Run "terraform init" to install all modules required by this configuration.
╵
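As the message says, re-running terraform init reinstalls the module from its new source (output trimmed and illustrative):

$ terraform init
Initializing modules...
Downloading git::ssh://git@github.com/pet2cattle/terraform-module.git?ref=1.0.2 for terraform-module...
- terraform-module in .terraform/modules/terraform-module

Terraform has been successfully initialized!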
26/04/2022
Read more...
3 min read
Kubernetes, by default, registers all the Pods and services using the cluster.local DNS zone. At some point we might want to take a look at this zone, but zone transfers are restricted by default:
dnstools# dig axfr cluster.local
; <<>> DiG 9.11.3 <<>> axfr cluster.local
;; global options: +cmd
; Transfer failed.
But if we are using CoreDNS, we can configure it to temporarily allow zone transfers so we can take a look at it.
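With a recent CoreDNS (1.7+) this means adding the transfer plugin to the server block in the coredns ConfigMap in kube-system (older releases used a "transfer to" option inside the kubernetes block instead). A sketch of the Corefile; "to *" allows AXFR from any client, so it should only stay enabled while we look around:

.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    # temporarily allow zone transfers for cluster.local
    transfer cluster.local {
        to *
    }
    forward . /etc/resolv.conf
    cache 30
}

Once CoreDNS reloads, dig axfr cluster.local @<coredns-service-ip> should dump the zone.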
25/04/2022
Read more...
2 min read
To learn about new terraform functions we can use terraform output to see how a variable is modified, but this can take a while if we have a lot of resources to compute.
Instead, if we know the values we want to use beforehand, it might be easier and quicker to use terraform console.
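For example, a quick interactive session (the sample expressions are just illustrations):

$ terraform console
> upper("pet2cattle")
"PET2CATTLE"
> cidrsubnet("10.0.0.0/16", 8, 1)
"10.0.1.0/24"
> exit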
22/04/2022
Read more...
3 min read
If we are using terraform to create subnets on AWS we are going to need to split the VPC's network range into several pieces, one for each AZ. We can let terraform handle all the details by using the cidrsubnet() function.
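A minimal sketch, assuming a VPC resource named aws_vpc.main with a /16 CIDR and three AZs:

data "aws_availability_zones" "available" {}

resource "aws_subnet" "per_az" {
  count             = 3
  vpc_id            = aws_vpc.main.id
  availability_zone = data.aws_availability_zones.available.names[count.index]
  # /16 + 8 new bits = /24 networks: 10.0.0.0/24, 10.0.1.0/24, 10.0.2.0/24
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
}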
20/04/2022
Read more...
2 min read
Storing the terraform state in an S3 bucket with DynamoDB for locking has become the de facto standard for sharing the state across an organization. Nevertheless, there are interesting alternatives: we can use a Kubernetes Secret.
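A minimal sketch of the kubernetes backend (the suffix and namespace are just examples); the state ends up stored in a Secret named tfstate-<workspace>-<suffix>:

terraform {
  backend "kubernetes" {
    secret_suffix = "mystate"
    namespace     = "terraform-state"
    config_path   = "~/.kube/config"
  }
}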
19/04/2022
Read more...
2 min read
Hard-coding values is never a good idea. Using the aws_ami datasource we can query AWS to fetch the latest available AMI, or any AMI really, as long as we properly set the filters so that just one AMI is selected.
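For example, to pick the most recent Amazon Linux 2 AMI (the owner and name pattern are the usual values for that image family):

data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# then, for instance: ami = data.aws_ami.amazon_linux_2.id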
06/04/2022
Read more...
5 min read
Using an external metrics provider (Kubernetes 1.10+) we can use a HorizontalPodAutoscaler to automatically scale applications based on any metric collected by Prometheus. Let's take a look at how to configure it.
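A minimal sketch of such an HPA (the metric name and target value are assumptions, and it requires an adapter such as prometheus-adapter exposing the metric through the external metrics API):

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: http_requests_per_second
      target:
        type: Value
        value: "100"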
05/04/2022
Read more...
3 min read
When trying to build container images on Kubernetes we might be tempted to use the Docker-in-Docker approach, but it is considered a security risk and should be avoided.
As an alternative we can use kaniko: a tool to build container images inside containers (and hence on Kubernetes clusters).
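A minimal kaniko Pod could look like this (the git context and destination are placeholders, and pushing to a registry also needs credentials mounted at /kaniko/.docker, omitted here):

apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    # build context fetched straight from git, no Docker daemon involved
    - --context=git://github.com/pet2cattle/example.git
    - --dockerfile=Dockerfile
    - --destination=registry.example.com/demo:latest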
04/04/2022
Read more...