2 min read
To collect access logs it is often more convenient to enable them at the load balancer level rather than aggregating logs from all the backend services. If we are using an AWS ALB we can configure it to push its logs to an S3 bucket.
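A minimal sketch of what that looks like in Terraform; the bucket name and subnet variable below are illustrative, and the bucket's policy must already allow the regional ELB log delivery account to write to it:

# Sketch only: names are illustrative, and the S3 bucket policy
# must already grant the regional ELB log delivery account access
resource "aws_lb" "example" {
  name               = "example-alb"
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids # hypothetical variable

  # Push the ALB's access logs to S3
  access_logs {
    bucket  = "my-alb-logs"
    prefix  = "example-alb"
    enabled = true
  }
}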
29/04/2022
Read more...
4 min read
While trying to deploy Pods we might notice in the Events section that the Pod cannot be scheduled due to a volume node affinity conflict:
$ kubectl describe pod website-365-flask-ampa2-ha-member-1 -n website-365
Name: website-365-flask-ampa2-ha-member-1
Namespace: website-365
Priority: 0
Node: <none>
Labels: (...)
Annotations: (...)
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/website-365-flask-ampa2-ha-member
Init Containers:
(...)
Containers:
(...)
Conditions:
Type Status
PodScheduled False
Volumes:
volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: volume-website-365-flask-ampa2-ha-member-1
ReadOnly: false
(...)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NotTriggerScaleUp 31m (x20835 over 7d19h) cluster-autoscaler pod didn't trigger scale-up: 2 node(s) had taint {pti/role: system}, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict
Normal NotTriggerScaleUp 95s (x46144 over 7d19h) cluster-autoscaler pod didn't trigger scale-up: 1 node(s) had volume node affinity conflict, 2 node(s) had taint {pti/role: system}, that the pod didn't tolerate
Warning FailedScheduling 64s (x2401 over 43h) default-scheduler 0/4 nodes are available: 2 node(s) had taint {pti/role: system}, that the pod didn't tolerate, 2 node(s) had volume node affinity conflict.
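This usually means the Pod's PersistentVolume is pinned to an availability zone where no suitable node is available (or can be scaled up). A quick way to confirm, reusing the claim name from the output above; the second command takes the PV name returned by the first:

$ kubectl get pvc volume-website-365-flask-ampa2-ha-member-1 -n website-365 -o jsonpath='{.spec.volumeName}'
$ kubectl get pv <pv-name> -o jsonpath='{.spec.nodeAffinity}'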
27/04/2022
Read more...
3 min read
If we are using Terraform to create subnets on AWS we are going to need to split the VPC's network range into several pieces, one for each AZ. We can let Terraform handle all the details by using the cidrsubnet() function.
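A minimal sketch of the idea; the /16 CIDR and the three-subnet count are illustrative:

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

data "aws_availability_zones" "available" {
  state = "available"
}

# One subnet per AZ; cidrsubnet() adds 8 bits to the VPC's /16,
# yielding 10.0.0.0/24, 10.0.1.0/24 and 10.0.2.0/24
resource "aws_subnet" "private" {
  count             = 3
  vpc_id            = aws_vpc.main.id
  availability_zone = data.aws_availability_zones.available.names[count.index]
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
}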
20/04/2022
Read more...
2 min read
Hard coding values is never a good idea. Using the aws_ami datasource we can query AWS to fetch the latest AMI available, or any AMI really, as long as we properly set the filters so that just one AMI is selected.
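For example, a sketch that picks the latest Amazon Linux 2 AMI; the name pattern is illustrative and should be tightened until exactly one AMI matches:

# Sketch only: adjust the filters to match your use case
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_instance" "example" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t3.micro"
}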
06/04/2022
Read more...
2 min read
If we are using the archive_file datasource to zip a Lambda function so we can push it to AWS, we need to set source_code_hash to its hash to make sure the function gets updated when the code changes:
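A minimal sketch, assuming the function code lives in a single main.py; the file paths, runtime and IAM role reference are illustrative:

# Sketch only: paths, runtime and the role reference are illustrative
data "archive_file" "lambda" {
  type        = "zip"
  source_file = "${path.module}/lambda/main.py"
  output_path = "${path.module}/lambda.zip"
}

resource "aws_lambda_function" "example" {
  function_name = "example"
  filename      = data.archive_file.lambda.output_path
  handler       = "main.handler"
  runtime       = "python3.9"
  role          = aws_iam_role.lambda.arn # hypothetical role

  # Forces a redeploy whenever the zipped source changes
  source_code_hash = data.archive_file.lambda.output_base64sha256
}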
01/04/2022
Read more...