Getting started with creating a functional EKS cluster from scratch can be challenging, as it requires some specific settings. While the EKS module will create a new cluster, it does not address how you will expose an application, the tags required for subnets, the number of pod IP addresses, and so on.
This repository contains everything required to spin up a new EKS cluster with Terraform and expose an application via an Application Load Balancer. All you need to do is apply the Terraform code.
By default, the Secrets Store CSI driver is configured to mount secrets as files. However, it is possible to expose secrets as environment variables using the method below. When the pod is started,
the driver creates a Kubernetes Secret, which can then be mounted as an environment variable. The Secret object exists only while the pod is active.
secretName: name of the Kubernetes Secret to be created.
data.objectName: name of the secret object/alias to retrieve data from.
key: key within the Kubernetes Secret used to store the retrieved data.
The above configuration will create a Kubernetes Secret called 'myusername' with the value of the username under the key 'username'. The Kubernetes Secret 'mysecrets' will contain all objects in Mysecrets under the key 'mysecrets'.
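As a sketch, the secretObjects section of a SecretProviderClass that produces the two Secrets described above could look like the following. The metadata name and the AWS provider parameters block are assumptions for illustration; the secretName, objectName, and key values follow the description above.

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secrets                  # hypothetical name
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "Mysecrets"      # secret in AWS Secrets Manager (assumed)
        objectType: "secretsmanager"
        jmesPath:
          - path: "username"
            objectAlias: "username"  # alias referenced below
  secretObjects:
    - secretName: myusername         # k8s Secret to create
      type: Opaque
      data:
        - objectName: username       # alias to retrieve data from
          key: username              # key within the k8s Secret
    - secretName: mysecrets          # contains all objects in Mysecrets
      type: Opaque
      data:
        - objectName: Mysecrets
          key: mysecrets
```

The pod can then consume the synced Secret as an environment variable via a standard secretKeyRef. Note that the Secret is only created once a pod mounts the CSI volume referencing this SecretProviderClass.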
The EKS cluster is configured with an Application Load Balancer. During deployments, pods become unhealthy in the target group for a short while, causing a brief outage.
There are two possible causes for this scenario, and both must be addressed.
1. The ALB takes longer to register and initialize new pods.
2. The ALB is slow to detect and drain terminated pods.
Configure a Pod Readiness Gate to indicate that a pod is registered with the ALB/NLB and healthy to receive traffic. This ensures the new pod is healthy in the target group before the old pod is terminated.
To enable Pod Readiness Gates, add the label elbv2.k8s.aws/pod-readiness-gate-inject: enabled to the application's namespace. The change takes effect for any new pod being deployed.
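For example, the label can be set in the namespace manifest like this (the namespace name is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                       # hypothetical namespace
  labels:
    # AWS Load Balancer Controller injects readiness gates
    # into pods created in this namespace
    elbv2.k8s.aws/pod-readiness-gate-inject: enabled
```

Equivalently, kubectl label namespace my-app elbv2.k8s.aws/pod-readiness-gate-inject=enabled applies it to an existing namespace. Pods already running are not affected; only newly created pods get the readiness gate.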
When a pod is terminated, it can take a couple of seconds for the ALB to pick up the change and start draining connections. By that time, the pod has most likely already been terminated by Kubernetes.
The solution to this issue is a workaround: add a preStop lifecycle hook to the pod to ensure pods are deregistered before termination.
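A minimal sketch of such a hook, assuming a hypothetical container name and a 30-second delay (tune the sleep to your ALB deregistration delay; the container image must provide /bin/sh):

```yaml
spec:
  # must exceed the preStop sleep so the hook can finish
  terminationGracePeriodSeconds: 60
  containers:
    - name: my-app                   # hypothetical container
      lifecycle:
        preStop:
          exec:
            # delay SIGTERM so the ALB can deregister and
            # drain the pod before it actually shuts down
            command: ["/bin/sh", "-c", "sleep 30"]
```

The sleep keeps the pod alive and serving in-flight requests while the ALB removes it from the target group, so the window where the ALB still routes traffic to an already-dead pod disappears.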