## Purpose

Provision an Amazon EKS cluster designed for dynamic, cost-efficient capacity using Karpenter, while keeping a minimal managed node group to bootstrap the cluster and Karpenter itself.
- EKS control plane and core AWS resources managed in this Terraform module
- Karpenter deployed as a separate Terraform module, consuming cluster outputs
- Bottlerocket used as the operating system
- Minimal managed node group to solve the Karpenter bootstrap (chicken-and-egg) problem
- EKS Access Entries used for IAM-based cluster access
- Karpenter node pools can be targeted using taints/tolerations (addons included as an example)
## Why Karpenter Is Deployed Separately
Karpenter is intentionally deployed outside of the core EKS module.
Why:
- Clear separation between cluster lifecycle and node provisioning
- Safer upgrades and experimentation with Karpenter
- Reduced blast radius when changing scaling or instance-selection logic
This module exposes only the required outputs (cluster name, endpoint, OIDC provider, IAM roles), which are consumed by the Karpenter module.
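As a sketch, the exported values might be wired up as follows (the output names match the Outputs table at the end of this document; the `module.eks` attribute names assume the `terraform-aws-modules/eks` module):

```hcl
# outputs.tf - values consumed by the separate Karpenter module
output "cluster_name" {
  description = "Cluster Name"
  value       = module.eks.cluster_name
}

output "cluster_endpoint" {
  description = "Cluster Endpoint"
  value       = module.eks.cluster_endpoint
}

output "name" {
  description = "The ARN of the OIDC Provider"
  value       = module.eks.oidc_provider_arn
}
```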
## Bootstrap Node Group

Karpenter runs as a Kubernetes controller, so the cluster needs existing capacity for core system components (CoreDNS, VPC CNI, kube-proxy) and for the Karpenter controller itself before Karpenter can provision anything.
This creates a chicken-and-egg problem: nodes are required before Karpenter can create nodes.
A very small managed node group is created to:
- Bootstrap the cluster
- Run critical system pods
- Host the Karpenter controller
This node group:
- Is intentionally minimal (e.g. 1–2 on-demand nodes)
- Is not intended for application workloads
- Can be reduced or removed once Karpenter is stable (optional)
After bootstrap, application capacity is expected to be provided by Karpenter.
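A minimal sketch of such a node group, passed to the `terraform-aws-modules/eks` module (all values here are illustrative, not the module's actual configuration):

```hcl
# A deliberately small managed node group that bootstraps the cluster
# and hosts the Karpenter controller; not for application workloads.
eks_managed_node_groups = {
  bootstrap = {
    ami_type       = "BOTTLEROCKET_x86_64"
    instance_types = ["t3.medium"]
    capacity_type  = "ON_DEMAND"

    min_size     = 1
    max_size     = 2
    desired_size = 2
  }
}
```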
## Bottlerocket

Karpenter provisions worker nodes using Bottlerocket, an AWS-maintained, container-optimized operating system.
Benefits:
- Minimal attack surface (no SSH, immutable filesystem)
- Predictable configuration and updates
- Well suited for ephemeral, replaceable nodes
This reinforces the idea that nodes are disposable infrastructure.
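One way to select Bottlerocket for Karpenter-provisioned nodes is an `EC2NodeClass` applied from Terraform. The sketch below assumes the `gavinbunney/kubectl` provider and Karpenter's v1 API; the role name and discovery tags are placeholders, not values from this module:

```hcl
# Illustrative EC2NodeClass selecting Bottlerocket AMIs for Karpenter nodes
resource "kubectl_manifest" "bottlerocket_nodeclass" {
  yaml_body = <<-YAML
    apiVersion: karpenter.k8s.aws/v1
    kind: EC2NodeClass
    metadata:
      name: bottlerocket
    spec:
      amiSelectorTerms:
        - alias: bottlerocket@latest
      role: KarpenterNodeRole-example   # placeholder node IAM role
      subnetSelectorTerms:
        - tags:
            karpenter.sh/discovery: example-cluster
      securityGroupSelectorTerms:
        - tags:
            karpenter.sh/discovery: example-cluster
  YAML
}
```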
## Cluster Access via EKS Access Entries

Cluster access is managed using EKS Access Entries, replacing manual management of the aws-auth ConfigMap.
Advantages:
- Declarative and auditable access control in Terraform
- IAM-native authentication and authorization
- Easier to enforce least-privilege access
Access entries are defined for administrators and automation roles (e.g. CI/CD).
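A sketch of what an access entry looks like when using the `terraform-aws-modules/eks` module's `access_entries` input (the principal ARN and entry key are placeholders):

```hcl
access_entries = {
  admin = {
    # Placeholder role; in practice this would be your admin or CI/CD role
    principal_arn = "arn:aws:iam::111111111111:role/platform-admin"

    policy_associations = {
      cluster_admin = {
        policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
        access_scope = {
          type = "cluster"
        }
      }
    }
  }
}
```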
## Node Pools, Taints, and Tolerations

Karpenter allows defining multiple provisioners (logical node pools) with different:
- Instance types
- Capacity types (On-Demand / Spot)
- Labels and taints
Example usage:
- Create a Karpenter provisioner for a specific workload class (e.g. system, spot, high-memory)
- Use taints and tolerations so selected workloads, including addons if desired, run only on those nodes
The addon tolerations referenced in this project are an example of how to target Karpenter node pools, not a strict requirement.
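As an illustration, a tainted node pool for a high-memory workload class might look like the sketch below (Karpenter v1 API, applied via the `gavinbunney/kubectl` provider; the pool name, taint key, and memory threshold are all hypothetical):

```hcl
resource "kubectl_manifest" "high_memory_nodepool" {
  yaml_body = <<-YAML
    apiVersion: karpenter.sh/v1
    kind: NodePool
    metadata:
      name: high-memory
    spec:
      template:
        spec:
          nodeClassRef:
            group: karpenter.k8s.aws
            kind: EC2NodeClass
            name: bottlerocket
          requirements:
            - key: karpenter.sh/capacity-type
              operator: In
              values: ["spot", "on-demand"]
            - key: karpenter.k8s.aws/instance-memory
              operator: Gt
              values: ["32767"]   # > 32 GiB, in MiB
          taints:
            - key: workload-class
              value: high-memory
              effect: NoSchedule
  YAML
}
```

Pods then opt in with a matching toleration (key `workload-class`, value `high-memory`, effect `NoSchedule`); anything without it is repelled by the taint and scheduled elsewhere.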
## Design Principles

- Immutable infrastructure: nodes are replaceable
- Separation of concerns: cluster vs node lifecycle
- Least privilege: scoped IAM roles and access entries
- Minimal static capacity: bootstrap only; scale dynamically with Karpenter
## Requirements

| Name | Version |
|---|---|
| terraform | >= 1.5.7 |
| aws | >= 6.23 |
## Providers

| Name | Version |
|---|---|
| aws | 6.28.0 |
## Modules

| Name | Source | Version |
|---|---|---|
| eks | terraform-aws-modules/eks/aws | 21.15.1 |
| vpc_cni_irsa | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts | 6.3.0 |
## Resources

| Name | Type |
|---|---|
| aws_subnets.pvt_eks_subnets | data source |
| aws_vpc.vpc | data source |
## Inputs

No inputs.
## Outputs

| Name | Description |
|---|---|
| cluster_endpoint | Cluster Endpoint |
| cluster_name | Cluster Name |
| name | The ARN of the OIDC Provider |