GitHub Action to apply Kubernetes manifest files in your EKS cluster.
Point to a file or directory, and this action will apply your manifests, monitor the rollout, and fail fast if something goes wrong — without waiting for the full timeout.
```yaml
name: Build

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Deployment
        uses: Pablommr/kubernetes-eks@v2.1.2
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          KUBECONFIG: ${{ secrets.KUBECONFIG }}
          KUBE_YAML: path_to_file/file.yml
```

To use this action you need an IAM user with permission to apply resources in your EKS cluster. For more information, see the AWS documentation.
Set up the environment variables listed below and point to your manifest files via KUBE_YAML (individual files) or FILES_PATH (directory).
**AWS_ACCESS_KEY_ID**
AWS access key ID for the IAM user used to authenticate with the cluster.

**AWS_SECRET_ACCESS_KEY**
AWS secret access key for the IAM user.

**KUBECONFIG**
Base64-encoded kubeconfig file. The profile name inside the kubeconfig must match `AWS_PROFILE_NAME`.
At least one of `KUBE_YAML` and `FILES_PATH` must be set; both can be used simultaneously.
**KUBE_YAML**
Path to one or more individual manifest files, separated by commas:

```yaml
KUBE_YAML: kubernetes/deployment.yml,artifacts/configmap.yaml
```

**FILES_PATH**
Path to a directory. All `.yaml` and `.yml` files directly inside the directory are applied. Use `SUBPATH: true` to include subdirectories:

```yaml
FILES_PATH: kubernetes
```
**AWS_PROFILE_NAME**
string — default: `default`
AWS credentials profile name to be written to `~/.aws/credentials`.
**ENVSUBST**
boolean — default: `false`
When `true`, substitutes environment variables inside the manifest files before applying them. Variables must be declared with a `$` prefix (e.g., `$IMAGE_TAG`). Useful for injecting dynamic values such as image tags at deploy time.
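As a short sketch (the `IMAGE_TAG` name here is illustrative, not required by the action), the step's env enables substitution and provides the value to inject:

```yaml
# In the action step of the workflow file:
env:
  ENVSUBST: true        # substitute $-prefixed variables in manifests before applying
  IMAGE_TAG: 1.21.6     # any exported variable can be referenced in a manifest

# In the manifest, reference the variable with a $ prefix, e.g.:
#   image: nginx:$IMAGE_TAG
```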
**SUBPATH**
boolean — default: `false`
When `true` and using `FILES_PATH`, applies manifest files found in subdirectories as well. When `false`, only files at the top level of `FILES_PATH` are applied.
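For instance, given a hypothetical layout like the one in the comments below, only `SUBPATH: true` picks up the nested file:

```yaml
# kubernetes/
# ├── service.yaml            applied either way
# └── envs/
#     └── configmap.yaml      applied only when SUBPATH: true
env:
  FILES_PATH: kubernetes
  SUBPATH: true
```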
**CONTINUE_IF_FAIL**
boolean — default: `false`
When `true`, the action continues processing the remaining files even if one apply or rollout fails; the pipeline still exits with an error code at the end if any failure occurred. When `false`, the action stops immediately on the first failure.
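A minimal sketch: with this setting, one broken manifest does not block the rest of the directory, but the job still ends in failure:

```yaml
env:
  FILES_PATH: kubernetes
  CONTINUE_IF_FAIL: true   # apply all remaining files even if one fails; exit non-zero at the end
```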
**KUBE_ROLLOUT**
boolean — default: `true`
When `true`, the action watches the rollout status after each apply for resources that manage Pods (Deployment, ReplicaSet, DaemonSet, Pod). The rollout is monitored until it completes successfully, fails, or reaches `KUBE_ROLLOUT_TIMEOUT`.
If the resource was unchanged by the apply, a `kubectl rollout restart` is triggered automatically to ensure the latest configuration or image is rolled out.
**KUBE_ROLLOUT_TIMEOUT**
string — default: `20m`
Maximum time to wait for a rollout to complete. Accepts a duration such as `60s`, `5m`, or `1h`. Requires `KUBE_ROLLOUT: true`.
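For example, to cap each rollout at five minutes instead of the default:

```yaml
env:
  KUBE_ROLLOUT: true
  KUBE_ROLLOUT_TIMEOUT: 5m   # fail the step if a rollout has not completed within 5 minutes
```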
**KUBE_STABILITY_WINDOW**
integer — default: `15`
Number of seconds to wait after the rollout reports success before running a final pod health check. This catches applications that crash shortly after startup: Kubernetes can mark a pod "ready" as soon as the container starts, before the application has had a chance to fail. Set a value higher than your application's startup time so the check is meaningful. Requires `KUBE_ROLLOUT: true`.
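For example, for an application that takes roughly 20 seconds to start (an assumed figure), a 30-second window gives the final health check time to catch early crashes:

```yaml
env:
  KUBE_ROLLOUT: true
  KUBE_STABILITY_WINDOW: 30  # wait 30s after rollout success, then re-check pod health
```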
When KUBE_ROLLOUT is enabled, the action handles two important scenarios:
The rollout monitor runs in the background, allowing the action to respond to cancellation signals from the GitHub Actions UI at any point during the rollout. Cancelling the workflow will stop the rollout immediately instead of leaving the step hanging.
The action polls the pod status every 3 seconds while waiting for the rollout. If any pod enters one of the following states, the pipeline fails immediately without waiting for the full timeout:
| State | Cause |
|---|---|
| `CrashLoopBackOff` | Container is crashing repeatedly on startup |
| `OOMKilled` | Container was terminated due to memory limit |
| `ImagePullBackOff` | Docker image could not be pulled |
| `ErrImagePull` | Error while pulling the Docker image |
Kubernetes marks a pod as "ready" as soon as the container process starts — before the application has time to crash. In this scenario, the rollout finishes successfully while pods silently start crashing in the background.
After kubectl rollout status reports success, the action waits KUBE_STABILITY_WINDOW seconds (default: 15) and re-checks all pods. If any pod is in a failed state at that point, the pipeline fails. Set KUBE_STABILITY_WINDOW to a value greater than your application's startup time to ensure the check is meaningful.
Manifests are applied in the following order to respect Kubernetes resource dependencies:
- Namespace — must exist before any other resource.
- All other resource types (ConfigMap, Service, Ingress, etc.) — applied without rollout monitoring.
- Pod-managing resources (Deployment, ReplicaSet, DaemonSet, Pod) — applied with rollout monitoring when `KUBE_ROLLOUT: true`.
- ScaledObject (KEDA) — applied last, as it references a Deployment that must already exist.
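As an illustration (hypothetical file names), a mixed directory would be applied in this order:

```yaml
# With FILES_PATH: kubernetes and KUBE_ROLLOUT: true:
# 1. kubernetes/namespace.yaml     Namespace first
# 2. kubernetes/configmap.yaml     no rollout monitoring
# 3. kubernetes/service.yaml       no rollout monitoring
# 4. kubernetes/deployment.yaml    applied with rollout monitoring
# 5. kubernetes/scaledobject.yaml  KEDA ScaledObject last
```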
Let's suppose you need to apply three artifacts in your EKS: one Deployment, one Service, and one ConfigMap. All your Kubernetes manifests are inside the kubernetes folder:
```
├── README.md
├── app
│   └── files
├── kubernetes
│   ├── deployment.yaml
│   ├── envs
│   │   ├── prod
│   │   │   └── configmap.yaml
│   │   └── staging
│   │       └── configmap.yaml
│   └── service.yaml
└── another_files
```
You want to:
- Apply `deployment.yaml` and `service.yaml` from the `kubernetes` folder (not subdirectories).
- Apply only the prod `configmap.yaml` individually.
- Substitute the image tag dynamically using `ENVSUBST`.
In deployment.yaml, declare the image tag as a placeholder:
```yaml
image: nginx:$IMAGE_TAG
```

Then configure your pipeline:
```yaml
name: Build

on:
  push:
    branches: [ main ]
  workflow_dispatch:

env:
  AWS_PROFILE_NAME: default
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  KUBECONFIG: ${{ secrets.KUBECONFIG }}

jobs:
  deploy:
    runs-on: ubuntu-latest
    needs: build_and_push
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Deploy
        uses: Pablommr/kubernetes-eks@v2.1.2
        env:
          FILES_PATH: kubernetes
          KUBE_YAML: kubernetes/envs/prod/configmap.yaml
          SUBPATH: false
          ENVSUBST: true
          KUBE_ROLLOUT: true
          KUBE_ROLLOUT_TIMEOUT: 10m
          IMAGE_TAG: 1.21.6
```

With `FILES_PATH: kubernetes` and `SUBPATH: false`, only `deployment.yaml` and `service.yaml` are applied from the directory. The prod ConfigMap is applied separately via `KUBE_YAML`. The `$IMAGE_TAG` placeholder in `deployment.yaml` is replaced with `1.21.6` before applying.
- Rollout cancellation: the pipeline now responds immediately to workflow cancellation from the GitHub Actions UI during a rollout, instead of waiting for the current step to finish.
- CrashLoopBackOff fail fast: if any pod enters a failed state (`CrashLoopBackOff`, `OOMKilled`, `ImagePullBackOff`, `ErrImagePull`) during a rollout, the pipeline fails immediately without waiting for the timeout.
- Break the pipeline when a rollout fails
- Add KUBE_ROLLOUT_TIMEOUT option
- Align output logs
- Fix KUBE_YAML files
- Fix files validation in SUBPATH
- Fix to get resource name
- Add yq in background
- Added the option to pass a path (env FILES_PATH) to apply multiple files
- Added env SUBPATH to apply files in subdirectories
- Added env CONTINUE_IF_FAIL to continue applying files in fail case
- Added output on github action page
- Changed strategy to use a pre-built kubernetes-eks image with all dependencies from a public registry, decreasing action execution time
- Added KUBE_ROLLOUT option to follow the rollout status on the Action page
- Fix metacharacter replacement in ENVSUBST
- Project started