Open

Labels: kind/bug, needs-triage
Description
What happened:
When I ran:

```
kubectl scale deployment -n test my-pod --replicas 3 --dry-run=client -o yaml
```

or

```
kubectl scale deployment -n test my-pod --replicas 3 --dry-run=server -o yaml
```

the generated manifest did not match the desired number of replicas:
```yaml
spec:
  progressDeadlineSeconds: 600
  replicas: 1
```

What you expected to happen:
The generated YAML should include the spec.replicas value expected after the change, at least with --dry-run=server, because kubectl sends a PATCH request to the API server and the API server returns the patched object:
```yaml
spec:
  progressDeadlineSeconds: 600
  replicas: 3
```
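To observe the PATCH and the object the API server returns, kubectl's request logging can be used (a minimal sketch; -v=8 output is verbose, truncated, and its exact format can differ between kubectl versions):

```
# -v=8 logs the HTTP calls kubectl makes, including (truncated) request and
# response bodies, so the dry-run request and the object returned by the API
# server can be inspected directly.
kubectl scale deployment -n test my-pod --replicas=3 --dry-run=server -o yaml -v=8
```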
How to reproduce it (as minimally and precisely as possible):

- Create a namespace:

  ```
  kubectl create ns test
  ```

- Create a deployment in that namespace:

  ```
  kubectl create deployment my-pod --image=nginx -n test
  ```

- Generate the new deployment manifest with --dry-run=client:

  ```
  kubectl scale deployment -n test my-pod --replicas=3 --dry-run=client -o yaml
  ```

  or with --dry-run=server:

  ```
  kubectl scale deployment -n test my-pod --replicas=3 --dry-run=server -o yaml
  ```

- Inspect the output: the generated YAML does not include the right number of replicas (see the consolidated sketch after this list).
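For convenience, here is the same reproduction as one copy-pasteable sketch (it assumes a reachable cluster, that the test namespace does not already exist, and that spec.replicas is the first `replicas:` field in the output, which is what the grep relies on):

```
# Reproduce: the dry-run output of `kubectl scale` keeps the old replica count.
kubectl create ns test
kubectl create deployment my-pod --image=nginx -n test

# Both commands print "replicas: 1" even though 3 replicas were requested.
kubectl scale deployment -n test my-pod --replicas=3 --dry-run=client -o yaml | grep -m 1 'replicas:'
kubectl scale deployment -n test my-pod --replicas=3 --dry-run=server -o yaml | grep -m 1 'replicas:'

# Clean up.
kubectl delete ns test
```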
Anything else we need to know?:
I ran into this issue while preparing for the CKAD with kubectl v1.34.1.
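As a point of comparison (a sketch only, not verified against v1.34.1): patching spec.replicas directly appears to reflect the requested count in its client-side dry-run output, which is what I would expect from scale as well:

```
# Comparison sketch (assumed behavior): a merge patch of spec.replicas with a
# client-side dry-run should print the patched object, i.e. "replicas: 3".
kubectl patch deployment my-pod -n test --type=merge \
  -p '{"spec":{"replicas":3}}' --dry-run=client -o yaml | grep -m 1 'replicas:'
```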
Environment:
- Kubernetes client and server versions (use `kubectl version`): client v1.34.1, server v1.34.1
- Cloud provider or hardware configuration: Local
- OS (e.g. `cat /etc/os-release`): Ubuntu 24.04