Conversation
Signed-off-by: Wesley Hayutin <weshayutin@gmail.com>
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
📒 Files selected for processing (1)
✅ Files skipped from review due to trivial changes (1)
**Walkthrough:** Adds OLMv1 end-to-end tests under `tests/olmv1/`.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Test as Test Suite
    participant API as Kubernetes API
    participant CC as ClusterCatalog (Image)
    participant CE as ClusterExtension Controller
    participant OLM as OLM Resolver

    Test->>API: Create Namespace & ServiceAccount
    Test->>API: Create ClusterRoleBinding (cluster-admin)
    Test->>API: Create ClusterCatalog (from image)
    Test->>CC: Poll until Serving=True
    Test->>API: Create ClusterExtension (reference catalog)
    CE->>OLM: Resolve bundle from catalog
    OLM-->>CE: Return bundle manifest
    CE->>API: Create operator deployment
    Test->>API: Poll ClusterExtension.status (wait Installed=True)
    API-->>Test: Installed condition reached
    Test->>API: Verify controller-manager pods Running
    Test->>API: Verify OADP/Velero CRDs exist
    Test->>API: Check no Deprecated conditions
    alt Upgrade scenario (if upgradeVersion set)
        Test->>API: Patch ClusterExtension to new version
        CE->>OLM: Resolve upgraded bundle
        OLM-->>CE: Return new bundle manifest
        CE->>API: Update operator deployment
        Test->>API: Poll for new bundle version & Installed=True
        Test->>API: Verify controller-manager Running again
    end
    Test->>API: Delete ClusterExtension
    Test->>API: Wait for deletion
    Test->>API: Delete ClusterCatalog
    Test->>API: Delete ClusterRoleBinding & ServiceAccount
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks: ✅ 7 passed | ❌ 3 failed (2 warnings, 1 inconclusive)
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has been approved by: weshayutin. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
Actionable comments posted: 7
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@config/manager/kustomization.yaml`:
- Around line 7-8: The checked-in kustomization currently pins newName/newTag to
the ephemeral ttl.sh image (ttl.sh/oadp-operator-7e53a850:1h); replace this with
a stable, non-expiring default (e.g., the official oadp-operator image and a
permanent tag or digest) or remove the ttl.sh newName/newTag entries so the
repository manifest does not reference an expiring image; ensure any temporary
ttl.sh usage is moved into CI/deploy workflows that inject the test image
dynamically rather than committing it to kustomization.yaml.
In `@Makefile`:
- Around line 1052-1057: The cleanup target test-olmv1-cleanup unconditionally
deletes ClusterCatalog $(OLMV1_CATALOG); guard that deletion so we only remove a
catalog the tests created by checking the same creation condition or a creation
marker. Modify test-olmv1-cleanup to only run the $(OC_CLI) delete
clustercatalog $(OLMV1_CATALOG) line when OLMV1_CATALOG_IMAGE is set (or when a
persisted marker like OLMV1_CATALOG_CREATED file/env var exists), and update the
Catalog creation step (the rule that creates the catalog) to set that marker
(e.g., touch a file or export a flag) so cleanup can safely detect it before
deleting.
In `@tests/olmv1/olmv1_install_test.go`:
- Around line 107-122: The current gomega.Eventually loop only checks
pod.Status.Phase == corev1.PodRunning which can false-pass; update the check in
the Eventually closure (the block using kubeClient.CoreV1().Pods(...).List and
iterating pods.Items) to verify readiness instead: for each pod, inspect
pod.Status.Conditions for condition.Type == corev1.PodReady with Status ==
corev1.ConditionTrue (or alternatively fetch the Deployment via
kubeClient.AppsV1().Deployments(namespace).Get and assert the Deployment status
has an Available condition == True / status.AvailableReplicas > 0); apply the
same change to the other occurrence mentioned (lines 213-226) so tests assert
PodReady or Deployment Available rather than just PodRunning.
- Around line 167-181: The current code reads the ClusterExtension via
getClusterExtension and then calls
dynamicClient.Resource(clusterExtensionGVR).Update after changing catalogSpec
(using unstructuredNestedMap/unstructuredSetNestedMap), which can race
controller status updates and yield 409s; instead patch only the
spec.source.catalog fields (or wrap the update in retry.RetryOnConflict) rather
than updating the whole object: construct a minimal merge patch containing
spec.source.catalog.version and spec.source.catalog.upgradeConstraintPolicy and
call dynamicClient.Resource(clusterExtensionGVR).Patch with types.MergePatchType
(add import "k8s.io/apimachinery/pkg/types"), or if you prefer keep Update, wrap
the read/modify/write in retry.RetryOnConflict to retry on conflicts. Ensure
references to getClusterExtension, unstructuredNestedMap,
unstructuredSetNestedMap, and dynamicClient.Resource(clusterExtensionGVR) are
updated accordingly.
In `@tests/olmv1/olmv1_suite_test.go`:
- Around line 110-126: The ClusterRoleBinding name in ensureClusterAdminBinding
only uses saName so it can collide across namespaces; change the naming or
reconcile existing bindings: either make bindingName include the namespace
(e.g., bindingName := saName + "-" + ns + "-cluster-admin") so it's unique per
namespace, or when Create returns AlreadyExists call
kubeClient.RbacV1().ClusterRoleBindings().Get to load the existing
ClusterRoleBinding and update its Subjects (add or replace the ServiceAccount
subject for {Name: saName, Namespace: ns}) and then call Update to persist the
corrected subjects; implement one of these approaches inside
ensureClusterAdminBinding.
- Around line 270-295: The cleanupOrphanedCRDs function is destructive on shared
clusters; change it to be gated behind an explicit opt-in (e.g., a test flag or
env var like TEST_DELETE_ORPHAN_CRDS) or a deterministic dedicated-cluster check
before calling dynamicClient.Resource(crdGVR).Delete, and after issuing Delete
for each CRD found by cleanupOrphanedCRDs poll/wait (using
dynamicClient.Resource(crdGVR).Get in a loop with backoff and timeout) until
apierrors.IsNotFound confirms the resource is fully removed before counting it
as deleted; ensure you still handle non-NotFound errors via logging and skip
deletion when the opt-in flag is not set.
- Around line 315-323: When Create on dynamicClient.Resource(clusterCatalogGVR)
returns apierrors.IsAlreadyExists(err), fetch the existing ClusterCatalog (using
dynamicClient.Resource(clusterCatalogGVR).Get with the same name and ctx) and
validate its image field against the requested image variable; if they differ,
fail the test (gomega.Expect/return error) or update/replace the catalog to
match the requested image instead of silently reusing it. Ensure the check
references the existing object's image path (e.g., status/spec field used for
image in ClusterCatalog) and only set createdCatalog = true and log "Created
ClusterCatalog" when you actually created or successfully reconciled the
resource to the desired image.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 77bf8ce7-890a-4973-98c9-d65314a9d12c
📒 Files selected for processing (5):
- Makefile
- config/manager/kustomization.yaml
- tests/olmv1/.gitignore
- tests/olmv1/olmv1_install_test.go
- tests/olmv1/olmv1_suite_test.go
```go
obj, err := getClusterExtension(ctx, clusterExtensionName)
gomega.Expect(err).NotTo(gomega.HaveOccurred())

previousBundleName, previousVersion, _ := getInstalledBundle(obj)
log.Printf("Current installed bundle: name=%s version=%s", previousBundleName, previousVersion)

catalogSpec, _, _ := unstructuredNestedMap(obj.Object, "spec", "source", "catalog")
gomega.Expect(catalogSpec).NotTo(gomega.BeNil())
catalogSpec["version"] = upgradeVersion
catalogSpec["upgradeConstraintPolicy"] = "SelfCertified"
err = unstructuredSetNestedMap(obj.Object, catalogSpec, "spec", "source", "catalog")
gomega.Expect(err).NotTo(gomega.HaveOccurred())

_, err = dynamicClient.Resource(clusterExtensionGVR).Update(ctx, obj, metav1.UpdateOptions{})
gomega.Expect(err).NotTo(gomega.HaveOccurred())
```
Patch ClusterExtension.spec instead of updating the whole object.
ClusterExtension is controller-managed, so this read/modify/write Update can race status changes and intermittently fail with a 409 conflict. Patch just the version/policy fields, or wrap the update in retry.RetryOnConflict.
Example using a merge patch:

```diff
-	catalogSpec, _, _ := unstructuredNestedMap(obj.Object, "spec", "source", "catalog")
-	gomega.Expect(catalogSpec).NotTo(gomega.BeNil())
-	catalogSpec["version"] = upgradeVersion
-	catalogSpec["upgradeConstraintPolicy"] = "SelfCertified"
-	err = unstructuredSetNestedMap(obj.Object, catalogSpec, "spec", "source", "catalog")
-	gomega.Expect(err).NotTo(gomega.HaveOccurred())
-
-	_, err = dynamicClient.Resource(clusterExtensionGVR).Update(ctx, obj, metav1.UpdateOptions{})
+	patch := []byte(fmt.Sprintf(
+		`{"spec":{"source":{"catalog":{"version":%q,"upgradeConstraintPolicy":"SelfCertified"}}}}`,
+		upgradeVersion,
+	))
+	_, err = dynamicClient.Resource(clusterExtensionGVR).Patch(
+		ctx,
+		clusterExtensionName,
+		types.MergePatchType,
+		patch,
+		metav1.PatchOptions{},
+	)
 	gomega.Expect(err).NotTo(gomega.HaveOccurred())
```

Add:

```go
import "k8s.io/apimachinery/pkg/types"
```
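To sanity-check the shape of that merge patch outside the suite, it can be built and asserted with the standard library alone. This is a sketch: the `buildUpgradePatch` helper name and the version value are illustrative, not part of the PR.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildUpgradePatch returns a minimal merge-patch body that touches only
// spec.source.catalog.version and spec.source.catalog.upgradeConstraintPolicy,
// mirroring the suggestion above. The helper name is illustrative.
func buildUpgradePatch(upgradeVersion string) []byte {
	return []byte(fmt.Sprintf(
		`{"spec":{"source":{"catalog":{"version":%q,"upgradeConstraintPolicy":"SelfCertified"}}}}`,
		upgradeVersion,
	))
}

func main() {
	patch := buildUpgradePatch("1.5.1") // hypothetical version for illustration

	// Confirm the patch is valid JSON with the expected nesting.
	var decoded map[string]any
	if err := json.Unmarshal(patch, &decoded); err != nil {
		panic(err)
	}
	catalog := decoded["spec"].(map[string]any)["source"].(map[string]any)["catalog"].(map[string]any)
	fmt.Println(catalog["version"], catalog["upgradeConstraintPolicy"]) // 1.5.1 SelfCertified
}
```

Because the patch names only the two leaf fields, a concurrent controller status update cannot produce a 409 the way a whole-object `Update` can.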
```go
func ensureClusterAdminBinding(ctx context.Context, saName, ns string) {
	bindingName := saName + "-cluster-admin"
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: bindingName},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
		Subjects: []rbacv1.Subject{
			{Kind: "ServiceAccount", Name: saName, Namespace: ns},
		},
	}
	_, err := kubeClient.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		return
	}
```
Make the ClusterRoleBinding name namespace-specific or reconcile existing subjects.
The binding name only uses saName. If the same service account name is reused in another namespace, Line 124 treats the old binding as reusable even though its subject still points at the previous namespace, so the new run won't grant the intended SA any permissions. Include ns in the binding name or update the existing binding on AlreadyExists.
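The simpler of the two suggested fixes, namespace-qualified naming, is a one-line change. A minimal sketch (the helper and example names are illustrative):

```go
package main

import "fmt"

// bindingNameFor qualifies the ClusterRoleBinding name with the namespace, so
// the same ServiceAccount name in two namespaces yields two distinct bindings
// instead of one stale binding pointing at the previous namespace.
func bindingNameFor(saName, ns string) string {
	return saName + "-" + ns + "-cluster-admin"
}

func main() {
	// Two namespaces reusing one SA name no longer collide on the binding name.
	fmt.Println(bindingNameFor("installer-sa", "test-a"))
	fmt.Println(bindingNameFor("installer-sa", "test-b"))
}
```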
```go
	_, err := dynamicClient.Resource(clusterCatalogGVR).Create(ctx, cc, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		log.Printf("ClusterCatalog %s already exists", name)
		return
	}
	gomega.Expect(err).NotTo(gomega.HaveOccurred())
	createdCatalog = true
	log.Printf("Created ClusterCatalog %s with image %s", name, image)
}
```
Validate an existing ClusterCatalog matches the requested image.
On AlreadyExists, this helper just logs and reuses the catalog. A previous run with the same catalogName but a different image will silently point the suite at the wrong catalog contents.
Possible fix:

```diff
 _, err := dynamicClient.Resource(clusterCatalogGVR).Create(ctx, cc, metav1.CreateOptions{})
 if apierrors.IsAlreadyExists(err) {
-	log.Printf("ClusterCatalog %s already exists", name)
-	return
+	existing, getErr := dynamicClient.Resource(clusterCatalogGVR).Get(ctx, name, metav1.GetOptions{})
+	gomega.Expect(getErr).NotTo(gomega.HaveOccurred())
+	existingRef, _, _ := unstructured.NestedString(existing.Object, "spec", "source", "image", "ref")
+	gomega.Expect(existingRef).To(gomega.Equal(image),
+		"ClusterCatalog %s already exists with image %s, expected %s", name, existingRef, image)
+	return
 }
```
Signed-off-by: Wesley Hayutin <weshayutin@gmail.com>
♻️ Duplicate comments (1)
Makefile (1)
Lines 1053-1055: ⚠️ Potential issue | 🟠 Major — Make `ClusterCatalog` deletion opt-in.

Line 1055 still deletes `$(OLMV1_CATALOG)` unconditionally. The suite itself only deletes a catalog after it knows it created it (`tests/olmv1/olmv1_suite_test.go` sets `createdCatalog` only on successful create, and `tests/olmv1/olmv1_install_test.go` checks that flag in `AfterAll`). Pointing this cleanup target at an existing/shared catalog will remove a resource the tests did not own.

🧹 Safer cleanup sketch:

```diff
+OLMV1_DELETE_CATALOG ?= false
+
 test-olmv1-cleanup: login-required ## Cleanup resources created by OLMv1 tests.
 	$(OC_CLI) delete clusterextension oadp-operator --ignore-not-found=true
-	$(OC_CLI) delete clustercatalog $(OLMV1_CATALOG) --ignore-not-found=true
+	@if [ "$(OLMV1_DELETE_CATALOG)" = "true" ]; then \
+		$(OC_CLI) delete clustercatalog $(OLMV1_CATALOG) --ignore-not-found=true; \
+	fi
 	$(OC_CLI) delete clusterrolebinding $(OLMV1_SERVICE_ACCOUNT)-cluster-admin --ignore-not-found=true
 	$(OC_CLI) delete sa $(OLMV1_SERVICE_ACCOUNT) -n $(OLMV1_NAMESPACE) --ignore-not-found=true
```

If you want parity with the suite's ownership check, persist a creation marker and key catalog deletion off that instead of a plain name match.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@Makefile`:
- Around line 1053-1055: The Makefile target test-olmv1-cleanup currently
unconditionally deletes the ClusterCatalog $(OLMV1_CATALOG); change it to be
opt-in by checking a persisted "created" marker (e.g., touch a file when the
suite successfully creates the catalog) before running the delete command in the
test-olmv1-cleanup target. Specifically, modify the test-olmv1-cleanup target to
only run `$(OC_CLI) delete clustercatalog $(OLMV1_CATALOG)` if the marker file
exists, and ensure tests that create the catalog
(tests/olmv1/olmv1_suite_test.go and olmv1_install_test.go flow) write/remove
that marker so ownership is respected instead of deleting a shared catalog by
name.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID: b65495fa-dc13-49e1-a8c0-13056be93809
📒 Files selected for processing (1)
Makefile
Signed-off-by: Wesley Hayutin <weshayutin@gmail.com>
@weshayutin: The following tests failed; say `/retest` to rerun all failed tests.

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
who's clauding here.. false statement. "namespace-scoped operators" is the term you are looking for in OLMv0; see oadp-operator/config/manager/manager.yaml, lines 63 to 66 in 7e53a85.
installMode in OLMv0 is clearly set in our CSV; see oadp-operator/config/manifests/bases/oadp-operator.clusterserviceversion.yaml, lines 467 to 475 in 7e53a85.
Add OLMv1 lifecycle tests for OADP operator
Summary
- New end-to-end test suite (`tests/olmv1/`) that validates OADP installs, runs, and upgrades correctly via OLM v1's `ClusterExtension` API on OCP 4.18+ clusters
- New `make test-olmv1` and `make test-olmv1-cleanup` Makefile targets with full parameterization via `OLMV1_*` variables
- Handles `watchNamespace` for `OwnNamespace` install mode, catalog selector targeting, orphaned CRD cleanup, and `SelfCertified` upgrade constraint policy for dev builds
OLM v1 replaces the classic OLM (`Subscription`/`ClusterServiceVersion`) model with a single `ClusterExtension` resource. OADP uses `OwnNamespace` install mode, which requires OLMv1-specific configuration (`spec.config.inline.watchNamespace`) that has no equivalent in OLMv0. These tests verify that OADP installs correctly under OLMv1 and catch regressions in bundle compatibility.

Reference: OCP 4.21 Extensions Documentation
Test cases
- Install: creates the `ClusterExtension`, waits for `Installed=True`, verifies bundle name/version
- Pods: verifies controller-manager pods reach the `Running` phase
- Deprecation: `Deprecated`, `PackageDeprecated`, `ChannelDeprecated`, `BundleDeprecated` are all `False`
- Upgrade: patches the version with `upgradeConstraintPolicy: SelfCertified`, verifies the bundle version changes. Skipped when `--upgrade-version` is not set
- Uses `k8s.io/client-go/dynamic` instead of importing operator-controller Go types, avoiding dependency conflicts.
- `watchNamespace` — OADP only supports `OwnNamespace` install mode. OLMv1 requires `spec.config.inline.watchNamespace` set to the installation namespace; without it, install fails with `InvalidConfiguration`.
- The `ClusterExtension` uses `spec.source.catalog.selector.matchLabels` to target only that catalog, preventing resolution ambiguity with the default Red Hat catalogs.
- `BeforeAll` deletes pre-existing `*.oadp.openshift.io` and `*.velero.io` CRDs from prior OLMv0 installs or test runs; OLMv1 cannot adopt CRDs it did not create.
- Fails fast on terminal `Progressing` reasons (`InvalidConfiguration`, `Failed`) instead of waiting the full 10-minute timeout.
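Taken together, a ClusterExtension reflecting these decisions would look roughly like the sketch below. The field paths (`spec.config.inline.watchNamespace`, `spec.source.catalog.selector.matchLabels`, `upgradeConstraintPolicy`) come from the description above; the names, namespace, and label value are illustrative assumptions, not the actual test fixtures:

```yaml
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: oadp-operator
spec:
  namespace: openshift-adp          # installation namespace (illustrative)
  serviceAccount:
    name: olmv1-installer           # illustrative SA name
  source:
    sourceType: Catalog
    catalog:
      packageName: oadp-operator
      selector:
        matchLabels:
          olm.operatorframework.io/metadata.name: oadp-test-catalog  # pin resolution to the test catalog (illustrative)
      upgradeConstraintPolicy: SelfCertified  # allows dev-build upgrades outside the declared upgrade graph
  config:
    inline:
      watchNamespace: openshift-adp # required for OwnNamespace mode; omitting it yields InvalidConfiguration
```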
Prerequisites
- `TechPreviewNoUpgrade` feature set enabled (for `watchNamespace` support)
- Run `make undeploy-olm` first if needed; the test cleans up orphaned CRDs but not OLMv0 Subscriptions/CSVs
- `make test-olmv1` passes on a clean OCP 4.22 cluster with a custom catalog image
- `make test-olmv1` passes on back-to-back runs (cleanup + re-run is idempotent)
- Misconfiguration fails fast with `InvalidConfiguration` instead of timing out
- `make test-olmv1` with `OLMV1_UPGRADE_VERSION` set (requires a catalog with multiple versions)
- `make test-olmv1` with `OLMV1_PACKAGE=redhat-oadp-operator` against a productized catalog
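For context, a typical invocation under the plan above might look like the following. The variable names come from the `OLMV1_*` parameterization described in the summary; the image and version values are placeholders, not real artifacts:

```shell
# Install + verify against a custom catalog image (values are illustrative)
make test-olmv1 \
  OLMV1_CATALOG_IMAGE=quay.io/example/oadp-catalog:test \
  OLMV1_PACKAGE=oadp-operator \
  OLMV1_UPGRADE_VERSION=1.5.1

# Tear down everything the run created
make test-olmv1-cleanup
```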