That sounds fine to me: both models of deployment are supported, and we provide the above recommendations to admins. Can we go deeper into what "managing nearly all resources" means? The vision I originally had for K8TRE is that Argo CD manages most infrastructure apart from workspaces, e.g. in the JupyterHub control-plane model, JupyterHub is responsible for creating/destroying workspaces (whether they're containers, VMs, or something else). Is this still what you're thinking, or are you also thinking that ArgoCD manages individual workspaces/VMs?
K8TRE will follow GitOps principles, using ArgoCD to manage nearly all resources on its cluster(s). The alternative to ArgoCD is FluxCD, which LSC-SDE currently uses.
With ArgoCD, there are two broad approaches: ArgoCD can deploy and manage applications on the same cluster it is installed on (in-cluster), or it can deploy applications to an arbitrary number of clusters, with fine-grained control over which applications are deployed to which cluster(s) and with what config.
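For illustration, the difference between the two approaches comes down to the `destination` field of an Argo CD `Application`. A minimal sketch (the application name, repo URL, and cluster address below are placeholders, not K8TRE's actual config):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: jupyterhub          # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/k8tre-deploy.git  # placeholder repo
    targetRevision: main
    path: apps/jupyterhub
  destination:
    # In-cluster deployment targets the local API server:
    server: https://kubernetes.default.svc
    # A management-cluster setup would instead target a registered
    # remote cluster, e.g.:
    # server: https://dev-cluster.example.internal:6443
    namespace: jupyterhub
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Everything else in the manifest is identical in both models; only the destination (and the credentials ArgoCD holds for it) changes.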
When this is combined with ArgoCD generators and Kustomize overlays, we can achieve all of the following:
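As a sketch of how generators and overlays combine, an `ApplicationSet` with a cluster generator can stamp out one Application per registered cluster, each pointing at its own Kustomize overlay (the labels, repo URL, and paths are hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: k8tre-envs          # hypothetical name
  namespace: argocd
spec:
  generators:
    # One Application per cluster registered with ArgoCD,
    # selected by a label on the cluster secret.
    - clusters:
        selector:
          matchLabels:
            k8tre.example/managed: "true"   # placeholder label
  template:
    metadata:
      name: 'k8tre-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/k8tre-deploy.git  # placeholder repo
        targetRevision: main
        # Each cluster picks up its own Kustomize overlay, keyed on a
        # cluster label such as env=dev|staging|prod:
        path: 'overlays/{{metadata.labels.env}}'
      destination:
        server: '{{server}}'
        namespace: k8tre
```

Adding a cluster then becomes a matter of registering it with ArgoCD and labelling it; no per-cluster Application manifests need to be written by hand.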
Dev/Stg/Prod Clusters

```mermaid
graph LR
    subgraph Management Cluster
        A[ArgoCD Management Instance]
    end
    subgraph Dev Cluster
        B[Dev Workloads]
    end
    subgraph Staging Cluster
        C[Staging Workloads]
    end
    subgraph Production Cluster
        D[Production Workloads]
    end
    A --> B & C & D
    subgraph Git Repository
        E[Git Repository]
    end
    A -- "Syncs with" --> E
    class ManagementCluster cluster;
```

K8TRE per project (The Turing Model)
```mermaid
graph LR
    subgraph Management Cluster
        A[ArgoCD Management Instance]
    end
    subgraph Production Cluster 1
        B[K8TRE-Project-01 Workloads]
    end
    subgraph Production Cluster 2
        C[K8TRE-Project-02 Workloads]
    end
    subgraph Production Cluster 3
        D[K8TRE-Project-03 Workloads]
    end
    A --> B & C & D
    subgraph Git Repository
        E[Git Repository]
    end
    A -- "Syncs with" --> E
```

Pros and Cons of ArgoCD on the same cluster vs on a separate management cluster
1. ArgoCD on the Same Cluster as Applications (Co-located)
Pros:
- Simpler setup: no cross-cluster networking or credential management.
- ArgoCD can manage itself alongside the applications it deploys.
- Lower operational overhead: only one cluster to run.
Cons:
- ArgoCD competes with workloads for cluster resources.
- A cluster failure takes down both the workloads and the tooling that would redeploy them.
- Compromise of the cluster also compromises the GitOps control plane.
2. ArgoCD on a Separate Cluster (Federated/Multi-Cluster)
Pros:
- A single control plane for many clusters, with consistent config across environments.
- ArgoCD is isolated from workload failures and resource contention.
- A clearer security boundary: workload clusters do not hold deployment credentials for each other.
Cons:
- An extra cluster to provision, secure, and operate.
- Requires network connectivity and API credentials from the management cluster to every target cluster.
- The management cluster becomes a high-value target and a single point of control.
Recommendations Based on Scenario:
- If ArgoCD is co-located with applications on a single cluster, carefully monitor resource usage and plan for redundancy.
- For multi-cluster deployments, a separate management cluster is strongly recommended. The increased resilience, security, and scalability are worth the added complexity. Consider using a service mesh (like Istio) to simplify cross-cluster communication.
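In the separate-management-cluster model, workload clusters are typically registered with ArgoCD via its CLI, roughly as follows (the hostname and kubeconfig context name are placeholders):

```shell
# Log in to the ArgoCD instance running on the management cluster
argocd login argocd.mgmt.example.org   # placeholder hostname

# Register a workload cluster by its kubeconfig context; this creates a
# cluster secret in the argocd namespace holding credentials for that cluster
argocd cluster add dev-cluster-context   # placeholder context name

# Verify the cluster is registered and reachable
argocd cluster list
```

Once registered, the cluster appears as a valid `destination.server` for Applications and is picked up by cluster generators in ApplicationSets.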