Welcome to the Argo Workflows setup repository! This guide will walk you through installing Argo Workflows on your existing Kubernetes cluster and setting up authentication with the provided role bindings and service accounts.
```mermaid
graph TB
    subgraph "Kubernetes Cluster"
        subgraph "Argo Namespace"
            AW[Argo Workflows]
            AC[Argo Controller]
            AS[Argo Server]
        end
        subgraph "PostgreSQL Namespace"
            PG[(PostgreSQL<br/>Workflow Persistence)]
        end
        subgraph "MinIO Namespace"
            MINIO[(MinIO<br/>Artifact Storage)]
        end
        subgraph "Ingress"
            ING[NGINX Ingress<br/>HTTPS/HTTP]
        end
        subgraph "Logging"
            FB[Fluent Bit<br/>Log Collection]
            GL[Graylog<br/>Centralized Logs]
        end
    end
    subgraph "External"
        USER[Users]
        REG[Docker Registry]
    end
    USER --> ING
    ING --> AS
    AS --> AC
    AC --> PG
    AC --> MINIO
    AC --> REG
    FB --> GL
    AW --> FB
    style AW fill:#e1f5fe
    style PG fill:#e8f5e8
    style MINIO fill:#fff3e0
    style GL fill:#f3e5f5
    style USER fill:#ffebee
```
This comprehensive Argo Workflows setup includes:
- Argo Workflows: Complete workflow orchestration with UI and API
- PostgreSQL: High-availability database for workflow persistence
- MinIO: S3-compatible object storage for artifacts and logs
- HTTPS Security: SSL/TLS encryption for secure access
- Centralized Logging: Fluent Bit integration with Graylog
- RBAC Authentication: Admin and read-only user access controls
- Example Workflows: Ready-to-use CronWorkflow templates
- Production Ready: Scalable, secure, and maintainable configuration
Table of Contents:
- Prerequisites
- Quick Start
- PostgreSQL Setup
- MinIO Setup
- Workflow Controller Configuration
- Detailed Installation Steps
- Authentication Setup
- Accessing the UI
- HTTPS Ingress Configuration
- Fluent Bit Logging
- Example Workflows
- Troubleshooting
- File Structure
Before you begin, ensure you have:
- ✅ A running Kubernetes cluster (v1.21+)
- ✅ `kubectl` configured and connected to your cluster
- ✅ Cluster admin privileges
- ✅ NGINX Ingress Controller (for HTTP access)
```bash
# 1. Set your desired Argo Workflows version
export ARGO_WORKFLOWS_VERSION="v3.7.2"  # or latest stable

# 2. Install Argo Workflows
kubectl create namespace argo
kubectl apply --server-side --kustomize "https://github.com/argoproj/argo-workflows/manifests/base/crds/full?ref=${ARGO_WORKFLOWS_VERSION}"
kubectl -n argo apply -f "https://github.com/argoproj/argo-workflows/releases/download/${ARGO_WORKFLOWS_VERSION}/install.yaml"

# 3. Set up PostgreSQL and MinIO (see sections below)
# 4. Configure the workflow controller (see section below)

# 5. Apply authentication setup
kubectl apply -f argo-login-admin.yml
kubectl apply -f argo-login-user-readonly.yml
kubectl apply -f https-ingress/http-ingress-forTest.yml

# 6. Get your admin token
kubectl -n argo get secret argo-admin-token -o=jsonpath='{.data.token}' | base64 --decode
```
⚠️ Important: For a complete setup, you must configure PostgreSQL and MinIO before using Argo Workflows in production!
Argo Workflows requires PostgreSQL for workflow persistence and state management. This setup uses CloudNativePG operator for high availability.
```bash
# Install CloudNativePG operator
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm repo update
kubectl create namespace postgres-operator
helm install cnpg cnpg/cloudnative-pg -n postgres-operator
```

PostgreSQL uses the `local-path` storage class for persistent volumes. Install it if it is not already available:
```bash
# Install local-path storage provisioner
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

# Verify the storage class is available
kubectl get storageclass
```

Storage Class Details:
- Name: `local-path`
- Type: Local storage provisioner
- Use Case: Development and testing environments
- Persistence: Data stored on the local node filesystem
- Performance: Good for single-node or small clusters
- Backup: Requires manual backup procedures

⚠️ Production Note: For production environments, consider using cloud storage classes (e.g., `gp2`/`gp3` on AWS, `standard` on GCP) or enterprise storage solutions for better durability and performance.
```bash
# Apply PostgreSQL configuration
kubectl apply -f postgres-cluster.yml

# Check cluster status
kubectl get cluster -n argo-postgres

# Check pods
kubectl get pods -n argo-postgres

# Check services
kubectl get svc -n argo-postgres
```

If the database and user are not created automatically, connect to PostgreSQL and run these commands:
```bash
# Connect to PostgreSQL (replace with your actual pod name)
kubectl exec -it argo-postgres-1 -n argo-postgres -- psql -U argo-superuser -d postgres
```

```sql
-- Run the following SQL commands:
CREATE USER "argoUser" WITH PASSWORD '9f8e7d6c5b4a3A2B1C0DQWErtyUIop';
GRANT ALL PRIVILEGES ON DATABASE argo_workflow TO "argoUser";
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO "argoUser";
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO "argoUser";
GRANT ALL PRIVILEGES ON ALL FUNCTIONS IN SCHEMA public TO "argoUser";
GRANT USAGE ON SCHEMA public TO "argoUser";
GRANT CREATE ON SCHEMA public TO "argoUser";
ALTER DATABASE argo_workflow OWNER TO "argoUser";
```

Cluster details:
- Cluster Name: `argo-postgres`
- Namespace: `argo-postgres`
- Instances: 3 (for high availability)
- Storage: 5Gi per instance
- Database: `argo_workflow`
- User: `argoUser`
- Password: `9f8e7d6c5b4a3A2B1C0DQWErtyUIop`
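A CloudNativePG `Cluster` manifest matching the details above might look like the following sketch. The field names come from the CloudNativePG v1 API, but the repository's actual `postgres-cluster.yml` may differ; in particular, the secret name `argo-postgres-secret` is an assumption here.

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: argo-postgres
  namespace: argo-postgres
spec:
  instances: 3                      # high availability: 1 primary + 2 replicas
  storage:
    size: 5Gi
    storageClass: local-path
  bootstrap:
    initdb:
      database: argo_workflow       # created at bootstrap
      owner: argoUser               # owning role
      secret:
        name: argo-postgres-secret  # assumed name; holds argoUser credentials
```

CloudNativePG automatically exposes read-write and read-only services (e.g. `argo-postgres-rw`), which is the host used later in the workflow controller configuration.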
MinIO provides S3-compatible object storage for Argo Workflows artifacts and logs.
```bash
# Install MinIO operator
helm repo add minio https://operator.min.io/
helm repo update
kubectl create namespace minio-operator
helm install minio-operator minio/operator -n minio-operator
```

For local storage, create directories on your nodes and generate persistent volumes:
```bash
# Create storage directories on each node
sudo mkdir -p /var/argo-workflow-minio/disk1
sudo mkdir -p /var/argo-workflow-minio/disk2
sudo chmod 777 /var/argo-workflow-minio/disk1
sudo chmod 777 /var/argo-workflow-minio/disk2

# Generate persistent volumes (adjust nodes in the script if needed)
cd minio-cluster/
chmod +x pvs-generator.sh
./pvs-generator.sh

# Apply MinIO configuration using Kustomize
kubectl apply -k minio-cluster/

# Check tenant status
kubectl get tenant -n argo-minio

# Check pods
kubectl get pods -n argo-minio

# Check services
kubectl get svc -n argo-minio
```

MinIO is exposed via NodePort services:
- MinIO API: `http://<node-ip>:32000`
- MinIO Console: `http://<node-ip>:32443`
- Credentials:
  - Username: `argoworkflow`
  - Password: `9f8e7d6c5b4a3A2B1C0DQWErtyUIop`

Tenant details:
- Tenant Name: `argo-minio`
- Namespace: `argo-minio`
- Pools: 1 pool with 3 servers, 2 volumes each
- Storage: 1Gi per volume (6Gi total)
- Bucket: `argo-artifacts` (auto-created)
- Storage Class: `minio-local-storage`
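The `pvs-generator.sh` script presumably emits local `PersistentVolume` manifests along these lines. This is a hedged sketch: the PV name and the node name `worker-1` are placeholders, and the repository's generated manifests may differ.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: argo-minio-pv-worker-1-disk1   # placeholder name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: minio-local-storage
  local:
    path: /var/argo-workflow-minio/disk1
  nodeAffinity:                         # local volumes must be pinned to a node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1              # placeholder node name
```

Note that `local` PVs require the `nodeAffinity` block, which is why the generator script needs to know your node names.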
Configure the Argo Workflows controller to use PostgreSQL for persistence and MinIO for artifacts.
```bash
# Update the workflow controller configmap
kubectl patch configmap workflow-controller-configmap -n argo --patch-file workflow-controller-configmap.yml
```

The configuration includes:

Artifact Repository (MinIO):
- Endpoint: `argo-minio-hl.argo-minio.svc.cluster.local:9000`
- Bucket: `argo-artifacts`
- Insecure: `true` (for internal cluster communication)
- Credentials: Uses `minio-creds-secret`
Persistence (PostgreSQL):
- Host: `argo-postgres-rw.argo-postgres.svc.cluster.local`
- Port: `5432`
- Database: `argo_workflow`
- Table: `argo_workflow`
- Credentials: Uses `argo-postgres-secret`
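In the workflow controller configmap, that persistence section is conventionally written as follows. This is a sketch based on the Argo Workflows `persistence` schema; the secret key names `username`/`password` are assumptions and should be checked against the repository's `workflow-controller-configmap.yml`.

```yaml
persistence:
  archive: true                      # archive completed workflows to the database
  postgresql:
    host: argo-postgres-rw.argo-postgres.svc.cluster.local
    port: 5432
    database: argo_workflow
    tableName: argo_workflow
    userNameSecret:
      name: argo-postgres-secret
      key: username                  # assumed key name
    passwordSecret:
      name: argo-postgres-secret
      key: password                  # assumed key name
```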
To store workflow job logs in MinIO (S3-compatible storage), you need to uncomment the artifact repository configuration in the workflow controller configmap:
```bash
# Edit the workflow controller configmap
kubectl edit configmap workflow-controller-configmap -n argo
```

In the `data.config` section, uncomment these lines:
```yaml
artifactRepository:
  archiveLogs: true
  s3:
    endpoint: "argo-minio-hl.argo-minio.svc.cluster.local:9000"
    bucket: argo-artifacts
    insecure: true
    accessKeySecret:
      name: minio-creds-secret
      key: accesskey
    secretKeySecret:
      name: minio-creds-secret
      key: secretkey
```

Benefits of enabling log archiving:
- ✅ Workflow logs are stored in MinIO for long-term retention
- ✅ Logs remain accessible even after pods are deleted
- ✅ Centralized log storage for better monitoring and debugging
- ✅ Reduced storage pressure on the PostgreSQL database
Note: This is optional but recommended for production environments where you need to retain workflow execution logs for auditing and troubleshooting purposes.
```bash
# Restart the workflow controller to apply the new configuration
kubectl rollout restart deployment/workflow-controller -n argo
```

First, choose your Argo Workflows version from the GitHub releases page:
```bash
export ARGO_WORKFLOWS_VERSION="v3.7.2"  # Replace with your desired version
```
⚠️ Important: Always use a specific release version; never use `latest` for production!
Create the Argo namespace and install the Custom Resource Definitions:
```bash
# Create namespace
kubectl create namespace argo

# Install full CRDs using server-side apply (recommended for v3.7+)
kubectl apply --server-side --kustomize "https://github.com/argoproj/argo-workflows/manifests/base/crds/full?ref=${ARGO_WORKFLOWS_VERSION}"

# Install controller and server manifests
kubectl -n argo apply -f "https://github.com/argoproj/argo-workflows/releases/download/${ARGO_WORKFLOWS_VERSION}/install.yaml"
```

Argo Workflows supports three installation modes:
- Cluster Mode (default): Argo watches workflows cluster-wide
- Namespace Mode: Argo only executes workflows in one namespace
- Managed Namespace: Controller and server run in `argo` but execute workflows in another namespace
The default installation uses Cluster Mode. For namespace isolation, modify the manifests accordingly.
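For Managed Namespace mode, the controller and server each take a flag; a hedged sketch of the container args you would add to the Deployments in `install.yaml` (the target namespace `batch` is a placeholder, and exact arg ordering may vary between versions):

```yaml
# workflow-controller Deployment
containers:
  - name: workflow-controller
    args:
      - --configmap
      - workflow-controller-configmap
      - --namespaced               # restrict watching to a single namespace
      - --managed-namespace
      - batch                      # placeholder target namespace

# argo-server Deployment (equivalent flags on the server subcommand):
# args: ["server", "--namespaced", "--managed-namespace", "batch"]
```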
For production environments, consider these scaling recommendations:
```bash
# Scale the controller (typically 1-2 replicas for HA)
kubectl -n argo scale deployment workflow-controller --replicas=2

# Scale argo-server (2-3 replicas recommended)
kubectl -n argo scale deployment argo-server --replicas=3
```

💡 Tip: Use PodDisruptionBudgets and set resource requests/limits for stable scheduling.
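As a concrete example of that tip, a minimal PodDisruptionBudget for the scaled argo-server might look like this. It assumes the `app: argo-server` label from the default install manifests; verify the actual labels on your Deployment before applying.

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: argo-server-pdb
  namespace: argo
spec:
  minAvailable: 1        # keep at least one server up during voluntary disruptions
  selector:
    matchLabels:
      app: argo-server   # assumed label; check with: kubectl -n argo get deploy argo-server --show-labels
```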
Verify your installation:
```bash
# Check pods are running
kubectl -n argo get pods

# Check deployments, services, and ingress
kubectl -n argo get deploy,svc,ingress

# Test connectivity (replace with your domain)
curl -i -k https://your-argo-domain.com/
```

This repository provides two authentication configurations:
Full administrative access to Argo Workflows:
```bash
# Apply admin configuration
kubectl apply -f argo-login-admin.yml

# Get admin token
kubectl -n argo get secret argo-admin-token -o=jsonpath='{.data.token}' | base64 --decode
```

Admin Permissions:
- ✅ Full access to all Argo resources
- ✅ Create, read, update, and delete workflows
- ✅ Manage workflow templates
- ✅ Access all namespaces
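The token command above works because Kubernetes stores Secret values base64-encoded: the `jsonpath` expression extracts the encoded `token` field, and `base64 --decode` recovers the raw bearer token. A self-contained sketch of that last step, with a mock value standing in for a real (much longer) token:

```shell
# A Secret's .data.token field is base64-encoded, e.g.:
encoded=$(printf 'mock-token' | base64)
echo "$encoded"                           # prints: bW9jay10b2tlbg==

# Decoding recovers the raw bearer token:
printf '%s' "$encoded" | base64 --decode  # prints: mock-token
```

When pasting a real token into the UI, make sure no trailing newline or shell prompt character sneaks in, and prefix it with `Bearer ` as described in the UI access section.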
Read-only access for regular users:
```bash
# Apply user configuration
kubectl apply -f argo-login-user-readonly.yml

# Get user token
kubectl -n argo get secret argo-user-token -o=jsonpath='{.data.token}' | base64 --decode
```

User Permissions:
- ✅ Read-only access to Argo resources
- ✅ View workflows and logs
- ✅ Monitor workflow status
- ❌ Cannot create or modify workflows
Comprehensive admin rules with all possible permissions for advanced use cases:
```bash
# Apply enhanced admin rules (optional - for advanced scenarios)
kubectl apply -f argo-admin-rules-allPossibleValues.yml
```

Enhanced Admin Permissions:
- ✅ Full access to all Argo resources
- ✅ Pod management (create, exec, logs)
- ✅ ConfigMap access
- ✅ PersistentVolumeClaim management
- ✅ Workflow finalizers and task sets
- ✅ CronWorkflow management
- ✅ Event creation and patching
- ✅ PodDisruptionBudget management
For HTTP access (testing/development):

```bash
kubectl apply -f https-ingress/http-ingress-forTest.yml
```

For HTTPS access (production):

```bash
kubectl apply -f https-ingress/https-ingress.yml
```

The HTTP ingress exposes the Argo Workflows UI on port 80, while the HTTPS ingress provides secure access with SSL certificates.
1. Get your token (admin or user):

   ```bash
   # For admin
   kubectl -n argo get secret argo-admin-token -o=jsonpath='{.data.token}' | base64 --decode

   # For user
   kubectl -n argo get secret argo-user-token -o=jsonpath='{.data.token}' | base64 --decode
   ```

2. Access the UI:
   - Open your browser and navigate to your cluster's IP or domain
   - When prompted for authentication, select Bearer Token
   - Paste your token in the format: `Bearer <your-token-here>`
You should now see the Argo Workflows UI with your assigned permissions.
For production environments, secure HTTPS access is recommended. The repository includes HTTPS ingress configurations with SSL certificate support.
```bash
# Apply HTTPS ingress with SSL certificates
kubectl apply -f https-ingress/https-ingress.yml
```

HTTPS Ingress Features:
- Domain: `workflow-dev.eample.co` (configurable)
- SSL/TLS: Automatic SSL redirect and HTTPS backend
- Certificate: Uses the `eample-co-tls` secret
- Security: Force SSL redirect enabled

HTTP Test Ingress:
- Purpose: Development and testing
- Protocol: HTTP only (no SSL)
- Access: Available on all node IPs on port 80
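An NGINX ingress with the features listed above typically looks like the following sketch. It is hedged: the repository's `https-ingress/https-ingress.yml` may use different names or additional annotations, though port 2746 is argo-server's standard service port.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argo-server-https
  namespace: argo
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"  # argo-server serves TLS by default
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - workflow-dev.eample.co
      secretName: eample-co-tls
  rules:
    - host: workflow-dev.eample.co
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argo-server
                port:
                  number: 2746
```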
To use HTTPS ingress, you need to create an SSL certificate secret:
```bash
# Create SSL certificate secret (replace with your certificate)
kubectl create secret tls eample-co-tls \
  --cert=path/to/your/certificate.crt \
  --key=path/to/your/private.key \
  -n argo
```

Fluent Bit is configured to collect and forward Argo Workflows logs to a centralized logging system (Graylog).

```bash
# Apply Fluent Bit configuration
kubectl apply -f fluent-bit.yml
```

Fluent Bit Features:
- Log Collection: Collects logs from all Argo Workflows pods
- Output: Forwards logs to Graylog via the GELF protocol
- Target: `10.36.18.28:3333` (configurable)
- Format: GELF with structured logging
- Scope: Only Argo-related logs (`*_argo_*.log`)

Log Processing:
- Input: Container logs from `/var/log/containers/*_argo_*.log`
- Parser: CRI format parser for container logs
- Filter: Kubernetes metadata enrichment
- Output: GELF format to the Graylog server
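Pieced together from the bullets above, the relevant sections of the Fluent Bit configmap plausibly look like this sketch in Fluent Bit's classic INI syntax. The tag `argo.*` and exact key set are assumptions; the plugin names and keys themselves (`tail`, `kubernetes`, `gelf`) are standard Fluent Bit.

```ini
[INPUT]
    Name    tail
    Path    /var/log/containers/*_argo_*.log
    Parser  cri
    Tag     argo.*

[FILTER]
    Name       kubernetes
    Match      argo.*
    Merge_Log  On

[OUTPUT]
    Name                    gelf
    Match                   argo.*
    Host                    10.36.18.28
    Port                    3333
    Mode                    tcp
    Gelf_Short_Message_Key  log
```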
```bash
# Check Fluent Bit pods
kubectl get pods -n kube-system | grep fluent-bit

# Check Fluent Bit logs
kubectl logs -n kube-system daemonset/fluent-bit

# Check Fluent Bit configuration
kubectl get configmap fluent-bit-config -n kube-system -o yaml
```

The repository includes example workflow templates and configurations to help you get started with Argo Workflows.
A comprehensive example showing how to create scheduled workflows:
```bash
# Apply example workflow
kubectl apply -f example-workflow/example-workflow-eample.yml
```

API Configuration (`api-config.yml`):
- Laravel application environment variables
- Database connections (MySQL)
- Redis configuration
- Mail settings
- External service integrations
OAuth Keys (oauth-private.yml, oauth-public.yml):
- RSA private/public key pairs for OAuth authentication
- Used for secure API authentication
Docker Registry Secret (bcr-eample-secret.yml):
- Private Docker registry credentials
- Required for pulling private container images
CronWorkflow Configuration:
- Schedule: Configurable cron expression (`* * * * *` for every minute)
- Timezone: `Asia/Tehran` (configurable)
- Concurrency: Allows multiple concurrent executions
- History: Keeps 5 successful and 5 failed job histories
- Deadline: 15-minute execution timeout
Container Configuration:
- Image: Private registry image with authentication
- Command: PHP artisan commands
- Environment: Timezone and application variables
- Volumes: ConfigMaps for configuration and secrets
- Retry Strategy: 4 retries with exponential backoff
Logging Integration:
- Labels: `workflow-group` and `workflow-type` for log identification
- Context: `log-context: argo-workflow-dev` for Graylog filtering

To customize the example:
- Update Schedule: Modify the cron expression in `spec.schedule`
- Change Image: Update the container image in `spec.workflowSpec.templates[].container.image`
- Modify Commands: Update the command and args for your specific tasks
- Adjust Resources: Set appropriate resource limits and requests
- Update Labels: Change workflow labels for proper log categorization
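The options described above can be sketched as a CronWorkflow skeleton. This is hedged: the metadata name, image, and command are placeholders rather than the repository's actual values, but the spec fields are standard Argo Workflows API.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: example-cron              # placeholder name
  namespace: argo
spec:
  schedule: "* * * * *"           # every minute; adjust for your task
  timezone: Asia/Tehran
  concurrencyPolicy: Allow        # allow overlapping runs
  successfulJobsHistoryLimit: 5
  failedJobsHistoryLimit: 5
  workflowSpec:
    activeDeadlineSeconds: 900    # 15-minute execution timeout
    entrypoint: main
    templates:
      - name: main
        retryStrategy:
          limit: "4"
          backoff:                # exponential backoff between retries
            duration: "30s"
            factor: "2"
        container:
          image: registry.example.com/app:latest       # placeholder image
          command: ["php", "artisan", "schedule:run"]  # placeholder command
```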
Pods not starting:

```bash
kubectl -n argo describe pods
kubectl -n argo logs deployment/workflow-controller
kubectl -n argo logs deployment/argo-server
```

Authentication issues:
```bash
# Check if secrets exist
kubectl -n argo get secrets | grep argo

# Verify service accounts
kubectl -n argo get serviceaccounts | grep argo

# Check role bindings
kubectl get clusterrolebindings | grep argo
```

Ingress not working:
```bash
# Check ingress status
kubectl -n argo get ingress

# Verify NGINX ingress controller
kubectl get pods -n ingress-nginx
```

PostgreSQL issues:
```bash
# Check cluster status
kubectl get cluster -n argo-postgres

# Check PostgreSQL pods
kubectl get pods -n argo-postgres

# Check PostgreSQL logs (CloudNativePG runs pods, not a Deployment; adjust the pod name)
kubectl logs -n argo-postgres argo-postgres-1

# Test database connection
kubectl exec -it argo-postgres-1 -n argo-postgres -- psql -U argo-superuser -d postgres -c "\l"
```

MinIO issues:
```bash
# Check tenant status
kubectl get tenant -n argo-minio

# Check MinIO pods
kubectl get pods -n argo-minio

# Check MinIO logs (the operator runs a StatefulSet per pool; adjust the name if needed)
kubectl logs -n argo-minio sts/argo-minio-pool-0

# Check persistent volumes
kubectl get pv | grep minio

# Check storage directories on nodes
sudo ls -la /var/argo-workflow-minio/
```

Workflow Controller issues:
```bash
# Check configmap
kubectl get configmap workflow-controller-configmap -n argo -o yaml

# Check controller logs
kubectl logs -n argo deployment/workflow-controller

# Verify secrets exist
kubectl get secret -n argo | grep -E "(minio|postgres)"
```

HTTPS Ingress issues:
```bash
# Check ingress status
kubectl -n argo get ingress

# Check SSL certificate secret
kubectl -n argo get secret eample-co-tls

# Verify NGINX ingress controller
kubectl get pods -n ingress-nginx

# Check ingress events
kubectl -n argo describe ingress argo-server-https
```

Fluent Bit issues:
```bash
# Check Fluent Bit pods
kubectl get pods -n kube-system | grep fluent-bit

# Check Fluent Bit logs
kubectl logs -n kube-system daemonset/fluent-bit

# Verify Fluent Bit configuration
kubectl get configmap fluent-bit-config -n kube-system -o yaml

# Check whether logs are being collected
kubectl logs -n kube-system daemonset/fluent-bit | grep "argo"

# Verify Graylog connectivity
kubectl exec -n kube-system daemonset/fluent-bit -- nc -zv 10.36.18.28 3333
```

Example Workflow issues:
```bash
# Check CronWorkflow status
kubectl -n argo get cronworkflows

# Check workflow executions
kubectl -n argo get workflows

# Check workflow logs (requires the Argo CLI; kubectl cannot stream logs from a Workflow object)
argo -n argo logs <workflow-name>

# Verify ConfigMaps and Secrets
kubectl -n argo get configmaps
kubectl -n argo get secrets

# Check image pull secrets
kubectl -n argo get secret bcr-eample-secret
```

General debugging commands:

```bash
# Check Argo Workflows status
kubectl -n argo get all

# View workflow controller logs
kubectl -n argo logs -f deployment/workflow-controller

# View argo server logs
kubectl -n argo logs -f deployment/argo-server

# List all workflows
kubectl -n argo get workflows

# Get detailed workflow info
kubectl -n argo describe workflow <workflow-name>
```

```
argo-workflow-setup/
├── README.md                                 # This comprehensive guide
├── .gitignore                                # Git ignore file for sensitive data
├── argo-login-admin.yml                      # Admin authentication setup
├── argo-login-user-readonly.yml              # Read-only user authentication setup
├── argo-admin-rules-allPossibleValues.yml    # Enhanced admin rules with all permissions
├── postgres-cluster.yml                      # PostgreSQL cluster configuration
├── workflow-controller-configmap.yml         # Workflow controller configuration
├── fluent-bit.yml                            # Fluent Bit logging configuration
├── https-ingress/                            # HTTPS ingress configurations
│   ├── https-ingress.yml                     # Production HTTPS ingress with SSL
│   └── http-ingress-forTest.yml              # Development HTTP ingress
├── example-workflow/                         # Example workflow templates
│   ├── example-workflow.yml                  # Example CronWorkflow template
│   └── README.md                             # Example workflow documentation
└── minio-cluster/                            # MinIO cluster configuration
    ├── kustomization.yml                     # Kustomize configuration
    ├── namespace.yml                         # MinIO namespace
    ├── secrets.yml                           # MinIO credentials
    ├── tenant.yml                            # MinIO tenant configuration
    ├── services.yml                          # MinIO services (NodePort)
    ├── storageclass.yml                      # Local storage class
    ├── pvs.yml                               # Persistent volumes
    └── pvs-generator.sh                      # PV generator script
```
Note: Files with .ignore extension (like *.yml.ignore) are excluded from version control and not shown in the file structure as they contain sensitive information or are used as templates.
Repository Management:
- `.gitignore`: Excludes sensitive files, secrets, certificates, temporary files, and `.ignore` files from version control
Core Argo Workflows Files:
- `argo-login-admin.yml`: Creates the admin service account with full permissions
- `argo-login-user-readonly.yml`: Creates the user service account with read-only permissions
- `argo-admin-rules-allPossibleValues.yml`: Enhanced admin rules with comprehensive permissions
- `workflow-controller-configmap.yml`: Configures the workflow controller with PostgreSQL and MinIO
HTTPS Ingress Files:
- `https-ingress/https-ingress.yml`: Production HTTPS ingress with SSL certificates
- `https-ingress/http-ingress-forTest.yml`: Development HTTP ingress for testing
Logging Files:
- `fluent-bit.yml`: Fluent Bit configuration for centralized logging to Graylog
Example Workflow Files:
- `example-workflow/example-workflow.yml`: Complete CronWorkflow example with retry strategies
- `example-workflow/README.md`: Detailed documentation for example workflows
PostgreSQL Files:
- `postgres-cluster.yml`: CloudNativePG cluster configuration with secrets and services
MinIO Files:
- `minio-cluster/kustomization.yml`: Kustomize configuration for the MinIO deployment
- `minio-cluster/namespace.yml`: Creates the `argo-minio` namespace
- `minio-cluster/secrets.yml`: MinIO credentials for both the `argo-minio` and `argo` namespaces
- `minio-cluster/tenant.yml`: MinIO tenant with 3 servers, 2 volumes each
- `minio-cluster/services.yml`: NodePort services for the MinIO API (32000) and Console (32443)
- `minio-cluster/storageclass.yml`: Local storage class for MinIO volumes
- `minio-cluster/pvs.yml`: Persistent volumes for local storage (auto-generated)
- `minio-cluster/pvs-generator.sh`: Script to generate persistent volumes for your nodes
After successful installation:
- Test workflows: Create a simple workflow to verify functionality
- Configure: Set up workflow templates and cron workflows
- Monitor: Set up monitoring and alerting for your workflows
- Security: Review and customize RBAC permissions as needed
- Learn: Explore the Argo Workflows documentation
- Official Argo Workflows Documentation
- Argo Workflows GitHub Repository
- Argo Workflows Examples
- Argo Community Slack
Happy Workflowing!

💡 Pro Tip: Bookmark this README for future reference and share it with your team members who need to set up Argo Workflows!