feat: Add Helm chart for Kubernetes deployment#50

Open
yashGoyal40 wants to merge 16 commits into repowise-dev:main from yashGoyal40:feat/helm-chart

Conversation

@yashGoyal40 yashGoyal40 commented Apr 6, 2026

Summary

  • Adds a production-ready Helm chart (charts/repowise/) for deploying Repowise on Kubernetes
  • Includes configurable templates for Deployment, Service, PVC, Ingress, Secret, and ServiceAccount
  • Full values.yaml with support for LLM API keys, persistence, resource limits, ingress, and existing secrets
  • Chart README with quick start, configuration table, and usage examples

Closes #49

What's included

| Template | Purpose |
| --- | --- |
| `deployment.yaml` | Single-pod deployment with `Recreate` strategy (SQLite constraint) |
| `service.yaml` | ClusterIP service exposing backend (7337) and frontend (3000) ports |
| `pvc.yaml` | Persistent storage for `/data` (SQLite DB + indexed repos) |
| `secret.yaml` | API keys for Anthropic/OpenAI/Gemini (supports `existingSecret`) |
| `ingress.yaml` | Optional ingress with TLS support |
| `serviceaccount.yaml` | Optional dedicated ServiceAccount |

Usage

helm install repowise ./charts/repowise \
  --set image.repository=your-registry/repowise \
  --set image.tag=0.1.0 \
  --set apiKeys.anthropic=sk-ant-...
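For more than a couple of overrides, a values file is easier to maintain than `--set` flags. A minimal sketch, assuming key names (`persistence.size`, `ingress.hosts`) match the chart's `values.yaml` described above:

```yaml
# values-prod.yaml -- illustrative overrides; exact key names are assumptions
image:
  repository: your-registry/repowise
  tag: "0.1.0"
apiKeys:
  anthropic: sk-ant-...   # or reference an existingSecret instead
persistence:
  enabled: true
  size: 10Gi
ingress:
  enabled: true
  hosts:
    - repowise.example.com
```

Install with `helm install repowise ./charts/repowise -f values-prod.yaml`.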

Test plan

  • helm lint charts/repowise passes clean
  • helm template test charts/repowise renders all manifests correctly
  • Deploy to a test k8s cluster and verify pods come up healthy
  • Verify Web UI accessible via port-forward
  • Verify ingress routing when enabled

🤖 Generated with Claude Code

Adds a production-ready Helm chart under charts/repowise/ that enables
deploying Repowise to any Kubernetes cluster. Includes templates for
Deployment, Service, PVC, Ingress, Secret, and ServiceAccount with full
configurability via values.yaml.

Closes repowise-dev#49

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
yashGoyal40 and others added 15 commits April 6, 2026 18:14
The HTTPProxy was sending all traffic to the frontend (port 3000).
Now /api/*, /health, and /metrics are routed directly to the backend
(port 7337), while everything else goes to the frontend. Also replaced
the Ingress template with Contour HTTPProxy with wildcard TLS support.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
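The route split above could look roughly like this in Contour's `HTTPProxy` resource (service name, FQDN, and TLS secret are assumptions; ports and path prefixes are from this commit):

```yaml
# Sketch of the backend/frontend route split (projectcontour.io/v1)
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: repowise
spec:
  virtualhost:
    fqdn: repowise.example.com      # assumption
    tls:
      secretName: wildcard-tls      # assumption
  routes:
    - conditions: [{prefix: /api/}]
      services: [{name: repowise, port: 7337}]
    - conditions: [{prefix: /health}]
      services: [{name: repowise, port: 7337}]
    - conditions: [{prefix: /metrics}]
      services: [{name: repowise, port: 7337}]
    - conditions: [{prefix: /}]      # everything else goes to the frontend
      services: [{name: repowise, port: 3000}]
```

Contour matches the most specific prefix first, so the catch-all `/` route only receives non-API traffic.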
The backend exposes /health, not /api/health. The provider-section
component was calling the wrong endpoint, causing a "Server returned
non-healthy status" error on every self-hosted deployment.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Restores the standard networking.k8s.io/v1 Ingress template so the
chart works out of the box on any Kubernetes cluster, not just those
running Contour.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
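A plain `networking.k8s.io/v1` equivalent might look like the sketch below (host and the exact path list are assumptions; service name and ports follow the chart summary):

```yaml
# Standard Ingress sketch -- works on any cluster with an ingress controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: repowise
spec:
  rules:
    - host: repowise.example.com    # assumption
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service: {name: repowise, port: {number: 7337}}
          - path: /
            pathType: Prefix
            backend:
              service: {name: repowise, port: {number: 3000}}
```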
Adds a post-install/upgrade Kubernetes Job that clones repos declared
in values.yaml into /data/repos/, registers them with the Repowise API,
and triggers an initial sync. Supports private repos via GitHub PAT or
an existing git-credentials Secret.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- initContainer (bitnami/git) clones repos into /data/repos/ before
  the main app starts
- Sidecar container (curlimages/curl) waits for API health, registers
  each repo via POST /api/repos, and triggers sync
- Supports private repos via GitHub PAT or existing git-credentials Secret
- Removed the post-install Job approach (PVC ReadWriteOnce conflict)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
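The sidecar described above could be sketched as a pod-spec fragment like this (the registration JSON body and container name are assumptions; the image, endpoints, and flow are from the commit message):

```yaml
# Sketch of the register-repos sidecar: wait for /health, then POST each repo
containers:
  - name: register-repos
    image: curlimages/curl
    command: ["/bin/sh", "-c"]
    args:
      - |
        # block until the API is healthy
        until curl -fsS http://localhost:7337/health; do sleep 5; done
        # register every cloned repo and trigger its initial sync
        for repo in /data/repos/*; do
          curl -fsS -X POST http://localhost:7337/api/repos \
            -H 'Content-Type: application/json' \
            -d "{\"path\":\"$repo\"}"
        done
    volumeMounts:
      - {name: data, mountPath: /data}
```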
- Increase liveness probe timeout to 15s and failureThreshold to 10
  to prevent pod kills during CPU-intensive indexing
- Sidecar registers repos one-by-one, waits for each sync to complete
  before starting the next (prevents SQLite database lock)
- Skip sync for repos that already have a head_commit (already indexed)
- Remove old repo-init-scripts ConfigMap (script is now inline)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
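The probe tuning above maps onto standard Kubernetes probe fields (values are from this commit; the probe path/port follow the chart):

```yaml
# Relaxed liveness settings so indexing spikes don't trigger restarts
livenessProbe:
  httpGet: {path: /health, port: 7337}
  timeoutSeconds: 15
  failureThreshold: 10
readinessProbe:
  httpGet: {path: /health, port: 7337}
```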
Indexing large repos is so CPU-intensive that the /health endpoint
becomes unresponsive, causing the liveness probe to kill the container
repeatedly. Disabled liveness probe by default — readiness probe is
kept (it only removes from service, doesn't restart).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
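The new default could be expressed in `values.yaml` along these lines (the `livenessProbe.enabled` key name is an assumption):

```yaml
# Liveness disabled by default; readiness only removes the pod from the
# Service endpoints instead of restarting the container
livenessProbe:
  enabled: false
```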
Adds optional PostgreSQL deployment (pgvector/pgvector:pg16) that
replaces SQLite, eliminating "database is locked" errors during heavy
indexing. Repowise app code already supports PostgreSQL natively.

- StatefulSet with PVC for PostgreSQL data
- Conditional REPOWISE_DB_URL (asyncpg when PG enabled, aiosqlite otherwise)
- wait-for-postgres initContainer ensures DB is ready before app starts
- pgvector image includes vector extension for semantic search
- Fully backward compatible: postgresql.enabled=false keeps SQLite

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
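The conditional `REPOWISE_DB_URL` could be templated roughly as follows (the `repowise.fullname` helper and `postgresql.auth.*` key names are assumptions; the SQLAlchemy-style `asyncpg`/`aiosqlite` URL schemes follow the commit message):

```yaml
# Sketch: pick the DB driver based on postgresql.enabled
env:
  - name: REPOWISE_DB_URL
    {{- if .Values.postgresql.enabled }}
    value: postgresql+asyncpg://{{ .Values.postgresql.auth.username }}:{{ .Values.postgresql.auth.password }}@{{ include "repowise.fullname" . }}-postgresql:5432/{{ .Values.postgresql.auth.database }}
    {{- else }}
    value: sqlite+aiosqlite:////data/repowise.db
    {{- end }}
```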
PostgreSQL eliminates SQLite's "database is locked" errors during
heavy indexing and enables concurrent API access. Uses pgvector image
for vector search support. SQLite still available via postgresql.enabled=false.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
With PostgreSQL as default, there's no SQLite lock issue. Repos now
trigger sync in parallel without waiting for each to complete.
Still skips already-indexed repos (head_commit check).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
initContainer clones repos as root but app runs as uid 1000. Git
refuses to read repos with different ownership. Fix: write a
.gitconfig with safe.directory=* into /data and set HOME for the
app container. This enables hotspots, ownership, and architecture
graph features.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
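The ownership fix might be wired up like this sketch (the init image and container names are assumptions; the `.gitconfig` contents and `HOME` trick are from the commit message):

```yaml
# Write safe.directory=* into /data, then point the app's HOME at it so
# git trusts repos cloned as root while the app runs as uid 1000
initContainers:
  - name: git-config
    image: busybox          # assumption; any shell image works
    command: ["/bin/sh", "-c"]
    args:
      - printf '[safe]\n\tdirectory = *\n' > /data/.gitconfig
    volumeMounts:
      - {name: data, mountPath: /data}
containers:
  - name: repowise
    env:
      - {name: HOME, value: /data}
```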
The register-repos sidecar now sleeps forever after completing its
work. This prevents k8s from restarting it in a loop (containers
that exit get restarted by default in a pod).

Also bumps PostgreSQL to max_connections=4000, shared_buffers=2GB,
8Gi memory limit for heavy indexing workloads.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
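The PostgreSQL tuning above uses standard server settings; passed as container args it could look like this sketch (flag names are standard `postgres -c` options, values from the commit):

```yaml
# Tune PostgreSQL for heavy indexing workloads
args:
  - -c
  - max_connections=4000
  - -c
  - shared_buffers=2GB
resources:
  limits:
    memory: 8Gi
```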