This document captures the learning journey and technical concepts explored in building a Kubernetes homelab on a Raspberry Pi.
I built a complete Kubernetes homelab running on a Raspberry Pi 5 with:
- Hardware: Pi 5 (8GB RAM) + 512GB SSD (Pre-assembled desktop kit)
- OS: Debian 12 (Bookworm) - Raspberry Pi OS Desktop
- Kubernetes Cluster: Single-node K3s cluster
- Monitoring Stack: Prometheus + Grafana
- Persistent Storage: Local SSD storage with PVCs
- Ingress Controller: Nginx Ingress for external access
- Sample Applications: Echo server and storage test apps
Lab 1: Kubernetes Basics
What I Built:
- Deployed a simple echo server application
- Set up ingress for external access
- Learned basic Kubernetes concepts
Key Concepts Learned:
- Pods: The smallest deployable units in Kubernetes
- Deployments: Manage pod replicas and updates
- Services: Expose pods internally and externally
- Ingress: Route external traffic to services
- Namespaces: Organize resources logically
Technical Implementation:
# Pod → Deployment → Service → Ingress flow
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
        - name: echo
          image: hashicorp/http-echo
          args:
            - "-text=Hello from Kubernetes!"
          ports:
            - containerPort: 5678
Why This Matters:
- Understanding the basic Kubernetes resource hierarchy
- Learning how applications are deployed and exposed
- Foundation for more complex deployments
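The Deployment above is only part of the Pod → Deployment → Service → Ingress flow. The remaining two resources might look like the following sketch; the echo-server names match the Deployment, but the echo.local hostname is an assumption for illustration.

```yaml
# Service: exposes the echo-server pods inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: echo-server
spec:
  selector:
    app: echo-server
  ports:
    - port: 80
      targetPort: 5678
---
# Ingress: routes external HTTP traffic to the Service
# (host "echo.local" is an assumed value, not from the lab)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-server
spec:
  ingressClassName: nginx
  rules:
    - host: echo.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo-server
                port:
                  number: 80
```

The Service selector matches the pod labels from the Deployment template, which is how traffic finds the replicas regardless of how many the Deployment is running.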
Lab 2: Persistent Storage
What I Built:
- Storage test application with persistent volumes
- Data persistence across pod restarts
- Understanding storage in Kubernetes
Key Concepts Learned:
- PersistentVolume (PV): Physical storage resources
- PersistentVolumeClaim (PVC): Storage requests from pods
- StorageClass: Dynamic provisioning of storage
- Volume Mounts: How pods access storage
Technical Implementation:
# PVC requests storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
# Pod mounts the PVC
spec:
  containers:
    - name: storage-test
      volumeMounts:
        - name: storage-volume
          mountPath: /data
  volumes:
    - name: storage-volume
      persistentVolumeClaim:
        claimName: storage-pvc
Why This Matters:
- Applications need persistent data storage
- Understanding storage abstraction in Kubernetes
- Foundation for stateful applications
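Because the PVC above names no StorageClass, K3s falls back to its bundled local-path provisioner, which is the default StorageClass on a fresh install. Its definition looks roughly like this sketch (reconstructed from K3s defaults, not copied from the cluster):

```yaml
# Default StorageClass shipped with K3s (sketch).
# WaitForFirstConsumer delays volume creation until a pod is
# scheduled, so the data directory is created on the node that
# actually runs the pod.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```

On a single-node Pi cluster the binding mode makes little practical difference, but it matters as soon as a second node is added.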
Lab 3: Monitoring
What I Built:
- Complete monitoring stack with Prometheus and Grafana
- Custom alert rules for system health
- Node Exporter dashboard for system metrics
- Comprehensive system monitoring
Key Concepts Learned:
- Metrics Collection: How Prometheus scrapes metrics
- Time Series Data: Storing and querying metrics over time
- Alerting: Proactive monitoring with alert rules
- Dashboards: Visualizing metrics with Grafana
- Service Discovery: Automatic discovery of monitoring targets
Technical Implementation:
# Prometheus alert rule example
groups:
  - name: node_alerts
    rules:
      - alert: HighCPUUsage
        expr: 100 - (avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "High CPU usage on {{ $labels.instance }}"
Metrics Flow:
Node Exporter → Prometheus → Grafana
system metrics   time series   dashboards
Why This Matters:
- Production systems need monitoring
- Understanding system health and performance
- Proactive issue detection and alerting
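Service discovery is what lets Prometheus find scrape targets without hard-coded addresses. A minimal scrape-config fragment using Kubernetes pod discovery might look like this sketch; the prometheus.io/scrape annotation convention is an assumption, though it is the common default in the Prometheus Helm charts:

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod        # discover every pod via the Kubernetes API
    relabel_configs:
      # Keep only pods that opt in with the annotation
      # prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Carry the pod's namespace and name over as target labels
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
```

New workloads become monitored simply by carrying the right annotation; no Prometheus config change is needed.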
Lab 4: Autoscaling
What I Built:
- Horizontal Pod Autoscaler for the echo application
- CPU and memory-based scaling policies
- Load testing with hey/ab tools
- Real-time scaling visualization in Grafana
Key Concepts Learned:
- HPA: Automatic scaling based on resource utilization
- Metrics Server: Providing resource metrics to Kubernetes
- Scaling Policies: Gradual scale-up, conservative scale-down
- Load Testing: Simulating production traffic patterns
- Observability: Watching scaling events in real-time
Technical Implementation:
# HPA Configuration
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: echo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: echo
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
        - type: Percent
          value: 50
          periodSeconds: 60
        - type: Pods
          value: 2
          periodSeconds: 60
      selectPolicy: Max
Load Testing Commands:
# Light load test (should trigger scaling)
hey -n 10000 -c 50 -z 2m http://your-echo-url/
# Heavy load test (should max out scaling)
hey -n 50000 -c 100 -z 5m http://your-echo-url/
# Monitor scaling in real-time
kubectl get hpa -n apps -w
kubectl get pods -n apps -w
Monitoring & Visualization:
Grafana Dashboard Features:
- Real-time replica count monitoring
- CPU/Memory utilization tracking
- Scaling events timeline
- HPA status and metrics
Key Metrics to Watch:
- Current vs Desired replicas
- CPU utilization percentage
- Memory utilization percentage
- Scaling events and timing
Why This Matters:
- Understanding automatic scaling in production environments
- Learning how to handle traffic spikes gracefully
- Observing the relationship between load and resource consumption
- Foundation for building scalable, resilient applications
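One prerequisite worth noting: the HPA computes Utilization targets as a percentage of each container's resource requests, so the target Deployment must declare them or the HPA reports unknown metrics. A sketch with assumed values:

```yaml
# Fragment of the echo Deployment (request/limit values are assumptions).
# With averageUtilization: 70 in the HPA, scale-up triggers when average
# CPU usage across pods exceeds 70% of the 100m request, i.e. 70m.
spec:
  template:
    spec:
      containers:
        - name: echo
          image: hashicorp/http-echo
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
            limits:
              cpu: 250m
              memory: 128Mi
```

This also means tuning the requests changes the effective scaling threshold, even if the HPA itself is untouched.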
Architecture Overview:
┌──────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│     External     │     │     Ingress      │     │   Applications   │
│     Traffic      │────▶│    Controller    │────▶│   (Echo, etc.)   │
└──────────────────┘     └──────────────────┘     └──────────────────┘
                                                           │
                                                           ▼
┌──────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│    Monitoring    │     │    Kubernetes    │     │     Storage      │
│      Stack       │◀────│     Cluster      │────▶│      (PVCs)      │
│   (Prometheus    │     │      (K3s)       │     │                  │
│    + Grafana)    │     └──────────────────┘     └──────────────────┘
└──────────────────┘
Data Flows:
- Application Traffic: External requests → Ingress → Services → Pods
- Metrics Collection: Node Exporter → Prometheus → Grafana
- Storage: Applications → PVC → Local Storage
- Monitoring: System metrics → Alert rules → Notifications
K3s (Lightweight Kubernetes):
- Why: Perfect for resource-constrained devices like Raspberry Pi
- Benefits: Small footprint, easy installation, full Kubernetes compatibility
- Learning: Understanding Kubernetes without overwhelming complexity
Helm:
- Why: Package manager for Kubernetes applications
- Benefits: Easy deployment, version management, templating
- Learning: How to deploy complex applications with configuration
Prometheus + Grafana:
- Why: Industry standard monitoring stack
- Benefits: Powerful querying, flexible alerting, beautiful dashboards
- Learning: Production-grade monitoring concepts
Nginx Ingress:
- Why: Most popular ingress controller
- Benefits: Load balancing, SSL termination, path-based routing
- Learning: How external traffic reaches applications
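SSL termination in the nginx ingress controller is configured per Ingress via a tls block that references a certificate Secret. A sketch, where both the echo.local host and the Secret name are assumptions for illustration:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-tls
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - echo.local             # assumed hostname
      secretName: echo-tls-cert  # Secret holding tls.crt / tls.key (assumed name)
  rules:
    - host: echo.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo-server
                port:
                  number: 80
```

The controller decrypts HTTPS at the edge and forwards plain HTTP to the Service, so the application itself never handles certificates.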
Metrics Collected:
- System Metrics: CPU, memory, disk, network
- Application Metrics: Pod status, restarts, resource usage
- Infrastructure: Node health, cluster status
Alert Rules:
- High CPU usage (>80% for 2 minutes)
- High memory usage (>85% for 2 minutes)
- High disk usage (>90% for 2 minutes)
- High load average (>2 for 2 minutes)
- Pod restart frequency (>5 in 1 hour)
- Node not ready (critical)
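As an example of how these thresholds map to rules, the high-memory alert could be written like this sketch (standard node_exporter metric names; the exact expression used in the lab may differ):

```yaml
- alert: HighMemoryUsage
  # Percentage of memory in use, derived from node_exporter gauges
  expr: (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100 > 85
  for: 2m
  labels:
    severity: warning
  annotations:
    summary: "High memory usage on {{ $labels.instance }}"
```

The `for: 2m` clause matches the "for 2 minutes" wording above: the condition must hold continuously before the alert fires, which filters out momentary spikes.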
Dashboards:
- Node Exporter Full dashboard (ID: 1860)
- System resource utilization
- Network traffic analysis
- Disk I/O monitoring
Production Practices Applied:
- Monitoring: Complete observability stack
- Alerting: Proactive issue detection
- Persistence: Data survives pod restarts
- Load Balancing: Ingress controller for traffic management
- Resource Management: Proper CPU/memory limits
- Autoscaling: Automatic scaling based on resource utilization
Still Missing for Real Production:
- Backup Strategy: Regular backups of persistent data
- Security: RBAC, network policies, secrets management
- High Availability: Multiple nodes, anti-affinity rules
- CI/CD: Automated deployment pipelines
- Logging: Centralized log aggregation (ELK stack)
Kubernetes Skills:
- Resource Management: Pods, Deployments, Services, Ingress
- Storage: PVs, PVCs, StorageClasses
- Monitoring: Metrics, alerting, dashboards
- Networking: Services, ingress, load balancing
- Autoscaling: HPA, scaling policies, load testing
DevOps Practices:
- Infrastructure as Code: YAML manifests for everything
- Monitoring & Alerting: Production-grade observability
- Troubleshooting: Debugging Kubernetes issues
- Documentation: Comprehensive setup and usage guides
Real-World Relevance:
- Microservices: Deploying and managing multiple services
- Stateful Applications: Handling persistent data
- Monitoring: Understanding system health and performance
- Scalability: Planning for growth and high availability
Future Learning Topics:
- Service Mesh: Istio for advanced traffic management
- Security: RBAC, network policies, secrets
- CI/CD: GitOps with ArgoCD or Flux
- Logging: ELK stack for centralized logging
- Backup & Recovery: Velero for cluster backups
- Multi-cluster Management: Federation or Karmada
- Custom Operators: Building Kubernetes controllers
- Performance Tuning: Optimizing resource usage
- Disaster Recovery: Backup and restore strategies
This homelab project provides a solid foundation for understanding modern cloud-native technologies. From basic Kubernetes concepts to production-ready monitoring, I've covered the essential skills needed for working with containerized applications in a distributed environment.
The hands-on experience with real hardware (Raspberry Pi) makes the learning more tangible and helps understand the practical challenges of running Kubernetes in resource-constrained environments.
Key Takeaway: Kubernetes is not just about containers - it's about building reliable, scalable, and observable systems that can run anywhere.
Each lab includes a 10-second video demonstration showing the key steps and results. Videos are embedded in the showcase website and can be viewed directly from the lab cards.
This project is licensed under the MIT License - see the LICENSE file for details.