wardrowbe

Put your wardrobe in rows. Snap. Organize. Wear.

Built with Claude Code

Features · Quick Start · Deployment · Architecture · Contributing

Self-hosted wardrobe management with AI-powered outfit recommendations. Take photos of your clothes, let AI tag them, and get daily outfit suggestions based on weather and occasion.

Features

  • Photo-based wardrobe - Upload photos, AI extracts clothing details automatically
  • Smart recommendations - Outfits matched to weather, occasion, and your preferences
  • Scheduled notifications - Daily outfit suggestions via ntfy/Mattermost/email
  • Family support - Manage wardrobes for household members
  • Wear tracking - History, ratings, and outfit feedback
  • Analytics - See what you wear, what you don't, color distribution
  • Fully self-hosted - Your data stays on your hardware
  • Works with any AI - OpenAI, Ollama, LocalAI, or any OpenAI-compatible API

Screenshots

Wardrobe & AI Tagging

(Screenshots: Wardrobe Grid, AI Analysis)

Outfit Suggestions & History

(Screenshots: Suggestions, History, Calendar)

Analytics & Pairing

(Screenshots: Analytics, Pairing)

Quick Start

Prerequisites

  • Docker and Docker Compose
  • An AI service (OpenAI API key, or local Ollama/LocalAI instance)

Setup

# Clone the repository
git clone https://github.com/yourusername/wardrowbe.git
cd wardrowbe

# Copy and configure environment
cp .env.example .env
# Edit .env with your settings

# Start everything
docker compose up -d

# Run database migrations
docker compose exec backend alembic upgrade head

Access the app at http://localhost:3000

Development Mode

For hot reloading during development:

docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d

AI Configuration

Using Ollama (Recommended for Self-Hosting)

  1. Install Ollama
  2. Pull models:
    ollama pull llava      # Vision model for clothing analysis
    ollama pull llama3     # Text model for recommendations
  3. Configure in .env:
    AI_BASE_URL=http://host.docker.internal:11434/v1
    AI_VISION_MODEL=llava
    AI_TEXT_MODEL=llama3
    

Using OpenAI

AI_BASE_URL=https://api.openai.com/v1
AI_API_KEY=sk-your-api-key
AI_VISION_MODEL=gpt-4o
AI_TEXT_MODEL=gpt-4o

Using LocalAI

AI_BASE_URL=http://localai:8080/v1
AI_VISION_MODEL=gpt-4-vision-preview
AI_TEXT_MODEL=gpt-3.5-turbo
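
All three options speak the same OpenAI-compatible protocol, so one client covers them all. The sketch below shows what a vision call can look like using the official openai Python client; the prompt, the helper name, and the "ollama" placeholder key are illustrative assumptions, not wardrowbe's actual code.

# Minimal sketch of a clothing-analysis call against any OpenAI-compatible endpoint.
# AI_BASE_URL / AI_API_KEY / AI_VISION_MODEL mirror the .env settings above.
import base64
import os

from openai import OpenAI  # same client works for OpenAI, Ollama, and LocalAI

client = OpenAI(
    base_url=os.environ["AI_BASE_URL"],
    api_key=os.environ.get("AI_API_KEY", "ollama"),  # local servers ignore the key
)

def analyze_clothing_photo(path: str) -> str:
    """Ask the vision model to describe a clothing item (illustrative prompt)."""
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model=os.environ["AI_VISION_MODEL"],
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this clothing item: category, color, material, season."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(analyze_clothing_photo("shirt.jpg"))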

Architecture

┌─────────────────────────────────────────────────────────────┐
│                        Frontend                              │
│                   (Next.js + React Query)                    │
└─────────────────────────┬───────────────────────────────────┘
                          │
┌─────────────────────────▼───────────────────────────────────┐
│                        Backend                               │
│                   (FastAPI + SQLAlchemy)                     │
└──────────┬──────────────┬──────────────────┬────────────────┘
           │              │                  │
    ┌──────▼──────┐ ┌─────▼─────┐    ┌──────▼──────┐
    │  PostgreSQL │ │   Redis   │    │  AI Service │
    │  (Database) │ │(Job Queue)│    │ (OpenAI/etc)│
    └─────────────┘ └─────┬─────┘    └─────────────┘
                          │
               ┌──────────▼──────────┐
               │   Background Worker │
               │    (arq - AI Jobs)  │
               └─────────────────────┘

Tech Stack

Layer            Technology
Frontend         Next.js 14, TypeScript, TanStack Query, Tailwind CSS, shadcn/ui
Backend          FastAPI, SQLAlchemy (async), Pydantic, Python 3.11+
Database         PostgreSQL 15
Cache/Queue      Redis 7
Background Jobs  arq
Authentication   NextAuth.js (supports OIDC, dev credentials)
AI               Any OpenAI-compatible API
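
The Redis/worker split in the diagram keeps slow AI calls out of the request path: the API enqueues a job and the arq worker picks it up. Below is a minimal sketch of that hand-off; the task name analyze_item and the Redis host are assumptions for illustration, not the project's actual task definitions.

# Sketch of the job-queue hand-off shown in the diagram, using arq.
from arq import create_pool
from arq.connections import RedisSettings

REDIS = RedisSettings(host="redis", port=6379)  # assumed compose service name

async def analyze_item(ctx, item_id: int) -> None:
    # Worker side: load the photo, call the AI service, store the tags.
    print(f"analyzing wardrobe item {item_id}")

async def enqueue_example() -> None:
    # API side: the FastAPI backend enqueues a job instead of blocking the request.
    pool = await create_pool(REDIS)
    await pool.enqueue_job("analyze_item", 42)

class WorkerSettings:
    # Run the worker with: arq this_module.WorkerSettings
    functions = [analyze_item]
    redis_settings = REDIS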

Deployment

Docker Compose (Production)

See docker-compose.prod.yml for production configuration.

docker compose -f docker-compose.prod.yml up -d
docker compose exec backend alembic upgrade head

Kubernetes

See the k8s/ directory for Kubernetes manifests including:

  • PostgreSQL and Redis with persistent storage
  • Backend API and worker deployments
  • Next.js frontend
  • Ingress with TLS
  • Network policies

Configuration

Environment Variables

Variable         Description                    Required
DATABASE_URL     PostgreSQL connection string   Yes
SECRET_KEY       Backend secret for JWT         Yes
NEXTAUTH_SECRET  NextAuth session encryption    Yes
AI_BASE_URL      AI service URL                 Yes
AI_API_KEY       AI API key (if required)       Depends

See .env.example for all options.
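
As a starting point, a minimal .env could look like the sketch below. It uses only the variables documented in this README; the connection-string format, host names, and values are placeholder assumptions, so defer to .env.example.

# Placeholder values - adjust to your setup and check .env.example
DATABASE_URL=postgresql+asyncpg://wardrowbe:change-me@postgres:5432/wardrowbe
SECRET_KEY=replace-with-a-long-random-string
NEXTAUTH_SECRET=replace-with-another-long-random-string
AI_BASE_URL=http://host.docker.internal:11434/v1
AI_API_KEY=
AI_VISION_MODEL=llava
AI_TEXT_MODEL=llama3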

Authentication

  • Development Mode (default): Simple email/name login
  • OIDC Mode: Authentik, Keycloak, Auth0, or any OIDC provider

Notifications

  • ntfy.sh: Free push notifications (see the sketch after this list)
  • Mattermost: Team messaging webhook
  • Email: SMTP-based
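
The ntfy option is the simplest to try: a push is just an HTTP POST to a topic URL. A minimal sketch, where the topic name and message are placeholders rather than wardrowbe's notifier code:

# Send a daily-suggestion push over ntfy (plain HTTP POST to a topic).
import urllib.request

def send_ntfy(topic: str, title: str, message: str) -> None:
    req = urllib.request.Request(
        f"https://ntfy.sh/{topic}",
        data=message.encode("utf-8"),
        headers={"Title": title},
        method="POST",
    )
    urllib.request.urlopen(req)

send_ntfy("my-wardrobe-topic", "Today's outfit", "Blue oxford shirt, chinos, white sneakers")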

Weather

Uses Open-Meteo - free, no API key needed.
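
For reference, a forecast lookup against Open-Meteo is a single unauthenticated HTTP request; the coordinates below are placeholders:

# Fetch a daily forecast from Open-Meteo (free, no API key).
import json
import urllib.parse
import urllib.request

def fetch_forecast(lat: float, lon: float) -> dict:
    params = urllib.parse.urlencode({
        "latitude": lat,
        "longitude": lon,
        "daily": "temperature_2m_max,temperature_2m_min,precipitation_sum",
        "timezone": "auto",
    })
    with urllib.request.urlopen(f"https://api.open-meteo.com/v1/forecast?{params}") as resp:
        return json.load(resp)

print(fetch_forecast(52.52, 13.41)["daily"]["temperature_2m_max"][0])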

Development

Backend

cd backend
pip install -r requirements.txt

# Run tests
pytest

# Run with hot reload
uvicorn app.main:app --reload

Frontend

cd frontend
npm install

# Run dev server
npm run dev

# Run tests
npm test

# Build
npm run build

API Documentation

Interactive API documentation is available while the backend is running, at FastAPI's default paths: Swagger UI at /docs and ReDoc at /redoc.

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Requirements

  • Docker & Docker Compose
  • ~4GB RAM (with local Ollama models)
  • Storage for clothing photos

Works great on a Raspberry Pi 5!
