# Backend System

Welcome to the backend system, an AI-powered memory and intelligence platform that builds a persistent, evolving understanding of each user through content processing and psychological insight generation.

## Core Features

- **Crystal Intelligence System**: AI pipeline that extracts and evolves psychological insights from user interactions
- **Unified Intelligence Processing**: AI agents that process content into structured psychological patterns
- **Living Project Intelligence**: Dynamic project fingerprinting and AI-powered widget generation
- **Background Job Processing**: Redis-backed async processing for heavy AI operations
- **Context Enrichment**: Multi-armed bandit learning for optimal context strategies
- **Vector Search Integration**: Semantic content retrieval and matching
- **Database Integration**: Comprehensive Convex toolkit for data operations

## Project Structure

```
backend/
├── app/                           # Main application source code
│   ├── agents/                    # AI agents for content processing and intelligence
│   │   ├── persona_crystallization/  # Crystal intelligence system
│   │   │   ├── crystal_dam/          # Content accumulation and batching
│   │   │   ├── shard_formation/      # Psychological insight extraction
│   │   │   ├── crystal_formation/    # ML clustering and LLM synthesis
│   │   │   └── crystal_intelligence/ # Intelligence analysis and evolution
│   │   ├── unified_intelligence/     # Unified AI processing system
│   │   ├── smart_notes/              # Note processing and enhancement
│   │   ├── chaos_engine/             # Serendipitous discovery system
│   │   └── shared/                   # Shared AI utilities and services
│   ├── background_jobs/           # Redis-backed async job processing
│   │   ├── executors/             # Job execution handlers
│   │   ├── types/                 # Job type definitions
│   │   └── job_triggers.py        # Job triggering logic
│   ├── convex_toolkit/            # Convex database integration
│   │   ├── api/                   # Database API wrappers
│   │   └── atomic_operations/     # Thread-safe database operations
│   ├── routes/                    # FastAPI endpoints
│   │   ├── chat/                  # Chat and streaming endpoints
│   │   ├── crystal_formation/     # Crystal processing endpoints
│   │   ├── cognitive_field_formation/ # Cognitive field processing
│   │   └── project_discovery/     # Project fingerprinting
│   ├── models/                    # Pydantic models for API validation
│   ├── prompts/                   # AI prompt templates
│   ├── config.py                  # Application configuration
│   └── main.py                    # FastAPI application entry point
├── tests/                         # Comprehensive test suite
├── requirements.txt               # Python dependencies
└── .env.example                   # Environment variable template
```

## Key Components Explained

### Crystal Intelligence System (`app/agents/persona_crystallization/`)

The core AI system that transforms user interactions into psychological insights:

- **Crystal Dam**: Intelligent content accumulation with multi-trigger processing (token/word/item thresholds)
- **Shard Formation**: AI-powered extraction of psychological insights from content
- **Crystal Formation**: ML clustering + LLM synthesis creates coherent personality patterns
- **Crystal Management**: Intelligent evolution system that updates existing crystals instead of creating duplicates
- **Vector Matching**: Semantic similarity prevents insight duplication with a 75% threshold (see the sketch below)
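
As a rough illustration of the vector-matching step, the sketch below compares a new insight's embedding against existing ones with a 0.75 cosine-similarity cutoff. The function and variable names are illustrative, not the actual `persona_crystallization` API.

```python
# Illustrative sketch only: the helper names are hypothetical, not the real
# persona_crystallization module API.
import numpy as np

SIMILARITY_THRESHOLD = 0.75  # matches the 75% threshold described above

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_duplicate_insight(new_vector: np.ndarray, existing_vectors: list[np.ndarray]) -> bool:
    """Return True if the new insight is semantically close to an existing one."""
    return any(cosine_similarity(new_vector, v) >= SIMILARITY_THRESHOLD for v in existing_vectors)
```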

### Unified Intelligence System (`app/agents/unified_intelligence/`)

Advanced AI processing that handles multiple entity types:

- **Intelligence Orchestrator**: Coordinates processing of crystals, shards, stardust, and cognitive fields
- **Multi-Armed Bandit Learning**: Optimizes context enrichment strategies per user (sketched below)
- **Background Processing**: Async intelligence analysis and evolution
- **Convergence Integration**: Self-learning optimization pipeline
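
To make the multi-armed bandit idea concrete, here is a minimal epsilon-greedy sketch for picking a context-enrichment strategy per user. The strategy names and reward signal are assumptions for illustration; the real `unified_intelligence` implementation may differ.

```python
# Minimal epsilon-greedy bandit sketch; arm names and rewards are hypothetical.
import random
from collections import defaultdict

STRATEGIES = ["recent_messages", "crystal_summary", "vector_recall"]  # illustrative arms

class ContextStrategyBandit:
    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts: dict[str, int] = defaultdict(int)
        self.values: dict[str, float] = defaultdict(float)

    def select(self) -> str:
        if random.random() < self.epsilon or not self.counts:
            return random.choice(STRATEGIES)                    # explore
        return max(STRATEGIES, key=lambda s: self.values[s])    # exploit best-known arm

    def update(self, strategy: str, reward: float) -> None:
        self.counts[strategy] += 1
        n = self.counts[strategy]
        self.values[strategy] += (reward - self.values[strategy]) / n  # running mean
```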

### Background Job System (`app/background_jobs/`)

Redis-backed async processing for heavy AI operations:

- **Job Executors**: Handlers for crystal formation, shard extraction, cognitive field processing
- **Job Triggers**: Automatic triggering based on content thresholds and user activity
- **Atomic Operations**: Thread-safe processing with distributed locks
- **Error Recovery**: Comprehensive error handling and retry mechanisms
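
As an illustration of the Redis-backed queue pattern, the following sketch enqueues and claims jobs with `redis-py`. The queue name and payload shape are hypothetical, not the actual job schema used by `app/background_jobs/`.

```python
# Sketch only: "background_jobs" queue name and the job payload are illustrative.
import json
import redis

r = redis.Redis.from_url("redis://localhost:6379")

def enqueue_job(job_type: str, payload: dict) -> None:
    """Push a job onto a Redis list that a background worker polls."""
    r.lpush("background_jobs", json.dumps({"type": job_type, "payload": payload}))

def claim_next_job(timeout: int = 5) -> dict | None:
    """Blocking pop so idle workers wait instead of spinning."""
    item = r.brpop("background_jobs", timeout=timeout)
    return json.loads(item[1]) if item else None
```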

### Convex Toolkit (`app/convex_toolkit/`)

Comprehensive database integration:

- **API Wrappers**: Type-safe database operations for all entities
- **Atomic Operations**: Thread-safe database transactions
- **Response Handling**: Centralized error handling and data validation
- **Background Sync**: Automatic synchronization with frontend state
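
A minimal sketch of what a type-safe wrapper could look like, assuming the official `convex` Python client and Pydantic; the Convex function path and the `Crystal` fields are hypothetical, not the toolkit's real API.

```python
# Hypothetical typed wrapper; "crystals:getForUser" and the Crystal fields are
# illustrative assumptions, not the actual convex_toolkit schema.
import os
from convex import ConvexClient
from pydantic import BaseModel

class Crystal(BaseModel):
    id: str
    summary: str
    confidence: float

client = ConvexClient(os.environ["CONVEX_URL"])

def get_crystals_for_user(user_id: str) -> list[Crystal]:
    """Query Convex and validate the response into typed models."""
    rows = client.query("crystals:getForUser", {"userId": user_id})
    return [Crystal(**row) for row in rows]
```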

## Technologies Used

- **Backend Framework**: FastAPI with async/await support
- **AI Framework**: Agno framework for modular AI agents
- **AI Models**: Google Gemini via Vertex AI
- **Database**: Convex with real-time synchronization
- **Background Processing**: Redis-backed job queue with async executors
- **Vector Search**: Semantic content retrieval and matching
- **Data Validation**: Pydantic models for API validation
- **Language**: Python 3.x with comprehensive type hints
- **Testing**: pytest with comprehensive test coverage
- **Dependency Management**: pip with requirements.txt

## Setup and Installation

1. **Clone the repository:**

   ```bash
   git clone [repository-url]
   cd backend
   ```

2. **Create and activate a Python virtual environment:**

   ```bash
   python3 -m venv venv
   source venv/bin/activate
   # On Windows use `venv\Scripts\activate`
   ```

3. **Install dependencies:**

   ```bash
   pip install -r requirements.txt
   ```

4. **Set up environment variables:**

   - Copy the example file: `cp .env.example .env`
   - Edit the `.env` file with your credentials:

     ```bash
     # Convex Configuration
     CONVEX_URL=your_convex_url
     CONVEX_DEPLOYMENT_KEY=your_deployment_key

     # Google Cloud / Vertex AI
     GOOGLE_APPLICATION_CREDENTIALS=path/to/service-account.json
     PROJECT_ID=your_gcp_project_id

     # Redis Configuration
     REDIS_URL=redis://localhost:6379

     # Backend Configuration
     BACKEND_URL=http://localhost:8000
     ```
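
For reference, here is a minimal sketch of how `app/config.py` might read these variables, assuming `python-dotenv`; this is an assumption about the config module, not its actual contents.

```python
# app/config.py-style sketch (assumed, not the actual module): load .env and
# expose the settings the rest of the app needs.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the working directory

CONVEX_URL = os.environ["CONVEX_URL"]
CONVEX_DEPLOYMENT_KEY = os.environ["CONVEX_DEPLOYMENT_KEY"]
PROJECT_ID = os.environ["PROJECT_ID"]
REDIS_URL = os.getenv("REDIS_URL", "redis://localhost:6379")
BACKEND_URL = os.getenv("BACKEND_URL", "http://localhost:8000")
```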

## Running the Application

1. **Ensure your environment variables are set in the `.env` file.**

2. **Start Redis** (required for background job processing):

   ```bash
   redis-server
   ```

3. **Start the FastAPI development server:**

   ```bash
   uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
   ```

   - `--reload`: Enables auto-reloading when code changes
   - `--host 0.0.0.0`: Makes the server accessible on your local network
   - `--port 8000`: Specifies the port to run on

4. **Access the application:**

   - API Documentation: http://127.0.0.1:8000/docs
   - Health Check: http://127.0.0.1:8000/health
   - Background Jobs: http://127.0.0.1:8000/jobs/status

## Testing

The backend includes comprehensive test coverage:

- **Unit Tests**: Individual component testing in `app/tests/`
- **Integration Tests**: End-to-end testing in `tests/`
- **Crystal System Tests**: Specialized tests for crystal formation and intelligence
- **Background Job Tests**: Async processing and job queue testing
- **API Tests**: FastAPI endpoint testing with authentication

### Running Tests

```bash
# Run all tests
pytest

# Run specific test categories
pytest tests/test_crystal_formation.py
pytest tests/test_background_jobs.py
pytest tests/test_unified_intelligence.py

# Run with coverage
pytest --cov=app --cov-report=html
```

### Key Test Files

- `test_crystal_formation_v2.py` - Crystal formation pipeline testing
- `test_unified_intelligence_system.py` - Unified intelligence processing
- `test_chaos_engine.py` - Serendipitous discovery system
- `test_convergence_sdk.py` - Convergence SDK integration
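
For example, an endpoint test against the `/health` route might look like the sketch below, using FastAPI's `TestClient` and the `app.main:app` entry point referenced above.

```python
# Minimal API test sketch; assumes the /health route mentioned earlier exists.
from fastapi.testclient import TestClient

from app.main import app

client = TestClient(app)

def test_health_endpoint_returns_ok():
    response = client.get("/health")
    assert response.status_code == 200
```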

## Key Features

### 🔮 Crystal Intelligence System
Sophisticated AI pipeline that extracts and evolves psychological insights:
- **Content Processing**: Intelligent batching and analysis of user interactions
- **Shard Extraction**: AI analyzes content for psychological insights
- **Crystal Formation**: ML clustering + LLM synthesis creates coherent personality patterns
- **Intelligent Evolution**: System updates existing crystals instead of creating duplicates
- **Vector Matching**: Semantic similarity prevents insight duplication

### 🧠 Unified Intelligence Processing
Advanced AI processing that handles multiple entity types:
- **Intelligence Orchestrator**: Coordinates processing of crystals, shards, stardust, and cognitive fields
- **Multi-Armed Bandit Learning**: Optimizes context enrichment strategies per user
- **Background Processing**: Async intelligence analysis and evolution
- **Convergence Integration**: Self-learning optimization pipeline

### 🌌 Living Project Intelligence
Dynamic project fingerprinting and AI-powered widget generation:
- **Project Discovery**: AI discovers project characteristics through conversation
- **Fingerprint Evolution**: Projects learn and adapt based on user interactions
- **Widget Generation**: AI creates personalized tools for each project
- **Project Mapping**: Visual representation of project relationships
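
As a sketch of what a project fingerprint could carry, the Pydantic model below is purely illustrative; the actual `project_discovery` schema may differ.

```python
# Hypothetical project fingerprint shape; fields are illustrative only.
from datetime import datetime
from pydantic import BaseModel, Field

class ProjectFingerprint(BaseModel):
    project_id: str
    themes: list[str] = Field(default_factory=list)           # discovered through conversation
    working_style: str | None = None                          # e.g. "exploratory", "deadline-driven"
    suggested_widgets: list[str] = Field(default_factory=list)
    last_updated: datetime = Field(default_factory=datetime.utcnow)

    def evolve(self, new_themes: list[str]) -> None:
        """Merge newly discovered themes instead of overwriting the fingerprint."""
        self.themes = sorted(set(self.themes) | set(new_themes))
        self.last_updated = datetime.utcnow()
```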

### ⚡ Background Job Processing
Redis-backed async processing for heavy AI operations:
- **Job Executors**: Handlers for crystal formation, shard extraction, cognitive field processing
- **Job Triggers**: Automatic triggering based on content thresholds and user activity
- **Atomic Operations**: Thread-safe processing with distributed locks
- **Error Recovery**: Comprehensive error handling and retry mechanisms

## API Documentation

Once the server is running, interactive API documentation is available:

* **Swagger UI:** `http://127.0.0.1:8000/docs`
* **ReDoc:** `http://127.0.0.1:8000/redoc`

### Key Endpoints

- `/api/v1/chat/stream` - Main chat interface with streaming responses
- `/api/v1/lab/message` - Thinking lab with enhanced context processing
- `/api/v1/crystal/formation` - Crystal formation and management
- `/api/v1/cognitive-field/formation` - Cognitive field processing
- `/api/v1/project-discovery/generate` - Project fingerprinting
- `/api/v1/ambient-insights` - Background AI insight generation
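
For example, the streaming chat endpoint can be exercised from Python with `httpx`; note that the request body shown here (`user_id`, `message`) is an assumed shape, not the documented schema.

```python
# Example client call; the payload fields are an assumption for illustration.
import httpx

payload = {"user_id": "demo-user", "message": "What have I been working on lately?"}

with httpx.stream("POST", "http://127.0.0.1:8000/api/v1/chat/stream", json=payload, timeout=60) as response:
    response.raise_for_status()
    for line in response.iter_lines():
        if line:
            print(line)  # each line is one streamed chunk
```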

## Development Practices

* Follow standard Python coding conventions (PEP 8)
* Write clear, concise code with comprehensive type hints
* Add unit or integration tests for new features
* Use async/await patterns for all I/O operations
* Implement atomic operations for database transactions
* Follow the Agno framework conventions for agent development
* Manage prompts effectively in the `app/prompts` directory
* Use distributed locks for thread-safe operations
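
As an example of the distributed-lock practice, the sketch below uses `redis-py`'s built-in lock; the key naming is illustrative, not the scheme used in `atomic_operations/`.

```python
# Distributed-lock pattern sketch; the lock key format is hypothetical.
import redis

r = redis.Redis.from_url("redis://localhost:6379")

def update_crystal_safely(user_id: str, update_fn) -> None:
    """Run an update while holding a per-user lock so concurrent jobs don't clash."""
    lock = r.lock(f"lock:crystals:{user_id}", timeout=30, blocking_timeout=10)
    if lock.acquire():
        try:
            update_fn()
        finally:
            lock.release()
```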

## Deployment

### Google Cloud Run
1. **Ensure your code is committed and pushed to your repository**
2. **Use the Google Cloud SDK (gcloud):**
   ```bash
   # Deploy the service
   gcloud run deploy your-service-name --source . --platform managed --region us-central1
   ```

### Environment Variables for Production

```bash
# Convex Configuration
CONVEX_URL=your_production_convex_url
CONVEX_DEPLOYMENT_KEY=your_production_deployment_key

# Google Cloud / Vertex AI
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
PROJECT_ID=your_gcp_project_id

# Redis Configuration
REDIS_URL=redis://your_redis_instance:6379

# Backend Configuration
BACKEND_URL=https://your_backend_domain.com
```

## Performance & Scalability

- **Throughput**: 100-1000 shards per user processed efficiently
- **Processing Time**: 30-120 seconds for crystal formation (depending on cluster count)
- **Output**: 5-50 crystals per user with intelligent evolution
- **Cost Efficiency**: 95% reduction vs. a naive embedding approach
- **Parallel Processing**: Embedding generation and batch operations
- **Memory Efficient**: Temporary embeddings with immediate cleanup
- **Database Optimized**: Minimal storage footprint with proper indexing

## Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix (`git checkout -b feature/your-feature-name`).
  3. Make your changes, adhering to the development practices.
  4. Ensure all tests pass.
  5. Commit your changes with clear messages.
  6. Push your branch to your fork.
  7. Submit a pull request to the main repository.
