
A local-first AI CLI that embodies the Trust open framework privacy principles. Fork of Gemini CLI using GGUF models for complete offline capability.




The Trust CLI

The official local-first AI command-line engine for The Open Assurance Collective.

The Trust CLI is the flagship tool in the Trust Tool Suite, published by The Open Assurance Collective. It is a privacy-first AI assistant that runs entirely on your local machine, giving you complete control over your data and AI workflows.

This tool is the "how" of the Collective's mission: it takes the principles, modules, and patterns from the trust-framework and makes them actionable, automating workflows and bringing modern assurance practices to the command line. It is designed for auditors, GRC professionals, security engineers, and any practitioner who needs to build trust in complex systems.

πŸ›‘οΈ Key Features

  • 100% Local Execution - All AI inference runs on your hardware
  • Privacy First - Your data never leaves your machine
  • Multi-Format Support - 10 models across 6 formats (Llama, Phi, Qwen, Mistral, Gemma, DeepSeek)
  • Smart Recommendations - Task-specific model suggestions with RAM-aware filtering
  • Model Management - Download, verify, and manage GGUF models locally
  • Performance Monitoring - Real-time system metrics and optimization
  • Hardware-Aware - Automatic optimization based on your system specs
  • Cryptographic Security - SHA-256 verification for all models
  • Transparent - Open source with full audit trail

🚀 Quick Start

Prerequisites: Ollama (Recommended)

For the best experience, we strongly recommend installing Ollama, which provides the fastest and easiest way to run local AI models.

Download Ollama for macOS, Windows, and Linux

After installing, you can pull models directly from the command line (e.g., ollama pull qwen2.5:1.5b), and the Trust CLI will automatically detect and use them when available.

Installation

# Clone the repository
git clone https://github.com/audit-brands/trust-cli.git
cd trust-cli

# Install dependencies
npm install

# Build the project
npm run build

# Bundle for production use
npm run bundle

# Run Trust CLI
node bundle/trust.js

First Steps

IMPORTANT: All trust commands are run from your regular terminal (not from within the trust CLI interface).

Navigate to your trust-cli directory and run commands using the following format:

  1. Check available models

    # From your regular terminal in the trust-cli directory:
    node bundle/trust.js model list
  2. Download a model (start with the lightweight one)

    node bundle/trust.js model download qwen2.5-1.5b-instruct
  3. Verify model integrity

    node bundle/trust.js model verify qwen2.5-1.5b-instruct
  4. Switch to the downloaded model

    node bundle/trust.js model switch qwen2.5-1.5b-instruct
  5. Start interactive mode (optional)

    node bundle/trust.js

💡 Create an Alias for Easier Use

To avoid typing node bundle/trust.js every time, create an alias:

# Option 1: Using full path (recommended for permanent setup)
# Add this to your ~/.zshrc (macOS) or ~/.bashrc (Linux):
alias trust="node /full/path/to/your/trust-cli/bundle/trust.js"

# Option 2: Using current directory (works from trust-cli folder)
# Add this to your ~/.zshrc file:
alias trust="node $(pwd)/bundle/trust.js"

# Reload your shell configuration:
source ~/.zshrc  # or source ~/.bashrc

# Now you can run commands simply as:
trust model list
trust model download qwen2.5-1.5b-instruct
trust model recommend coding

📦 Model Management

Trust CLI provides comprehensive model management capabilities:

List Available Models

node bundle/trust.js model list
# or with alias: trust model list

Shows all available models with their status, RAM requirements, and trust scores.

Download Models

node bundle/trust.js model download <model-name>
# or with alias: trust model download <model-name>

Downloads models from Hugging Face with real-time progress tracking:

  • Progress percentage, speed, and ETA
  • Automatic integrity verification
  • Resume support for interrupted downloads

Verify Model Integrity

node bundle/trust.js model verify <model-name>
# or verify all models:
node bundle/trust.js model verify
# or with alias: trust model verify

Performs SHA-256 hash verification to ensure model integrity and security.
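Conceptually, the check is simple and can be reproduced with coreutils. This is a sketch of the idea, not the CLI's implementation; the digest below is the SHA-256 of empty input, standing in for a model's published checksum:

```shell
# Hash the (empty, stand-in) input and compare against the expected digest,
# which is what `trust model verify` does for each downloaded GGUF file.
expected="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
actual=$(printf '' | sha256sum | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "verified"
else
  echo "MISMATCH - re-download the model"
fi
```

If the digests differ, the file should be re-downloaded rather than used.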

Switch Active Model

node bundle/trust.js model switch <model-name>
# or with alias: trust model switch <model-name>

Changes the active model for inference operations.

Get Model Recommendations

node bundle/trust.js model recommend <task-type>
# or with alias: trust model recommend <task-type>

Get intelligent model recommendations based on your task and hardware:

  • coding - Phi models optimized for programming tasks
  • multilingual - Mistral models for international/translation work
  • reasoning - DeepSeek models for complex analysis
  • quick - Qwen models for fast responses
  • context - Large context models for document processing
  • quality - Highest trust score models within RAM limits

Delete Models

trust model delete <model-name>

Remove downloaded models to free up disk space.

📊 Performance Monitoring

Trust CLI includes comprehensive performance monitoring tools:

Quick Status

trust perf status

Shows current system status including CPU, memory, and heap usage.

Detailed Report

trust perf report

Comprehensive system performance report with:

  • System resources (CPU, RAM, load averages)
  • Node.js memory details
  • Inference performance history
  • Hardware specifications

Real-time Monitoring

trust perf watch

Live performance monitoring with updates every second. Press Ctrl+C to stop.

Optimization Suggestions

trust perf optimize

Get personalized recommendations for optimal model settings based on your hardware:

  • Recommended RAM allocation
  • Optimal context sizes
  • Best quantization methods
  • Model suggestions for your system
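The available-memory figure these suggestions start from can be read directly on Linux. A rough sketch of that check, reading /proc/meminfo (Linux-only assumption; the CLI's own probing may differ):

```shell
# Print available RAM in GB, the same figure RAM-aware model
# filtering is based on.
awk '/MemAvailable/ {printf "Available RAM: %.1f GB\n", $2/1024/1024}' /proc/meminfo
```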

πŸ›‘οΈ Privacy & Security Management

Trust CLI offers three privacy modes for different security requirements:

Privacy Modes

trust privacy list          # View all available privacy modes
trust privacy status        # Show current privacy configuration
trust privacy switch strict # Switch to strict privacy mode
trust privacy info moderate # Get detailed info about moderate mode

Available Modes:

  • Strict: Maximum privacy - no external connections, mandatory verification
  • Moderate: Balanced privacy and functionality - recommended for most users
  • Open: Full functionality for development and testing

💬 Advanced Chat & Context Management

Streaming Conversations

trust chat                  # Start interactive chat with streaming
trust chat --model phi-3.5-mini-instruct  # Use specific model

Codebase Analysis

trust analyze ./src         # Analyze entire codebase
trust context --files "*.ts" --importance high  # Add specific files

Git Integration

trust git status           # Analyze repository status
trust git review           # AI-powered code review
trust git suggest-commit   # Generate commit messages

🎯 Benchmarking & Testing

Performance Benchmarks

trust benchmark run        # Run comprehensive benchmark suite
trust benchmark quick      # Quick performance test
trust benchmark compare    # Compare multiple models

Model Testing

trust test model <name>    # Test specific model performance
trust test inference       # Test inference pipeline
trust test streaming       # Test streaming capabilities

📚 Offline Help System

Comprehensive Documentation

trust help                 # Main help menu
trust help models          # Model management help
trust help performance     # Performance monitoring help
trust help privacy         # Privacy and security help
trust help search <query>  # Search help topics

Interactive UI

trust ui                   # Launch advanced terminal UI
trust ui models            # Interactive model manager
trust ui benchmark         # Live benchmarking interface

🤖 Available Models

Trust CLI now supports 10 models across 6 different model formats with intelligent task-specific recommendations:

Lightweight Models (2-4GB RAM)

| Model | Format | Parameters | RAM | Context | Description | Trust Score |
|-------|--------|------------|-----|---------|-------------|-------------|
| qwen2.5-1.5b-instruct | Qwen | 1.5B | 2GB | 4K | Ultra-fast for quick responses | 8.8/10 |
| gemma-2-2b-instruct | Gemma | 2.6B | 3GB | 8K | Compact Google model with larger context | 8.9/10 |
| phi-3.5-mini-instruct | Phi | 3.8B | 3GB | 4K | Optimized for coding and technical tasks | 9.5/10 |
| phi-3.5-mini-uncensored | Phi | 3.8B | 3GB | 4K | Uncensored for risk analysis & auditing | 9.3/10 |
| llama-3.2-3b-instruct | Llama | 3B | 4GB | 4K | Balanced performance for general use | 9.2/10 |

Mid-Range Models (6-8GB RAM)

| Model | Format | Parameters | RAM | Context | Description | Trust Score |
|-------|--------|------------|-----|---------|-------------|-------------|
| mistral-7b-instruct | Mistral | 7B | 6GB | 8K | Efficient multilingual model | 9.1/10 |
| deepseek-r1-distill-7b | DeepSeek | 7.6B | 6GB | 4K | Advanced reasoning for complex analysis | 9.6/10 |
| llama-3.1-8b-instruct | Llama | 8B | 8GB | 4K | High-quality responses for complex tasks | 9.7/10 |
| gemma-2-9b-instruct | Gemma | 9B | 8GB | 8K | Advanced Google model with strong performance | 9.3/10 |

Large Context Models (10GB+ RAM)

| Model | Format | Parameters | RAM | Context | Description | Trust Score |
|-------|--------|------------|-----|---------|-------------|-------------|
| mistral-nemo-12b-instruct | Mistral | 12B | 10GB | 128K | Massive context for document analysis | 9.4/10 |

📥 Downloading Models

All models can be downloaded directly without authentication:

Lightweight Models (Great for getting started):

trust model download qwen2.5-1.5b-instruct     # 1.8GB - Ultra-fast responses
trust model download gemma-2-2b-instruct       # 1.6GB - Google's compact model
trust model download phi-3.5-mini-instruct     # 2.4GB - Excellent for coding
trust model download llama-3.2-3b-instruct     # 1.9GB - Balanced general use

Mid-Range Models (Best performance/resource balance):

trust model download mistral-7b-instruct       # 4.4GB - Great for multilingual
trust model download deepseek-r1-distill-7b    # 4.5GB - Advanced reasoning & analysis
trust model download llama-3.1-8b-instruct     # 4.9GB - Highest quality responses
trust model download gemma-2-9b-instruct       # 5.4GB - Advanced Google model

Large Context Models (For document processing):

trust model download mistral-nemo-12b-instruct # 6.9GB - 128K context window

Specialized Models:

trust model download phi-3.5-mini-uncensored   # 2.4GB - Risk analysis & auditing

🎯 Smart Model Recommendations

Use Trust CLI's intelligent recommendation system to get the perfect model for your task:

trust model recommend coding        # → Recommends Phi models
trust model recommend multilingual  # → Recommends Mistral models
trust model recommend reasoning     # → Recommends DeepSeek models
trust model recommend quick         # → Recommends Qwen models
trust model recommend context       # → Recommends Mistral Nemo (128K context)
trust model recommend --ram 16      # → Shows models that fit in 16GB RAM

Model Selection Guide by Use Case:

🚀 Getting Started (2-4GB RAM):

  • qwen2.5-1.5b-instruct: Start here - fastest responses, minimal resources
  • gemma-2-2b-instruct: More capable, 8K context window
  • phi-3.5-mini-instruct: Best for coding and technical work

💼 Professional Work (6-8GB RAM):

  • mistral-7b-instruct: Multilingual projects, efficient performance
  • deepseek-r1-distill-7b: Complex analysis, step-by-step reasoning
  • llama-3.1-8b-instruct: Highest quality general responses

📚 Document & Research Work (10GB+ RAM):

  • mistral-nemo-12b-instruct: 128K context for processing entire documents

🔍 Security & Risk Analysis:

  • phi-3.5-mini-uncensored: For auditors who need unfiltered model responses

All models use community GGUF conversions that are publicly accessible.

HF Authentication (Future Use)

Trust CLI supports Hugging Face authentication for future gated models:

# Set up authentication (for gated models in the future)
trust auth login --hf-token YOUR_TOKEN

# Check authentication status
trust auth status

# Remove authentication
trust auth logout

Currently, all included models are publicly accessible without authentication.

🔧 Configuration

Trust CLI stores its configuration and models in ~/.trustcli/:

  • models/ - Downloaded model files
  • models.json - Model configurations and metadata
  • config.json - CLI settings

Environment Variables

  • TRUST_MODEL - Override default model selection
  • TRUST_MODELS_DIR - Custom models directory location
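For a single invocation, an environment variable takes precedence over the configured default. A minimal shell sketch of that precedence (the model names come from the model tables in this README; the fallback expansion mirrors, not reproduces, the CLI's lookup):

```shell
# With TRUST_MODEL unset, the configured default is used.
unset TRUST_MODEL
echo "${TRUST_MODEL:-qwen2.5-1.5b-instruct}"   # prints the default

# With TRUST_MODEL set, the override wins.
TRUST_MODEL=phi-3.5-mini-instruct
echo "${TRUST_MODEL:-qwen2.5-1.5b-instruct}"   # prints the override
```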

🎯 Multi-Model Architecture & Setup

Trust CLI now supports three different AI backends with intelligent fallback to give you complete choice over your AI inference approach:

🚀 Ollama Integration (Recommended)

Best for: Fast setup, excellent performance, easy model management

Ollama provides the optimal balance of performance, simplicity, and privacy. It uses OpenAI-compatible APIs with native tool calling support.

Quick Ollama Setup:

# 1. Install Ollama (Linux/macOS)
curl -fsSL https://ollama.ai/install.sh | sh

# 2. Start Ollama service
ollama serve

# 3. Trust CLI will automatically detect and use Ollama
trust
> hello world

Ollama Model Management:

# List available models
ollama list

# Pull recommended models
ollama pull qwen2.5:1.5b      # Lightweight (1.5B, ~1GB)
ollama pull qwen2.5:7b        # Balanced (7B, ~4GB)
ollama pull llama3.2:3b       # Alternative (3B, ~2GB)

# Trust CLI will automatically use the best available model

Ollama Configuration:

# Set preferred Ollama model
trust config set ai.ollama.defaultModel qwen2.5:7b

# Adjust timeout for slower hardware
trust config set ai.ollama.timeout 180000  # 3 minutes

# Set custom Ollama URL (if running remotely)
trust config set ai.ollama.baseUrl http://your-server:11434

🏠 HuggingFace Models (Local GGUF)

Best for: Complete offline operation, fine-grained control, zero dependencies

This is the original Trust CLI approach using locally downloaded GGUF models.

HuggingFace Local Setup:

# Download models directly
trust model download qwen2.5-1.5b-instruct
trust model download phi-3.5-mini-instruct

# Switch to downloaded model
trust model switch qwen2.5-1.5b-instruct

# Trust CLI will use HuggingFace models if Ollama isn't available

HuggingFace Local Configuration:

# Enable/disable HuggingFace local fallback
trust config set ai.trustLocal.enabled true

# Enable GBNF grammar-based function calling
trust config set ai.trustLocal.gbnfFunctions true

🌐 Cloud Models (Fallback)

Best for: Maximum performance, latest capabilities, when local resources are limited

Cloud integration provides access to the most advanced models but requires internet connectivity.

Cloud Setup:

# Enable cloud fallback
trust config set ai.cloud.enabled true

# Set cloud provider
trust config set ai.cloud.provider google  # or 'openai', 'anthropic'

# Configure authentication (see existing auth docs)
trust auth login --provider google

βš™οΈ Backend Configuration & Preferences

Trust CLI automatically tries backends in order: Ollama → HuggingFace → Cloud

View Current Configuration:

# Check current backend status
trust config show

# See which backend is active
trust status

# View AI configuration
trust config get ai

Customize Backend Order:

# Change fallback order
trust config set ai.fallbackOrder "ollama,huggingface,cloud"

# Disable fallback (use only preferred backend)
trust config set ai.enableFallback false

# Set preferred backend
trust config set ai.preferredBackend ollama

Backend-Specific Settings:

# Ollama settings
trust config set ai.ollama.defaultModel qwen2.5:1.5b
trust config set ai.ollama.timeout 120000
trust config set ai.ollama.maxToolCalls 3

# HuggingFace local settings
trust config set ai.trustLocal.enabled true
trust config set ai.trustLocal.gbnfFunctions true

# Cloud settings
trust config set ai.cloud.enabled false
trust config set ai.cloud.provider google

🔄 Intelligent Fallback System

Trust CLI automatically selects the best available backend:

  1. 🚀 Ollama: If running on localhost:11434
  2. 🏠 HuggingFace: If GGUF models are downloaded
  3. 🌐 Cloud: If configured and enabled

Fallback Behavior:

# Example fallback scenario:
# 1. Try Ollama (preferred) → Success ✅
trust
🔧 AI Backend Configuration: ollama → huggingface → cloud (fallback: enabled)
✅ Successfully initialized ollama backend
🚀 Using Ollama for content generation

# 2. Try Ollama → Fail → Try HuggingFace → Success ✅
trust
🔧 AI Backend Configuration: ollama → huggingface → cloud (fallback: enabled)
❌ Failed to initialize ollama backend: connection refused
✅ Successfully initialized huggingface backend
🏠 Using HuggingFace models for content generation

📊 Backend Comparison

| Feature | Ollama | HuggingFace | Cloud |
|---------|--------|-------------|-------|
| Setup | Simple | Moderate | Simple |
| Performance | Fast | Medium | Fastest |
| Privacy | Private | Private | Shared |
| Offline | Yes | Yes | No |
| Model Selection | Extensive | Curated | Latest |
| Tool Calling | Native | GBNF | Native |
| Resource Usage | Low | Medium | None |

🎯 Recommended Setups

Quick Start (Ollama):

# Best for most users
curl -fsSL https://ollama.ai/install.sh | sh
ollama serve &
ollama pull qwen2.5:1.5b
trust  # Automatically detects and uses Ollama

Maximum Privacy (HuggingFace Only):

# Disable cloud, use only local models
trust config set ai.cloud.enabled false
trust config set ai.fallbackOrder "ollama,huggingface"
trust model download phi-3.5-mini-instruct

Hybrid Setup (Best of Both):

# Ollama for general use, HuggingFace for sensitive work
ollama pull qwen2.5:7b
trust model download deepseek-r1-distill-7b
trust config set ai.fallbackOrder "ollama,huggingface,cloud"

Enterprise/Security (Local Only):

# Complete local operation, no cloud fallback
trust config set ai.enableFallback false
trust config set ai.preferredBackend huggingface
trust config set ai.cloud.enabled false

🔧 Configuration File Location

All settings are stored in ~/.trustcli/config.json:

{
  "ai": {
    "preferredBackend": "ollama",
    "fallbackOrder": ["ollama", "huggingface", "cloud"],
    "enableFallback": true,
    "ollama": {
      "baseUrl": "http://localhost:11434",
      "defaultModel": "qwen2.5:1.5b",
      "timeout": 120000,
      "maxToolCalls": 3
    },
    "trustLocal": {
      "enabled": true,
      "gbnfFunctions": true
    },
    "cloud": {
      "enabled": false,
      "provider": "google"
    }
  }
}
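For a quick look at a single setting without going through the CLI, plain grep works on this file. A self-contained sketch (it builds a throwaway copy of the config so nothing in ~/.trustcli is touched; `trust config get ai` is the supported route):

```shell
# Create a throwaway config with the same shape as ~/.trustcli/config.json.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{ "ai": { "preferredBackend": "ollama" } }
EOF

# Extract the preferredBackend key/value pair.
grep -o '"preferredBackend": *"[^"]*"' "$cfg"

rm -f "$cfg"
```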

This multi-model architecture gives you complete flexibility while maintaining the privacy-first principles of Trust CLI!

💡 Examples

Interactive Mode

Start Trust CLI and have a conversation:

trust
> How can I optimize this Python function for better performance?

Direct Prompts

Run one-off queries:

trust -p "Explain the concept of quantum computing in simple terms"

File Analysis

Analyze code or documents:

trust -p "Review this code for security vulnerabilities" < app.js

πŸ“ Working with Files

Trust CLI provides powerful file operations for creating, reading, and modifying files. Understanding how to prompt for file operations is key to effective usage.

🎯 Creating Files: Prompt Strategies

The key to getting Trust CLI to create files is being explicit about filesystem operations:

✅ Successful File Creation Prompts:

Interactive Mode (Recommended):

trust
> Generate Python code to analyze data.csv with pandas
[Model shows code]
> Save that code to a file named analyze.py
[Model creates the file]

Command Line - Explicit Filesystem Language:

trust -p "Save to disk a Python script named analyze.py that reads data.csv using pandas"
trust -p "Create a new file called config.json containing these settings: {...}"
trust -p "Write to the filesystem: a shell script named deploy.sh that builds and deploys the app"

Advanced File Creation Commands:

# Method 1: Direct tool reference (most explicit)
trust -p "Use the Write tool to create analyze.py with pandas code for BankChurners.csv"

# Method 2: Filesystem-specific verbs (clear intent)
trust -p "Save the following to disk as analyze.py: Python code that reads CSV files"

# Method 3: Explicit file path (absolute clarity)
trust -p "Create /home/user/analyze.py file containing Python pandas code"

When to use each method:

  • Method 1 (Tool reference): When you need guaranteed file creation and the model isn't responding to other prompts
  • Method 2 (Filesystem verbs): Best balance of clarity and natural language - recommended for most use cases
  • Method 3 (Explicit paths): When working with specific directories or when relative paths might be ambiguous

Two-Step Instructions:

trust -p "I need you to: 1) Generate Python code for CSV analysis, and 2) Save it to analyze.py"

❌ Common Prompts That Only Show Code:

trust -p "Create a Python script for data analysis"     # Too ambiguous
trust -p "Write Python code to read CSV files"          # Sounds like composition
trust -p "Generate a data analysis program"             # No file operation implied

📖 Reading and Analyzing Files

Trust CLI can read and analyze existing files in your directory:

# Interactive mode
trust
> Read the file data.csv and show me the first few rows
> Analyze the structure of config.json and suggest improvements
> Compare the data in sales_q1.csv and sales_q2.csv

# Command line
trust -p "Read requirements.txt and explain what each dependency is for"
trust -p "Analyze app.py for potential security vulnerabilities"
trust -p "Find all TODO comments in the files in this directory"

🔧 Modifying Existing Files

# Interactive mode
trust
> Read analyze.py and add error handling to the CSV reading function
> Update the config.json file to include the new database settings

# Command line
trust -p "Add logging statements to the main function in app.py"
trust -p "Update requirements.txt to include pandas version 2.0"

💡 Best Practices for File Operations

  1. Be Explicit About Intent:

    • ✅ "Save to disk" / "Write to filesystem" / "Create a file"
    • ❌ "Create" / "Write" / "Generate" (ambiguous)
  2. Use Two-Step Instructions:

    • First: Generate/create the content
    • Second: Save/write it to a specific file
  3. Specify File Paths Clearly:

    trust -p "Create ./scripts/backup.sh with a bash script that backs up the database"
  4. Interactive Mode for Complex Operations:

    • Better context preservation
    • Natural conversation flow
    • Easier to iterate and refine

πŸ—‚οΈ File System Navigation

Trust CLI understands your current directory and can work with relative paths:

trust -p "List all Python files in this directory and summarize their purpose"
trust -p "Find all CSV files and create a data processing script for each one"
trust -p "Read the README.md and create a summary document"

πŸ“ Common File Creation Patterns

Configuration Files:

trust -p "Save a package.json file for a Node.js project with these dependencies: express, axios"

Scripts and Automation:

trust -p "Create deploy.sh that builds the project and uploads to the server"

Data Analysis:

trust -p "Generate analyze_sales.py that reads sales.csv and creates monthly revenue charts"

Documentation:

trust -p "Create API_DOCS.md documenting the REST endpoints in server.js"

The key insight: Trust CLI models interpret "create/write code" as showing you code, but "save/write to disk/create a file" as filesystem operations!

πŸ› οΈ Available Tools Reference

Trust CLI provides a comprehensive set of tools that models can use to interact with your system. Understanding these tools helps you craft better prompts and understand what's possible.

πŸ“ File Operations

| Tool | Purpose | Example Usage |
|------|---------|---------------|
| read_file | Read file contents with optional line ranges | "Read the config.json file" |
| write_file | Create new files or overwrite existing ones | "Save this code to app.py" |
| edit | Make targeted edits to existing files | "Add error handling to line 50 in server.js" |
| ls | List directory contents | "Show me all files in the src directory" |

🔍 Search & Discovery

| Tool | Purpose | Example Usage |
|------|---------|---------------|
| grep | Search file contents using regex patterns | "Find all TODO comments in Python files" |
| glob | Find files by name patterns | "List all .env files in the project" |
| read_many_files | Read multiple files efficiently | "Read all configuration files" |

🌐 Web & Network

| Tool | Purpose | Example Usage |
|------|---------|---------------|
| web_fetch | Fetch and analyze web pages | "Get the latest documentation from this URL" |
| web_search | Search the web for information | "Search for Node.js best practices" |

💻 System Operations

| Tool | Purpose | Example Usage |
|------|---------|---------------|
| shell | Execute shell commands | "Run npm install and show the output" |

🧠 Memory & Context

| Tool | Purpose | Example Usage |
|------|---------|---------------|
| memory_tool | Manage conversation memory and context | Automatically used for context management |

🔌 Extensibility

| Tool | Purpose | Example Usage |
|------|---------|---------------|
| mcp_tool | Interface with Model Context Protocol servers | Custom integrations and extensions |

🎯 How to Use Tools Effectively

Explicit Tool References:

# Direct tool usage (most reliable)
trust -p "Use the read_file tool to show me the contents of package.json"
trust -p "Use the write_file tool to create a new script called deploy.sh"

Natural Language (Trust CLI interprets intent):

# Models understand these and select appropriate tools
trust -p "Read the README file and summarize the installation steps"
trust -p "Search for all functions named 'authenticate' in the codebase"
trust -p "Create a backup script for the database"

Interactive Mode Tool Usage:

trust
> Read the main configuration file
> Now edit it to add the new database settings
> Save the changes and show me a diff

📋 Tool Capabilities & Limitations

File Operations:

  • ✅ Read files of any size (with chunking for large files)
  • ✅ Create new files with any content
  • ✅ Edit existing files with precise line-by-line changes
  • ✅ Handle binary files (images, etc.) appropriately
  • ⚠️ Some operations may require confirmation in interactive mode

Search Operations:

  • ✅ Fast regex search across multiple files
  • ✅ Glob pattern matching for file discovery
  • ✅ Context-aware search results
  • ⚠️ Large repositories may have performance implications

Shell Operations:

  • ✅ Full bash command execution
  • ✅ Environment variable access
  • ✅ Background process support
  • ⚠️ Commands require confirmation for security
  • ⚠️ Destructive operations may be blocked

Web Operations:

  • ✅ Fetch and analyze web content
  • ✅ Search engine integration
  • ✅ Markdown conversion of HTML content
  • ⚠️ Respects robots.txt and rate limits

💡 Tool Usage Tips

  1. Be Specific About File Paths:

    trust -p "Read ./src/config/database.js"  # Clear path
  2. Combine Multiple Operations:

    trust -p "Find all .js files, read the main ones, and create a project overview"
  3. Use Interactive Mode for Complex Workflows:

    trust
    > Search for authentication functions
    > Read the main auth file
    > Add rate limiting to the login function
    > Test the changes
  4. Leverage Tool Combinations:

    trust -p "Use grep to find API endpoints, then read those files and document the API"

These tools make Trust CLI powerful for development workflows, code analysis, documentation, and automation tasks!

🎯 Solving Local AI Challenges

Trust CLI directly addresses the core challenges of local AI deployment:

Performance Gap Solution

  • Smart Model Recommendations: Automatic task-optimized model selection
  • Hardware-Aware Optimization: Real-time performance tuning based on your system
  • Performance Monitoring: Live metrics and optimization suggestions
trust model recommend coding  # Get optimal model for coding tasks
trust perf optimize          # Get personalized performance tips

Memory Management Solution

  • Intelligent Model Swapping: RAM-aware model switching with validation
  • Quantization Optimization: Automatic selection of optimal compression levels
  • Resource Monitoring: Real-time memory usage tracking and warnings
System RAM: 16GB | Available: 8GB | Recommended: phi-3.5-mini-instruct (4GB)

Model Compatibility Solution

  • Universal Interface: Unified API across all model types (Llama, Phi, Qwen, etc.)
  • Auto-Detection System: Automatic model type and format recognition
  • Standardized Configuration: Consistent model handling and optimization

Trust & Verification Innovation

  • Cryptographic Integrity: SHA-256 hash verification for all models
  • Community Trust Scoring: Transparent model quality ratings
  • Provenance Tracking: Complete model download and verification history

πŸ—οΈ Architecture

Trust CLI is built on modern, secure foundations:

  • node-llama-cpp - High-performance C++ inference engine
  • GGUF Format - Efficient quantized model format
  • SHA-256 Verification - Cryptographic integrity checking
  • TypeScript - Type-safe, maintainable codebase
  • React + Ink - Beautiful terminal UI components

Guiding Principles

The development of the Trust CLI is guided by the core principles and philosophies of The Open Assurance Collective. Key architectural and user experience (UX) decisions are formally documented in the trust-framework repository.

Before making a significant change, all contributors should review the core architectural philosophies documented there.

🔒 Privacy & Security

Trust CLI is designed with privacy as the top priority:

  1. No Network Calls - Except for initial model downloads from Hugging Face
  2. Local Storage Only - All data stays on your machine
  3. Cryptographic Verification - All models are SHA-256 verified
  4. Open Source - Fully auditable codebase
  5. No Telemetry - We don't track anything

🀝 Contributing

We welcome contributions! Please see our Contributing Guide for details.

Development Setup

# Install dependencies
npm install

# Run in development mode
npm start

# Run tests
npm test

# Run end-to-end tests
node test-end-to-end.js

πŸ“ License

This project is licensed under the Apache License 2.0 - See LICENSE file for details.

πŸ™ Attribution & Acknowledgments

Trust CLI is based on Google's Gemini CLI, modified to use local GGUF models instead of cloud APIs for complete privacy and local-first AI workflows.

Original Work:

  • Source: Google Gemini CLI
  • Copyright: 2025 Google LLC
  • License: Apache License 2.0
  • Attribution: Original Gemini CLI code is Copyright Google LLC

Derivative Work:

  • Trust CLI: Copyright 2025 Audit Risk Media LLC
  • Modifications: Complete replacement of cloud APIs with local model inference, addition of privacy features, model management, performance monitoring, and comprehensive test suite
  • License: Apache License 2.0 (same as original)

For detailed attribution and list of modifications, see NOTICE file.

We're grateful for the excellent foundation provided by Google's original Gemini CLI project.

πŸ› οΈ Troubleshooting

"OAuth/Google authentication required" Error

If you're getting OAuth authentication errors when expecting local model inference, you may have legacy settings from an earlier version. Fix this by updating your authentication mode:

# Quick fix - update settings to use local models
echo '{
  "selectedAuthType": "huggingface",
  "theme": "GitHub"
}' > ~/.gemini/settings.json

Alternative manual fix:

# Edit your settings file
nano ~/.gemini/settings.json

# Change "oauth-personal" to "huggingface"
# Change: "selectedAuthType": "oauth-personal"
# To: "selectedAuthType": "huggingface"

After updating, restart trust-cli and it should use local models instead of trying to authenticate with Google.

"No model loaded" Error

This usually means you need to download a model first:

# Download the lightweight model
node bundle/trust.js model download qwen2.5-1.5b-instruct

# Switch to the downloaded model
node bundle/trust.js model switch qwen2.5-1.5b-instruct

# Verify the model is loaded
node bundle/trust.js model list

Models Not Showing After Git Pull/Rebuild

If the model list is missing some of the 10 supported models after pulling updates, this is usually a caching issue:

Step 1: Clear Trust CLI Cache

# Clear all trust-cli cached data
rm -rf ~/.trustcli

# Also clear legacy cache if it exists
rm -rf ~/.gemini

Step 2: Force Clean Rebuild (if cache clearing doesn't work)

# Force clean everything
rm -rf node_modules
rm -rf packages/*/node_modules
rm -rf packages/*/dist
rm -rf bundle
rm -f package-lock.json
rm -f packages/*/package-lock.json

# Fresh install and build
npm install
npm run build
npm run bundle

Step 3: Update Your Alias

# Test with direct path first
node bundle/trust.js model list

# Update your alias to current directory
alias trust="node $(pwd)/bundle/trust.js"

# Make permanent by adding to ~/.zshrc or ~/.bashrc
echo 'alias trust="node /full/path/to/your/trust-cli/bundle/trust.js"' >> ~/.zshrc
source ~/.zshrc

Step 4: Verify All 10 Models Appear

You should see all models across 6 different formats:

  • Qwen: qwen2.5-1.5b-instruct (1.5B, 2GB)
  • Gemma: gemma-2-2b-instruct (2.6B, 3GB), gemma-2-9b-instruct (9B, 8GB)
  • Phi: phi-3.5-mini-instruct (3.8B, 3GB), phi-3.5-mini-uncensored (3.8B, 3GB)
  • Llama: llama-3.2-3b-instruct (3B, 4GB), llama-3.1-8b-instruct (8B, 8GB)
  • Mistral: mistral-7b-instruct (7B, 6GB), mistral-nemo-12b-instruct (12B, 10GB)
  • DeepSeek: deepseek-r1-distill-7b (7.6B, 6GB)

TypeScript Build Errors on Node.js v24+

If you encounter TypeScript build errors on newer Node.js versions:

# Pull the latest fixes
git pull origin main

# Rebuild
npm run build
npm run bundle

🛟 Support


Trust: An Open System for Modern Assurance

Built with ❤️ for privacy, transparency, and local-first AI.
