The official local-first AI command-line engine for The Open Assurance Collective.
The Trust CLI is the flagship tool in the Trust Tool Suite, published by The Open Assurance Collective. It is a privacy-first AI assistant that runs entirely on your local machine, giving you complete control over your data and AI workflows.
This tool is the "how" of the Collective's mission: it takes the principles, modules, and patterns from the trust-framework and makes them actionable, automating workflows and bringing modern assurance practices to the command line. It is designed for auditors, GRC professionals, security engineers, and any practitioner who needs to build trust in complex systems.
- 100% Local Execution - All AI inference runs on your hardware
- Privacy First - Your data never leaves your machine
- Multi-Format Support - 10 models across 6 formats (Llama, Phi, Qwen, Mistral, Gemma, DeepSeek)
- Smart Recommendations - Task-specific model suggestions with RAM-aware filtering
- Model Management - Download, verify, and manage GGUF models locally
- Performance Monitoring - Real-time system metrics and optimization
- Hardware-Aware - Automatic optimization based on your system specs
- Cryptographic Security - SHA-256 verification for all models
- Transparent - Open source with full audit trail
For the best experience, we strongly recommend installing Ollama, which provides the fastest and easiest way to run local AI models.
Download Ollama for macOS, Windows, and Linux
After installing, you can pull models directly from the command line (e.g., ollama pull qwen2.5:1.5b), and the Trust CLI will automatically detect and use them when available.
# Clone the repository
git clone https://github.com/audit-brands/trust-cli.git
cd trust-cli
# Install dependencies
npm install
# Build the project
npm run build
# Bundle for production use
npm run bundle
# Run Trust CLI
node bundle/trust.js

IMPORTANT: All trust commands are run from your regular terminal (not from within the trust CLI interface).
Navigate to your trust-cli directory and run commands using the following format:
- Check available models:

  # From your regular terminal in the trust-cli directory:
  node bundle/trust.js model list

- Download a model (start with the lightweight one):

  node bundle/trust.js model download qwen2.5-1.5b-instruct

- Verify model integrity:

  node bundle/trust.js model verify qwen2.5-1.5b-instruct

- Switch to the downloaded model:

  node bundle/trust.js model switch qwen2.5-1.5b-instruct

- Start interactive mode (optional):

  node bundle/trust.js
To avoid typing node bundle/trust.js every time, create an alias:
# Option 1: Using full path (recommended for permanent setup)
# Add this to your ~/.zshrc (macOS) or ~/.bashrc (Linux):
alias trust="node /full/path/to/your/trust-cli/bundle/trust.js"
# Option 2: Using current directory (works from trust-cli folder)
# Add this to your ~/.zshrc file:
alias trust="node $(pwd)/bundle/trust.js"
# Reload your shell configuration:
source ~/.zshrc # or source ~/.bashrc
# Now you can run commands simply as:
trust model list
trust model download qwen2.5-1.5b-instruct
trust model recommend coding

Trust CLI provides comprehensive model management capabilities:
node bundle/trust.js model list
# or with alias: trust model list

Shows all available models with their status, RAM requirements, and trust scores.
node bundle/trust.js model download <model-name>
# or with alias: trust model download <model-name>

Downloads models from Hugging Face with real-time progress tracking:
- Progress percentage, speed, and ETA
- Automatic integrity verification
- Resume support for interrupted downloads
node bundle/trust.js model verify <model-name>
# or verify all models:
node bundle/trust.js model verify
# or with alias: trust model verify

Performs SHA-256 hash verification to ensure model integrity and security.
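Conceptually, the verify step boils down to comparing a freshly computed SHA-256 digest against a known-good value. The sketch below illustrates the idea with `sha256sum`; the file name and demo file are hypothetical stand-ins, not Trust CLI internals.

```shell
# Minimal sketch of a SHA-256 integrity check (not the CLI's actual code).
verify_sha256() {
  local file="$1" expected="$2" actual
  actual=$(sha256sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "OK: $file"
  else
    echo "MISMATCH: $file" >&2
    return 1
  fi
}

# Demo against a throwaway file standing in for a downloaded model:
printf 'hello' > /tmp/demo-model.gguf
expected=$(sha256sum /tmp/demo-model.gguf | awk '{print $1}')
verify_sha256 /tmp/demo-model.gguf "$expected"
```

A tampered file would change the digest, so the same function returns a non-zero status and prints `MISMATCH` for any wrong hash.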
node bundle/trust.js model switch <model-name>
# or with alias: trust model switch <model-name>

Changes the active model for inference operations.
node bundle/trust.js model recommend <task-type>
# or with alias: trust model recommend <task-type>

Get intelligent model recommendations based on your task and hardware:
- coding: Phi models optimized for programming tasks
- multilingual: Mistral models for international/translation work
- reasoning: DeepSeek models for complex analysis
- quick: Qwen models for fast responses
- context: Large context models for document processing
- quality: Highest trust score models within RAM limits
trust model delete <model-name>

Remove downloaded models to free up disk space.
Trust CLI includes comprehensive performance monitoring tools:
trust perf status

Shows current system status including CPU, memory, and heap usage.

trust perf report

Comprehensive system performance report with:
- System resources (CPU, RAM, load averages)
- Node.js memory details
- Inference performance history
- Hardware specifications
trust perf watch

Live performance monitoring with updates every second. Press Ctrl+C to stop.

trust perf optimize

Get personalized recommendations for optimal model settings based on your hardware:
- Recommended RAM allocation
- Optimal context sizes
- Best quantization methods
- Model suggestions for your system
Trust CLI offers three privacy modes for different security requirements:
trust privacy list # View all available privacy modes
trust privacy status # Show current privacy configuration
trust privacy switch strict # Switch to strict privacy mode
trust privacy info moderate # Get detailed info about moderate mode

Available Modes:
- Strict: Maximum privacy - no external connections, mandatory verification
- Moderate: Balanced privacy and functionality - recommended for most users
- Open: Full functionality for development and testing
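One way to picture the three modes is as a mapping from mode name to policy flags. The sketch below is a hypothetical illustration based purely on the descriptions above; the flag names (`network`, `verification`) are invented for the example and are not Trust CLI's actual configuration keys.

```shell
# Hypothetical mode-to-policy mapping (illustrative only, not CLI internals).
privacy_policy() {
  case "$1" in
    strict)   echo "network=off verification=mandatory" ;;
    moderate) echo "network=limited verification=on" ;;
    open)     echo "network=on verification=optional" ;;
    *)        echo "unknown mode: $1" >&2; return 1 ;;
  esac
}

privacy_policy strict
```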
trust chat # Start interactive chat with streaming
trust chat --model phi-3.5-mini-instruct # Use specific model

trust analyze ./src # Analyze entire codebase
trust context --files "*.ts" --importance high # Add specific files

trust git status # Analyze repository status
trust git review # AI-powered code review
trust git suggest-commit # Generate commit messages

trust benchmark run # Run comprehensive benchmark suite
trust benchmark quick # Quick performance test
trust benchmark compare # Compare multiple models

trust test model <name> # Test specific model performance
trust test inference # Test inference pipeline
trust test streaming # Test streaming capabilities

trust help # Main help menu
trust help models # Model management help
trust help performance # Performance monitoring help
trust help privacy # Privacy and security help
trust help search <query> # Search help topics

trust ui # Launch advanced terminal UI
trust ui models # Interactive model manager
trust ui benchmark # Live benchmarking interface

Trust CLI now supports 10 models across 6 different model formats with intelligent task-specific recommendations:
Lightweight Models:

| Model | Format | Parameters | RAM | Context | Description | Trust Score |
|---|---|---|---|---|---|---|
| qwen2.5-1.5b-instruct | Qwen | 1.5B | 2GB | 4K | Ultra-fast for quick responses | 8.8/10 |
| gemma-2-2b-instruct | Gemma | 2.6B | 3GB | 8K | Compact Google model with larger context | 8.9/10 |
| phi-3.5-mini-instruct | Phi | 3.8B | 3GB | 4K | Optimized for coding and technical tasks | 9.5/10 |
| phi-3.5-mini-uncensored | Phi | 3.8B | 3GB | 4K | Uncensored for risk analysis & auditing | 9.3/10 |
| llama-3.2-3b-instruct | Llama | 3B | 4GB | 4K | Balanced performance for general use | 9.2/10 |
Mid-Range Models:

| Model | Format | Parameters | RAM | Context | Description | Trust Score |
|---|---|---|---|---|---|---|
| mistral-7b-instruct | Mistral | 7B | 6GB | 8K | Efficient multilingual model | 9.1/10 |
| deepseek-r1-distill-7b | DeepSeek | 7.6B | 6GB | 4K | Advanced reasoning for complex analysis | 9.6/10 |
| llama-3.1-8b-instruct | Llama | 8B | 8GB | 4K | High-quality responses for complex tasks | 9.7/10 |
| gemma-2-9b-instruct | Gemma | 9B | 8GB | 8K | Advanced Google model with strong performance | 9.3/10 |
Large Context Models:

| Model | Format | Parameters | RAM | Context | Description | Trust Score |
|---|---|---|---|---|---|---|
| mistral-nemo-12b-instruct | Mistral | 12B | 10GB | 128K | Massive context for document analysis | 9.4/10 |
All models can be downloaded directly without authentication:
Lightweight Models (Great for getting started):
trust model download qwen2.5-1.5b-instruct # 1.8GB - Ultra-fast responses
trust model download gemma-2-2b-instruct # 1.6GB - Google's compact model
trust model download phi-3.5-mini-instruct # 2.4GB - Excellent for coding
trust model download llama-3.2-3b-instruct # 1.9GB - Balanced general use

Mid-Range Models (Best performance/resource balance):
trust model download mistral-7b-instruct # 4.4GB - Great for multilingual
trust model download deepseek-r1-distill-7b # 4.5GB - Advanced reasoning & analysis
trust model download llama-3.1-8b-instruct # 4.9GB - Highest quality responses
trust model download gemma-2-9b-instruct # 5.4GB - Advanced Google model

Large Context Models (For document processing):

trust model download mistral-nemo-12b-instruct # 6.9GB - 128K context window

Specialized Models:

trust model download phi-3.5-mini-uncensored # 2.4GB - Risk analysis & auditing

Use Trust CLI's intelligent recommendation system to get the perfect model for your task:
trust model recommend coding # → Recommends Phi models
trust model recommend multilingual # → Recommends Mistral models
trust model recommend reasoning # → Recommends DeepSeek models
trust model recommend quick # → Recommends Qwen models
trust model recommend context # → Recommends Mistral Nemo (128K context)
trust model recommend --ram 16 # → Shows models that fit in 16GB RAM

Getting Started (2-4GB RAM):
- qwen2.5-1.5b-instruct: Start here - fastest responses, minimal resources
- gemma-2-2b-instruct: More capable, 8K context window
- phi-3.5-mini-instruct: Best for coding and technical work

Professional Work (6-8GB RAM):

- mistral-7b-instruct: Multilingual projects, efficient performance
- deepseek-r1-distill-7b: Complex analysis, step-by-step reasoning
- llama-3.1-8b-instruct: Highest quality general responses

Document & Research Work (10GB+ RAM):

- mistral-nemo-12b-instruct: 128K context for processing entire documents

Security & Risk Analysis:

- phi-3.5-mini-uncensored: For auditors who need unfiltered model responses
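The RAM-aware side of `trust model recommend --ram` can be pictured as a simple filter over the model table above. This sketch is an illustration of the filtering idea, not the CLI's actual implementation; the model-to-RAM pairs are taken from the tables earlier in this README.

```shell
# Sketch: keep only models whose RAM requirement fits the available budget.
available_ram_gb=6
printf '%s\n' \
  "qwen2.5-1.5b-instruct 2" \
  "phi-3.5-mini-instruct 3" \
  "llama-3.2-3b-instruct 4" \
  "mistral-7b-instruct 6" \
  "llama-3.1-8b-instruct 8" \
  "mistral-nemo-12b-instruct 10" |
awk -v ram="$available_ram_gb" '$2 <= ram {print $1 " (" $2 "GB)"}'
```

With a 6GB budget, only the first four models survive the filter; raising the budget to 16 would admit the whole list.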
All models use community GGUF conversions that are publicly accessible.
Trust CLI supports Hugging Face authentication for future gated models:
# Set up authentication (for gated models in the future)
trust auth login --hf-token YOUR_TOKEN
# Check authentication status
trust auth status
# Remove authentication
trust auth logout

Currently, all included models are publicly accessible without authentication.
Trust CLI stores its configuration and models in ~/.trustcli/:
- models/ - Downloaded model files
- models.json - Model configurations and metadata
- config.json - CLI settings

Environment variables:

- TRUST_MODEL - Override default model selection
- TRUST_MODELS_DIR - Custom models directory location
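A script wrapping Trust CLI might consume these overrides with standard shell default-expansion, falling back to the `~/.trustcli` layout when the variables are unset. This is a usage sketch of the environment variables documented above, not code from the CLI itself.

```shell
# Resolve settings with the documented environment overrides, falling back
# to the defaults described above when the variables are unset.
models_dir="${TRUST_MODELS_DIR:-$HOME/.trustcli/models}"
model="${TRUST_MODEL:-qwen2.5-1.5b-instruct}"

echo "models dir: $models_dir"
echo "model:      $model"
```

Setting `TRUST_MODEL=phi-3.5-mini-instruct` before invoking a command would make the second expansion pick up the override instead of the default.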
Trust CLI now supports three different AI backends with intelligent fallback to give you complete choice over your AI inference approach:
Best for: Fast setup, excellent performance, easy model management
Ollama provides the optimal balance of performance, simplicity, and privacy. It uses OpenAI-compatible APIs with native tool calling support.
# 1. Install Ollama (Linux/macOS)
curl -fsSL https://ollama.ai/install.sh | sh
# 2. Start Ollama service
ollama serve
# 3. Trust CLI will automatically detect and use Ollama
trust
> hello world

# List available models
ollama list
# Pull recommended models
ollama pull qwen2.5:1.5b # Lightweight (1.5B, ~1GB)
ollama pull qwen2.5:7b # Balanced (7B, ~4GB)
ollama pull llama3.2:3b # Alternative (3B, ~2GB)
# Trust CLI will automatically use the best available model

# Set preferred Ollama model
trust config set ai.ollama.defaultModel qwen2.5:7b
# Adjust timeout for slower hardware
trust config set ai.ollama.timeout 180000 # 3 minutes
# Set custom Ollama URL (if running remotely)
trust config set ai.ollama.baseUrl http://your-server:11434

Best for: Complete offline operation, fine-grained control, zero dependencies
This is the original Trust CLI approach using locally downloaded GGUF models.
# Download models directly
trust model download qwen2.5-1.5b-instruct
trust model download phi-3.5-mini-instruct
# Switch to downloaded model
trust model switch qwen2.5-1.5b-instruct
# Trust CLI will use HuggingFace models if Ollama isn't available

# Enable/disable HuggingFace local fallback
trust config set ai.trustLocal.enabled true
# Enable GBNF grammar-based function calling
trust config set ai.trustLocal.gbnfFunctions true

Best for: Maximum performance, latest capabilities, when local resources are limited
Cloud integration provides access to the most advanced models but requires internet connectivity.
# Enable cloud fallback
trust config set ai.cloud.enabled true
# Set cloud provider
trust config set ai.cloud.provider google # or 'openai', 'anthropic'
# Configure authentication (see existing auth docs)
trust auth login --provider google

Trust CLI automatically tries backends in order: Ollama → HuggingFace → Cloud
# Check current backend status
trust config show
# See which backend is active
trust status
# View AI configuration
trust config get ai

# Change fallback order
trust config set ai.fallbackOrder "ollama,huggingface,cloud"
# Disable fallback (use only preferred backend)
trust config set ai.enableFallback false
# Set preferred backend
trust config set ai.preferredBackend ollama

# Ollama settings
trust config set ai.ollama.defaultModel qwen2.5:1.5b
trust config set ai.ollama.timeout 120000
trust config set ai.ollama.maxToolCalls 3
# HuggingFace local settings
trust config set ai.trustLocal.enabled true
trust config set ai.trustLocal.gbnfFunctions true
# Cloud settings
trust config set ai.cloud.enabled false
trust config set ai.cloud.provider google

Trust CLI automatically selects the best available backend:
- Ollama: If running on localhost:11434
- HuggingFace: If GGUF models are downloaded
- Cloud: If configured and enabled
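The Ollama check can be done with a quick HTTP probe: Ollama exposes a model-listing endpoint at `/api/tags` on its default port. The sketch below shows the detection idea; it is not Trust CLI's actual probe code.

```shell
# Probe the default Ollama port; /api/tags is Ollama's model-listing endpoint.
# Prints which backend a fallback loop would pick on this machine.
if curl -fsS --max-time 2 http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "backend: ollama"
else
  echo "backend: fallback"
fi
```

On a machine with `ollama serve` running this prints `backend: ollama`; otherwise the probe times out or is refused and the fallback path is taken.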
# Example fallback scenario:
# 1. Try Ollama (preferred) → Success ✅
trust
AI Backend Configuration: ollama → huggingface → cloud (fallback: enabled)
✅ Successfully initialized ollama backend
Using Ollama for content generation

# 2. Try Ollama → Fail → Try HuggingFace → Success ✅
trust
AI Backend Configuration: ollama → huggingface → cloud (fallback: enabled)
❌ Failed to initialize ollama backend: connection refused
✅ Successfully initialized huggingface backend
Using HuggingFace models for content generation

| Feature | Ollama | HuggingFace | Cloud |
|---|---|---|---|
| Setup | Simple | Moderate | Simple |
| Performance | Fast | Medium | Fastest |
| Privacy | Private | Private | Shared |
| Offline | Yes | Yes | No |
| Model Selection | Extensive | Curated | Latest |
| Tool Calling | Native | GBNF | Native |
| Resource Usage | Low | Medium | None |
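The fallback behavior described above amounts to walking an ordered list and taking the first backend whose probe succeeds. This sketch uses stand-in probe functions (hard-coded to simulate Ollama being down) to make the selection loop concrete; it is illustrative, not the CLI's implementation.

```shell
# Stand-in probes: pretend Ollama is down and HuggingFace models exist.
try_ollama()      { return 1; }
try_huggingface() { return 0; }
try_cloud()       { return 0; }

# Walk the fallback order and report the first backend that responds.
select_backend() {
  for backend in "$@"; do
    if "try_$backend"; then
      echo "$backend"
      return 0
    fi
  done
  echo "no backend available" >&2
  return 1
}

select_backend ollama huggingface cloud
```

With these stand-ins the loop skips `ollama` and settles on `huggingface`, mirroring the second scenario in the transcript above.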
# Best for most users
curl -fsSL https://ollama.ai/install.sh | sh
ollama serve &
ollama pull qwen2.5:1.5b
trust # Automatically detects and uses Ollama

# Disable cloud, use only local models
trust config set ai.cloud.enabled false
trust config set ai.fallbackOrder "ollama,huggingface"
trust model download phi-3.5-mini-instruct

# Ollama for general use, HuggingFace for sensitive work
ollama pull qwen2.5:7b
trust model download deepseek-r1-distill-7b
trust config set ai.fallbackOrder "ollama,huggingface,cloud"

# Complete local operation, no cloud fallback
trust config set ai.enableFallback false
trust config set ai.preferredBackend huggingface
trust config set ai.cloud.enabled false

All settings are stored in ~/.trustcli/config.json:
{
"ai": {
"preferredBackend": "ollama",
"fallbackOrder": ["ollama", "huggingface", "cloud"],
"enableFallback": true,
"ollama": {
"baseUrl": "http://localhost:11434",
"defaultModel": "qwen2.5:1.5b",
"timeout": 120000,
"maxToolCalls": 3
},
"trustLocal": {
"enabled": true,
"gbnfFunctions": true
},
"cloud": {
"enabled": false,
"provider": "google"
}
}
}

This multi-model architecture gives you complete flexibility while maintaining the privacy-first principles of Trust CLI!
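If you want to inspect a single setting from the command line without the CLI, a plain `sed` extraction works on the JSON shown above. The sketch below edits a throwaway copy rather than the real `~/.trustcli/config.json`.

```shell
# Pull preferredBackend out of a config file with sed (no jq required).
# A throwaway copy of the config shown above stands in for the real file.
cfg=/tmp/trustcli-config.json
cat > "$cfg" <<'EOF'
{ "ai": { "preferredBackend": "ollama", "enableFallback": true } }
EOF

preferred=$(sed -n 's/.*"preferredBackend": *"\([^"]*\)".*/\1/p' "$cfg")
echo "preferred backend: $preferred"
```

For anything more than a quick peek, a proper JSON tool (or `trust config get ai`) is the safer route, since regex extraction breaks on reformatted JSON.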
Start Trust CLI and have a conversation:
trust
> How can I optimize this Python function for better performance?

Run one-off queries:

trust -p "Explain the concept of quantum computing in simple terms"

Analyze code or documents:

trust -p "Review this code for security vulnerabilities" < app.js

Trust CLI provides powerful file operations for creating, reading, and modifying files. Understanding how to prompt for file operations is key to effective usage.
The key to getting Trust CLI to create files is being explicit about filesystem operations:
Interactive Mode (Recommended):
trust
> Generate Python code to analyze data.csv with pandas
[Model shows code]
> Save that code to a file named analyze.py
[Model creates the file]

Command Line - Explicit Filesystem Language:
trust -p "Save to disk a Python script named analyze.py that reads data.csv using pandas"
trust -p "Create a new file called config.json containing these settings: {...}"
trust -p "Write to the filesystem: a shell script named deploy.sh that builds and deploys the app"

Advanced File Creation Commands:
# Method 1: Direct tool reference (most explicit)
trust -p "Use the Write tool to create analyze.py with pandas code for BankChurners.csv"
# Method 2: Filesystem-specific verbs (clear intent)
trust -p "Save the following to disk as analyze.py: Python code that reads CSV files"
# Method 3: Explicit file path (absolute clarity)
trust -p "Create /home/user/analyze.py file containing Python pandas code"

When to use each method:
- Method 1 (Tool reference): When you need guaranteed file creation and the model isn't responding to other prompts
- Method 2 (Filesystem verbs): Best balance of clarity and natural language - recommended for most use cases
- Method 3 (Explicit paths): When working with specific directories or when relative paths might be ambiguous
Two-Step Instructions:
trust -p "I need you to: 1) Generate Python code for CSV analysis, and 2) Save it to analyze.py"

Ambiguous prompts that may NOT create files:

trust -p "Create a Python script for data analysis" # Too ambiguous
trust -p "Write Python code to read CSV files" # Sounds like composition
trust -p "Generate a data analysis program" # No file operation implied

Trust CLI can read and analyze existing files in your directory:
# Interactive mode
trust
> Read the file data.csv and show me the first few rows
> Analyze the structure of config.json and suggest improvements
> Compare the data in sales_q1.csv and sales_q2.csv
# Command line
trust -p "Read requirements.txt and explain what each dependency is for"
trust -p "Analyze app.py for potential security vulnerabilities"
trust -p "Find all TODO comments in the files in this directory"

# Interactive mode
trust
> Read analyze.py and add error handling to the CSV reading function
> Update the config.json file to include the new database settings
# Command line
trust -p "Add logging statements to the main function in app.py"
trust -p "Update requirements.txt to include pandas version 2.0"

- Be Explicit About Intent:
  - ✅ "Save to disk" / "Write to filesystem" / "Create a file"
  - ❌ "Create" / "Write" / "Generate" (ambiguous)
- Use Two-Step Instructions:
  - First: Generate/create the content
  - Second: Save/write it to a specific file
- Specify File Paths Clearly:
  trust -p "Create ./scripts/backup.sh with a bash script that backs up the database"
- Use Interactive Mode for Complex Operations:
  - Better context preservation
  - Natural conversation flow
  - Easier to iterate and refine
Trust CLI understands your current directory and can work with relative paths:
trust -p "List all Python files in this directory and summarize their purpose"
trust -p "Find all CSV files and create a data processing script for each one"
trust -p "Read the README.md and create a summary document"

Configuration Files:

trust -p "Save a package.json file for a Node.js project with these dependencies: express, axios"

Scripts and Automation:

trust -p "Create deploy.sh that builds the project and uploads to the server"

Data Analysis:

trust -p "Generate analyze_sales.py that reads sales.csv and creates monthly revenue charts"

Documentation:

trust -p "Create API_DOCS.md documenting the REST endpoints in server.js"

The key insight: Trust CLI models interpret "create/write code" as showing you code, but "save/write to disk/create a file" as filesystem operations!
Trust CLI provides a comprehensive set of tools that models can use to interact with your system. Understanding these tools helps you craft better prompts and understand what's possible.
| Tool | Purpose | Example Usage |
|---|---|---|
| `read_file` | Read file contents with optional line ranges | "Read the config.json file" |
| `write_file` | Create new files or overwrite existing ones | "Save this code to app.py" |
| `edit` | Make targeted edits to existing files | "Add error handling to line 50 in server.js" |
| `ls` | List directory contents | "Show me all files in the src directory" |
| Tool | Purpose | Example Usage |
|---|---|---|
| `grep` | Search file contents using regex patterns | "Find all TODO comments in Python files" |
| `glob` | Find files by name patterns | "List all .env files in the project" |
| `read_many_files` | Read multiple files efficiently | "Read all configuration files" |
| Tool | Purpose | Example Usage |
|---|---|---|
| `web_fetch` | Fetch and analyze web pages | "Get the latest documentation from this URL" |
| `web_search` | Search the web for information | "Search for Node.js best practices" |

| Tool | Purpose | Example Usage |
|---|---|---|
| `shell` | Execute shell commands | "Run npm install and show the output" |

| Tool | Purpose | Example Usage |
|---|---|---|
| `memory_tool` | Manage conversation memory and context | Automatically used for context management |

| Tool | Purpose | Example Usage |
|---|---|---|
| `mcp_tool` | Interface with Model Context Protocol servers | Custom integrations and extensions |
Explicit Tool References:
# Direct tool usage (most reliable)
trust -p "Use the read_file tool to show me the contents of package.json"
trust -p "Use the write_file tool to create a new script called deploy.sh"

Natural Language (Trust CLI interprets intent):
# Models understand these and select appropriate tools
trust -p "Read the README file and summarize the installation steps"
trust -p "Search for all functions named 'authenticate' in the codebase"
trust -p "Create a backup script for the database"

Interactive Mode Tool Usage:
trust
> Read the main configuration file
> Now edit it to add the new database settings
> Save the changes and show me a diff

File Operations:

- ✅ Read files of any size (with chunking for large files)
- ✅ Create new files with any content
- ✅ Edit existing files with precise line-by-line changes
- ✅ Handle binary files (images, etc.) appropriately

⚠️ Some operations may require confirmation in interactive mode
Search Operations:
- ✅ Fast regex search across multiple files
- ✅ Glob pattern matching for file discovery
- ✅ Context-aware search results

⚠️ Large repositories may have performance implications
Shell Operations:
- ✅ Full bash command execution
- ✅ Environment variable access
- ✅ Background process support

⚠️ Commands require confirmation for security
⚠️ Destructive operations may be blocked
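The confirm-before-running pattern behind that safeguard looks roughly like the sketch below. It is a generic illustration of the pattern, not Trust CLI's actual confirmation code; the demo pipes the answer in so it runs non-interactively.

```shell
# Generic confirm-then-execute pattern (illustrative, not the CLI's code).
confirm_and_run() {
  local cmd="$1" answer
  printf 'Run "%s"? [y/N] ' "$cmd"
  read -r answer
  case "$answer" in
    y|Y) eval "$cmd" ;;
    *)   echo "skipped" ;;
  esac
}

# Non-interactive demo: pipe the answer in instead of typing it.
echo y | confirm_and_run 'echo hello'
```

Answering anything other than `y` takes the safe branch and prints `skipped`, which is the same default-deny posture the warning above describes.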
Web Operations:
- ✅ Fetch and analyze web content
- ✅ Search engine integration
- ✅ Markdown conversion of HTML content

⚠️ Respects robots.txt and rate limits
- Be Specific About File Paths:
  trust -p "Read ./src/config/database.js" # Clear path
- Combine Multiple Operations:
  trust -p "Find all .js files, read the main ones, and create a project overview"
- Use Interactive Mode for Complex Workflows:
  trust
  > Search for authentication functions
  > Read the main auth file
  > Add rate limiting to the login function
  > Test the changes
- Leverage Tool Combinations:
  trust -p "Use grep to find API endpoints, then read those files and document the API"
These tools make Trust CLI powerful for development workflows, code analysis, documentation, and automation tasks!
Trust CLI directly addresses the core challenges of local AI deployment:
- Smart Model Recommendations: Automatic task-optimized model selection
- Hardware-Aware Optimization: Real-time performance tuning based on your system
- Performance Monitoring: Live metrics and optimization suggestions
trust model recommend coding # Get optimal model for coding tasks
trust perf optimize # Get personalized performance tips

- Intelligent Model Swapping: RAM-aware model switching with validation
- Quantization Optimization: Automatic selection of optimal compression levels
- Resource Monitoring: Real-time memory usage tracking and warnings
System RAM: 16GB | Available: 8GB | Recommended: phi-3.5-mini-instruct (4GB)

- Universal Interface: Unified API across all model types (Llama, Phi, Qwen, etc.)
- Auto-Detection System: Automatic model type and format recognition
- Standardized Configuration: Consistent model handling and optimization
- Cryptographic Integrity: SHA256 hash verification for all models
- Community Trust Scoring: Transparent model quality ratings
- Provenance Tracking: Complete model download and verification history
Trust CLI is built on modern, secure foundations:
- node-llama-cpp - High-performance C++ inference engine
- GGUF Format - Efficient quantized model format
- SHA256 Verification - Cryptographic integrity checking
- TypeScript - Type-safe, maintainable codebase
- React + Ink - Beautiful terminal UI components
The development of the Trust CLI is guided by the core principles and philosophies of The Open Assurance Collective. Key architectural and user experience (UX) decisions are formally documented in the trust-framework repository.
Before making a significant change, all contributors should review our core architectural philosophies:
Trust CLI is designed with privacy as the top priority:
- No Network Calls - Except for initial model downloads from Hugging Face
- Local Storage Only - All data stays on your machine
- Cryptographic Verification - All models are SHA256 verified
- Open Source - Fully auditable codebase
- No Telemetry - We don't track anything
We welcome contributions! Please see our Contributing Guide for details.
# Install dependencies
npm install
# Run in development mode
npm start
# Run tests
npm test
# Run end-to-end tests
node test-end-to-end.js

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Trust CLI is based on Google's Gemini CLI, modified to use local GGUF models instead of cloud APIs for complete privacy and local-first AI workflows.
Original Work:
- Source: Google Gemini CLI
- Copyright: 2025 Google LLC
- License: Apache License 2.0
- Attribution: Original Gemini CLI code is Copyright Google LLC
Derivative Work:
- Trust CLI: Copyright 2025 Audit Risk Media LLC
- Modifications: Complete replacement of cloud APIs with local model inference, addition of privacy features, model management, performance monitoring, and comprehensive test suite
- License: Apache License 2.0 (same as original)
For detailed attribution and list of modifications, see NOTICE file.
We're grateful for the excellent foundation provided by Google's original Gemini CLI project.
If you're getting OAuth authentication errors when expecting local model inference, you may have legacy settings from an earlier version. Fix this by updating your authentication mode:
# Quick fix - update settings to use local models
echo '{
"selectedAuthType": "huggingface",
"theme": "GitHub"
}' > ~/.gemini/settings.json

Alternative manual fix:
# Edit your settings file
nano ~/.gemini/settings.json
# Change "oauth-personal" to "huggingface"
# Change: "selectedAuthType": "oauth-personal"
# To: "selectedAuthType": "huggingface"

After updating, restart trust-cli and it should use local models instead of trying to authenticate with Google.
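The manual edit can also be scripted with `sed`. The sketch below edits a throwaway copy; for the real fix, point the path at `~/.gemini/settings.json`. The `-i.bak` form keeps a backup next to the edited file and works with both GNU and BSD sed.

```shell
# Scripted version of the manual fix, run against a throwaway copy.
settings=/tmp/settings.json
printf '{ "selectedAuthType": "oauth-personal", "theme": "GitHub" }\n' > "$settings"

# Flip oauth-personal to huggingface in place, keeping a .bak backup.
sed -i.bak 's/"selectedAuthType": *"oauth-personal"/"selectedAuthType": "huggingface"/' "$settings"
cat "$settings"
```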
This usually means you need to download a model first:
# Download the lightweight model
node bundle/trust.js model download qwen2.5-1.5b-instruct
# Switch to the downloaded model
node bundle/trust.js model switch qwen2.5-1.5b-instruct
# Verify the model is loaded
node bundle/trust.js model list

If you're seeing fewer than the full 10 models after pulling updates, this is usually a caching issue:
Step 1: Clear Trust CLI Cache
# Clear all trust-cli cached data
rm -rf ~/.trustcli
# Also clear legacy cache if it exists
rm -rf ~/.gemini

Step 2: Force Clean Rebuild (if cache clearing doesn't work)
# Force clean everything
rm -rf node_modules
rm -rf packages/*/node_modules
rm -rf packages/*/dist
rm -rf bundle
rm -f package-lock.json
rm -f packages/*/package-lock.json
# Fresh install and build
npm install
npm run build
npm run bundle

Step 3: Update Your Alias
# Test with direct path first
node bundle/trust.js model list
# Update your alias to current directory
alias trust="node $(pwd)/bundle/trust.js"
# Make permanent by adding to ~/.zshrc or ~/.bashrc
echo 'alias trust="node /full/path/to/your/trust-cli/bundle/trust.js"' >> ~/.zshrc
source ~/.zshrc

Step 4: Verify All 10 Models Appear

You should see all models across 6 different formats:
- Qwen: qwen2.5-1.5b-instruct (1.5B, 2GB)
- Gemma: gemma-2-2b-instruct (2.6B, 3GB), gemma-2-9b-instruct (9B, 8GB)
- Phi: phi-3.5-mini-instruct (3.8B, 3GB), phi-3.5-mini-uncensored (3.8B, 3GB)
- Llama: llama-3.2-3b-instruct (3B, 4GB), llama-3.1-8b-instruct (8B, 8GB)
- Mistral: mistral-7b-instruct (7B, 6GB), mistral-nemo-12b-instruct (12B, 10GB)
- DeepSeek: deepseek-r1-distill-7b (7.6B, 6GB)
If you encounter TypeScript build errors on newer Node.js versions:
# Pull the latest fixes
git pull origin main
# Rebuild
npm run build
npm run bundle

- Documentation: docs/
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Trust: An Open System for Modern Assurance
Built with ❤️ for privacy, transparency, and local-first AI.