Intelligent Question answering for your terminal
Real-time web search powered by AI • Beautiful CLI experience • Smart caching
Installation • Quick Start • Features • Documentation
IQ combines live web search with LLM synthesis to deliver comprehensive, cited answers directly in your terminal. Think Perplexity, but in your CLI.
$ iq "What are the latest developments in quantum computing?"# Clone the repository
git clone https://github.com/vowalsh/iq-cli.git
cd iq-cli
# Install dependencies
pip install -r requirements.txt
# Install globally
pip install -e .
# Set up API keys
cp .env.example .env
# Edit .env with your API keys- SerpAPI - Get free key
- LLM Provider (choose one):
  - OpenAI - Get API key
  - Anthropic Claude - Get API key
  - Google Gemini - Get API key
  - OpenRouter - Get API key (access to multiple models)
  - Perplexity - Get API key
  - Ollama - Install locally (no API key needed)
```bash
# Interactive mode
iq

# Ask a question
iq "What is the capital of France?"

# Random interesting question
iq --random

# Use cached answers
iq --use-cache "Python best practices"

# Get help
iq --help
```

- Real-time results using SerpAPI with configurable result counts
- Support for 6 LLM providers (OpenAI, Claude, Gemini, OpenRouter, Perplexity, and local Ollama), all with streaming (see the pipeline sketch after this list)
- Automatic chart generation for numeric/statistical queries
- Smart caching:
  - Intelligent similarity matching (95%+ threshold)
  - Automatic Q&A storage with timestamps
  - Full-text search across cached content
  - 30-day expiration (configurable)
- Beautiful CLI experience:
  - Rich terminal formatting with colors
  - Organized help system
  - Interactive mode with command support
  - Inline citations [1], [2], [3]...
- Minimal dependencies, maximum performance
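Under the hood, each answer comes from a search → synthesis pipeline with numbered citations. A minimal sketch of that flow, assuming the `google-search-results` SerpAPI client and the `openai` package; the function name and prompt wording are illustrative, not IQ's actual code:

```python
import os
from serpapi import GoogleSearch   # pip install google-search-results
from openai import OpenAI

def answer(question: str, num_results: int = 8) -> str:
    """Illustrative pipeline: live web search -> LLM synthesis with citations."""
    # 1. Fetch live results from SerpAPI.
    results = GoogleSearch({
        "q": question,
        "num": num_results,
        "api_key": os.environ["SERPAPI_KEY"],
    }).get_dict().get("organic_results", [])

    # 2. Number each snippet so the model can cite it as [1], [2], ...
    sources = "\n".join(
        f"[{i}] {r.get('title', '')}: {r.get('snippet', '')}"
        for i, r in enumerate(results, start=1)
    )

    # 3. Ask the LLM to synthesize an answer with inline citations.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=os.environ.get("IQ_MODEL", "gpt-3.5-turbo"),
        messages=[
            {"role": "system",
             "content": "Answer using the numbered sources; cite them inline as [n]."},
            {"role": "user", "content": f"Sources:\n{sources}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```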
| Command | Description |
|---|---|
| `iq` | Start interactive mode |
| `iq "query"` | Ask a single question |
| `iq --help` | Show help screen |
| `iq --version` | Show version |
| `iq --random` | Get random question |
| Command | Description |
|---|---|
| `iq --cache-list` | List recent Q&As |
| `iq --cache-search "term"` | Search cache |
| `iq --cache-stats` | Show statistics |
| `iq --cache-clear` | Clear all entries |
| `iq --use-cache` | Enable retrieval |
| `iq --force-refresh` | Skip cache |
| Flag | Description |
|---|---|
| `--results N` | Number of search results (default: 8) |
| `--model MODEL` | LLM model (`gpt-3.5-turbo`, `gpt-4`, `claude-3-sonnet`, etc.) |
| `--no-streaming` | Show complete answer at once |
| `--no-cache` | Disable caching |
| `--no-color` | Disable colored output |
| `--verbose` | Enable debug output |
When you run `iq` without arguments, you enter interactive mode, where slash commands are handled before input is treated as a question (a sketch follows the examples):

```
❯ Your question: /cache list            # List recent entries
❯ Your question: /cache search python   # Search for "python"
❯ Your question: /cache stats           # Show statistics
❯ Your question: quit                   # Exit
```
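A minimal sketch of how such a loop might dispatch commands versus questions; the helpers here are stubs, not IQ's actual code:

```python
def handle_cache_command(args: str) -> None:
    print(f"(cache command: {args})")  # stub; a real handler would query the cache

def answer(question: str) -> str:
    return f"(answer for: {question})"  # stub; the real pipeline is sketched earlier

def interactive_loop() -> None:
    """Illustrative REPL: '/cache ...' lines are commands, everything else is a question."""
    while True:
        line = input("❯ Your question: ").strip()
        if line in ("quit", "exit"):
            break
        if line.startswith("/cache"):
            handle_cache_command(line[len("/cache"):].strip())
        elif line:
            print(answer(line))

if __name__ == "__main__":
    interactive_loop()
```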
Location: `~/.iq_cache/qa_cache.json`
Features:
- Automatic storage of all Q&A pairs
- Fuzzy matching for similar questions
- Configurable expiration (30 days)
- Size limit (1000 entries)
- Automatic cleanup
Behavior:
- Storage: Always enabled (unless `--no-cache`)
- Retrieval: Requires the `--use-cache` flag (sketched below)
- Override: Use `--force-refresh` for fresh results
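Retrieval with fuzzy matching could look like the sketch below; the JSON schema (a list of entries with `question`, `answer`, and `timestamp` keys) is an assumption, as is the use of `difflib` for similarity:

```python
import json
import time
from difflib import SequenceMatcher
from pathlib import Path
from typing import Optional

CACHE_FILE = Path.home() / ".iq_cache" / "qa_cache.json"
SIMILARITY_THRESHOLD = 0.95        # the "95%+ threshold"
MAX_AGE_SECONDS = 30 * 24 * 3600   # 30-day expiration

def lookup(question: str) -> Optional[str]:
    """Return a cached answer for a sufficiently similar, unexpired question."""
    if not CACHE_FILE.exists():
        return None
    entries = json.loads(CACHE_FILE.read_text())  # assumed: a list of Q&A dicts
    now = time.time()
    for entry in entries:
        if now - entry["timestamp"] > MAX_AGE_SECONDS:
            continue  # expired entries are ignored (cleanup prunes them later)
        similarity = SequenceMatcher(
            None, question.lower(), entry["question"].lower()
        ).ratio()
        if similarity >= SIMILARITY_THRESHOLD:
            return entry["answer"]
    return None
```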
IQ automatically generates charts for queries with:
- Percentages (market share, growth rates)
- Currency values (GDP, revenue)
- Statistical data (populations, rankings)
- Comparative numbers
Example queries:

```bash
iq "Compare GDP of G7 countries"
iq "Smartphone market share by company"
iq "Top programming languages by popularity"
```
```
iq.py          # Main CLI interface
search.py      # SerpAPI integration
llm.py         # LLM synthesis
formatting.py  # Terminal output
cache.py       # Q&A caching system
charts.py      # Data visualization
```
Edit `.env` with your API keys:

```bash
# Required
SERPAPI_KEY=your_serpapi_key_here

# LLM Provider (choose one or more)
OPENAI_API_KEY=your_openai_api_key_here        # For GPT models
ANTHROPIC_API_KEY=your_anthropic_api_key_here  # For Claude models
GOOGLE_API_KEY=your_google_api_key_here        # For Gemini models
OPENROUTER_API_KEY=your_openrouter_api_key     # Access to multiple models
PERPLEXITY_API_KEY=your_perplexity_api_key     # For Perplexity models

# Optional: Set default model (defaults to gpt-3.5-turbo)
IQ_MODEL=gpt-3.5-turbo
```
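These variables are presumably loaded at startup via `python-dotenv` (it appears in the dependency list); a minimal sketch of that pattern, not the project's actual code:

```python
import os
from dotenv import load_dotenv

load_dotenv()  # pulls the .env file into os.environ

SERPAPI_KEY = os.environ["SERPAPI_KEY"]              # required: search fails without it
MODEL = os.environ.get("IQ_MODEL", "gpt-3.5-turbo")  # optional, with the documented default
OPENAI_KEY = os.environ.get("OPENAI_API_KEY")        # provider keys are optional;
ANTHROPIC_KEY = os.environ.get("ANTHROPIC_API_KEY")  # set the one(s) you use
```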
OpenAI:
- `gpt-3.5-turbo` (default, fast & cost-effective)
- `gpt-4` (most capable)
- `gpt-4-turbo`

Anthropic Claude:
- `claude-3-haiku-20240307` (fastest)
- `claude-3-sonnet-20240229` (balanced)
- `claude-3-opus-20240229` (most capable)

Google Gemini:
- `gemini-pro` (fast & capable)
- `gemini-1.5-pro` (most capable)
- `gemini-1.5-flash` (fastest)

OpenRouter (via `--model` flag with provider=openrouter):
- `anthropic/claude-3-sonnet`
- `meta-llama/llama-3-70b-instruct`
- `google/gemini-pro`
- Many more at openrouter.ai/models

Perplexity:
- `pplx-7b-online` (fast, with web access)
- `pplx-70b-online` (most capable, with web access)

Ollama (local models, no API key needed):
- `llama2` - Meta's Llama 2
- `mistral` - Mistral 7B
- `mixtral` - Mixtral 8x7B
- `codellama` - Code Llama
- `phi` - Microsoft Phi
- Install from ollama.ai
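Ollama serves models through a local HTTP API (by default on `localhost:11434`), which is why no key is needed. A minimal sketch of querying it directly with `requests`, assuming `ollama serve` is running and the model has been pulled:

```python
import requests

# Assumes `ollama pull llama2` has been run and the Ollama server is up.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2",
          "prompt": "What is the capital of France?",
          "stream": False},  # set True to stream tokens as they generate
)
response.raise_for_status()
print(response.json()["response"])
```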
- Python 3.7+
- Internet connection
- SerpAPI account (free tier available)
- An API account with at least one supported LLM provider (or a local Ollama install)
Dependencies:
- `requests` - HTTP requests
- `openai` - OpenAI API client
- `anthropic` - Anthropic Claude API client (optional)
- `google-generativeai` - Google Gemini API client (optional)
- `colorama` - Cross-platform colors
- `python-dotenv` - Environment variables
- `rich` - Beautiful terminal formatting
iq "Latest developments in AI?"iq --use-cache --results 5 "Python best practices"iq --cache-search "machine learning"iq
β Your question: What is quantum computing?
# ... answer appears ...
β Your question: /cache stats
# ... cache statistics ...
β Your question: quitMIT License - see LICENSE file for details.
Made by @vowalsh
⭐ Star on GitHub • 🐛 Report Bug • 💡 Request Feature