# AI CLI

AI CLI is a Rust application that acts as a provider-agnostic AI assistant within a sandboxed multi-platform terminal environment. It supports any OpenAI-compatible API to assist with coding tasks, file operations, online searches, email sending, and shell commands. The application takes initiative to provide solutions, execute commands, and analyze results without explicit user confirmation, unless the action is ambiguous or potentially destructive.
## Features

- Chat Interface: Provides a command-line interface for interacting with AI models.
- Provider Agnostic: Works with any OpenAI-compatible API (Google Gemini, OpenAI, local LLMs, etc.).
- Tool Execution: Executes system commands using the `execute_command` function, allowing the AI to interact with the file system and other system utilities.
- Online Search: Performs online searches using the `search_online` function, enabling the AI to retrieve up-to-date information from the web.
- Email Sending: Sends emails using the `send_email` function, allowing the AI to send notifications or reports.
- Conversation History: Maintains a conversation history to provide context for the AI model.
- Ctrl+C Handling: Gracefully shuts down the application and cleans up resources when Ctrl+C is pressed.
## Project Structure

- `src/main.rs`: Application entry point, argument parsing, and interactive loop.
- `src/config.rs`: Configuration loading from `~/.aicli.conf` and environment variables (prefixed with `AICLI_`).
- `src/chat.rs`: LLM API client with conversation history management, retry logic, and tool definitions.
- `src/tools.rs`: Tool call dispatch, response display with Markdown rendering, and output normalization.
- `src/search.rs`: Online search functionality using the Tavily Search API.
- `src/command.rs`: System command execution with sandboxing (bubblewrap on Linux).
- `src/email.rs`: Email sending functionality with SMTP support.
- `src/alpha_vantage.rs`: Integration with the Alpha Vantage API for financial data.
- `src/file_edit.rs`: File editing capabilities (read, write, search, search and replace, apply diff) with path validation.
- `src/scrape.rs`: URL content scraping with summarization.
- `src/shell.rs`: Shell detection and interactive shell mode.
- `src/sandbox.rs`: Sandbox root directory management.
- `src/patch.rs`: Patch/diff application utility.
- `src/http.rs`: Shared async HTTP client.
- `src/utils.rs`: Shared utilities (logging, text summarization, retry, user confirmation).
## Configuration Setup

To run AI CLI, you need to set up a `.aicli.conf` file in your home directory with the following variables:
```
# AI Provider Configuration (Required)
API_BASE_URL=https://generativelanguage.googleapis.com
API_VERSION=v1beta
MODEL=gemini-1.5-flash
API_KEY=your_api_key_here

SMTP_SERVER_IP=localhost
SMTP_USERNAME=
SMTP_PASSWORD=
DESTINATION_EMAIL=
SENDER_EMAIL=

TAVILY_API_KEY=
ALPHA_VANTAGE_API_KEY=
```
Example provider configurations:

### Google Gemini

```
API_BASE_URL=https://generativelanguage.googleapis.com
API_VERSION=v1beta
MODEL=gemini-1.5-flash
API_KEY=your_gemini_api_key_here
```

### OpenAI

```
API_BASE_URL=https://api.openai.com
API_VERSION=v1
MODEL=gpt-4
API_KEY=sk-your_openai_api_key_here
```

### Local LLMs (Ollama)

```
API_BASE_URL=http://localhost:11434
API_VERSION=v1
MODEL=llama3
API_KEY=
```

### Other OpenAI-Compatible Providers

```
API_BASE_URL=https://your-provider.com
API_VERSION=v1
MODEL=your-model-name
API_KEY=your_api_key_here
```

### Configuration Variables

- `API_BASE_URL`: The base URL of the AI provider's API endpoint
- `API_VERSION`: The API version to use (e.g., `v1`, `v1beta`)
- `MODEL`: The model name to use (e.g., `gemini-2.5-flash`, `gpt-4`, `llama3`)
- `API_KEY`: Your API key for authentication
- `SMTP_SERVER_IP`: The IP address or hostname of the SMTP server (defaults to `localhost` if not specified)
- `SMTP_USERNAME`: Username for SMTP authentication (optional; required for non-localhost servers)
- `SMTP_PASSWORD`: Password for SMTP authentication (optional; required for non-localhost servers)
- `DESTINATION_EMAIL`: The email address to which the `send_email` function will send emails
- `SENDER_EMAIL`: The email address to use as the sender (optional; defaults to `DESTINATION_EMAIL`)
- `TAVILY_API_KEY`: Your API key for the Tavily Search API
- `ALPHA_VANTAGE_API_KEY`: Your API key for the Alpha Vantage API

Environment variables can override config file values by prefixing them with `AICLI_`. For example, `AICLI_API_KEY` overrides `API_KEY`, and `AICLI_MODEL` overrides `MODEL`.
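For instance, the override mechanism lets you switch models for a single run without editing the config file. The model name below is a placeholder; the `echo` at the end simply demonstrates that the variable is an ordinary environment variable visible to the child process:

```shell
# One-off model override via an AICLI_-prefixed environment variable, e.g.:
#   AICLI_MODEL=gpt-4 cargo run
# Demonstrated here with a plain child shell instead of the app itself:
AICLI_MODEL=gpt-4 sh -c 'echo "effective model: $AICLI_MODEL"'
```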
## Installation

- Clone the repository:

  ```shell
  git clone <repository_url>
  cd ai-cli
  ```

- Create a `.aicli.conf` file in your home directory and set the required environment variables as described in the Configuration Setup section.

- Run the application:

  ```shell
  cargo run
  ```

- Chat with the AI by typing messages in the command-line interface. Use `!command` to run shell commands directly (e.g., `!ls` or `!dir`). Type `exit` to quit or `clear` to reset the conversation.
## Migration

If you were using the previous version, you can migrate your configuration:

- Rename your existing `.gemini.conf` to `.aicli.conf`:

  ```shell
  mv ~/.gemini.conf ~/.aicli.conf
  ```

- Add the new required fields to your `.aicli.conf`:

  ```
  API_BASE_URL=https://generativelanguage.googleapis.com
  API_VERSION=v1beta
  MODEL=gemini-1.5-flash
  ```

- Keep your existing `API_KEY` (renamed from `GEMINI_API_KEY`).
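The rename-and-append steps above can be sketched as a single guarded script; it only acts if an old config file is actually present, so it is safe to re-run:

```shell
# Migrate ~/.gemini.conf to ~/.aicli.conf and append the new required fields.
# Guarded: does nothing if no old config exists.
if [ -f "$HOME/.gemini.conf" ]; then
  mv "$HOME/.gemini.conf" "$HOME/.aicli.conf"
  cat >> "$HOME/.aicli.conf" <<'EOF'
API_BASE_URL=https://generativelanguage.googleapis.com
API_VERSION=v1beta
MODEL=gemini-1.5-flash
EOF
fi
```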
## Provider Compatibility

AI CLI is designed to work with any OpenAI-compatible API. The following providers have been tested:
- Google Gemini: Full support with tool calling
- OpenAI: Full support with tool calling
- Local LLMs (Ollama): Basic support (may require adjustments for tool calling)
### Google Gemini

- Uses Bearer header authentication via the `async-openai` crate
- Endpoint format: `{base_url}/{version}/chat/completions`
- Full tool calling support

### OpenAI

- Uses header authentication (`Authorization: Bearer API_KEY`)
- Endpoint format: `{base_url}/{version}/chat/completions`
- Full tool calling support

### Local LLMs

- May not require authentication
- Endpoint format: `{base_url}/{version}/chat/completions`
- Tool calling support varies by model
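Since all providers share the same endpoint format, you can sanity-check a provider before pointing AI CLI at it. The base URL, version, and model below are placeholders; the `curl` probe is commented out because it needs a real `$API_KEY`:

```shell
# Compose the chat-completions URL the same way AI CLI does:
# {base_url}/{version}/chat/completions
BASE_URL="https://api.openai.com"   # placeholder provider
VERSION="v1"
URL="$BASE_URL/$VERSION/chat/completions"
echo "checking: $URL"
# Uncomment to actually probe the endpoint (requires a valid $API_KEY):
# curl -sS -X POST "$URL" \
#   -H "Authorization: Bearer $API_KEY" \
#   -H "Content-Type: application/json" \
#   -d '{"model":"gpt-4","messages":[{"role":"user","content":"ping"}]}'
```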
## Debugging

Run with the `--debug` flag to log configuration details and API call information to `debug.log`:

```shell
cargo run -- --debug
```

This will log to `debug.log` in the current directory:
- AI provider configuration (API base URL, version, model, masked API key)
- API endpoint being used
- SMTP configuration (server, credentials, email addresses)
- All LLM API calls and responses
- Tool calls and their results
- Command execution and output
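Because tool calls and their results all land in `debug.log`, a quick way to review the most recent ones is to filter the log. This is a sketch that assumes the relevant log lines mention "tool"; the guard keeps it a no-op when no log exists:

```shell
# Show the last 20 log lines mentioning tool calls, if a debug.log is present
if [ -f debug.log ]; then
  grep -i "tool" debug.log | tail -n 20
fi
```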
## Troubleshooting

### Configuration Issues

- Ensure `~/.aicli.conf` exists and contains the required fields
- Check that your API key is valid and has the correct format
- Verify the API base URL is correct for your provider

### API Connection Issues

- Check your internet connection
- Verify the API endpoint is accessible
- Ensure your API key has sufficient credits/permissions

### Tool Calling Issues

- Some providers may have limited tool calling support
- Check the provider's documentation for compatibility
- Try using a different model if available
## Contributing

Contributions are welcome! Please feel free to submit pull requests or open issues for bugs and feature requests.
## License

MIT License