A modern, flexible web-based chat interface for Large Language Models (LLMs) that supports multiple AI providers and offers a ChatGPT-like experience.
- Ollama - Local AI models
- OpenRouter - Access to various AI models via API
- Custom APIs - Connect to your own AI endpoints
- Real-time streaming responses
- Conversation history with persistent storage
- Markdown rendering with syntax highlighting
- Image upload support for vision-capable models
- Responsive design for desktop and mobile
- Dark theme interface
- Provider switching without losing conversations
- Model selection per provider
- Connection testing for all providers
- Message metadata (provider, model, response time, tokens)
- Code highlighting with copy-to-clipboard functionality (see the sketch after this list)
- Chat deletion with confirmation modal
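The copy-to-clipboard feature noted above reduces to a common DOM pattern. A minimal sketch that assumes nothing about the app's actual markup or class names (note that the Clipboard API requires a secure context, i.e. HTTPS or localhost):

```javascript
// Attach a "Copy" button to each rendered code block.
document.querySelectorAll('pre > code').forEach((block) => {
  const button = document.createElement('button');
  button.textContent = 'Copy';
  // navigator.clipboard is only available in secure contexts.
  button.addEventListener('click', () =>
    navigator.clipboard.writeText(block.textContent)
  );
  block.parentElement.appendChild(button);
});
```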
Visit the live demo at: https://astrixity.github.io/LLMe (Note: Ollama requires local setup due to CORS)
- Clone or download this repository
- Open `index.html` in your browser
- Configure your AI provider in settings
Choose one of the following AI providers:
Ollama:

```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama2

# For web access, set CORS headers
export OLLAMA_ORIGINS="*"
ollama serve
```

OpenRouter:

- Sign up at OpenRouter
- Get your API key from openrouter.ai/keys
- Add credit to your account
Custom API:

- Ensure your API endpoint supports an OpenAI-compatible format
- Have your API key ready (if required)
In the app's settings, configure your chosen provider.

Ollama:

- Base URL: `http://localhost:11434` (default)
- Models: Automatically loaded from your local Ollama installation (see the sketch below)
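Model auto-loading works because Ollama exposes its installed models over HTTP via its documented `/api/tags` endpoint. A minimal sketch of fetching them (the function name is illustrative, not necessarily what script.js uses):

```javascript
// List locally installed Ollama models via GET /api/tags.
async function listOllamaModels(baseUrl = 'http://localhost:11434') {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) throw new Error(`Ollama returned HTTP ${res.status}`);
  const data = await res.json();
  return data.models.map((m) => m.name); // e.g. ["llama2:latest"]
}
```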
OpenRouter:

- Base URL: `https://openrouter.ai/api/v1` (default)
- API Key: Your OpenRouter API key
- Models: Automatically loaded from OpenRouter
Custom API:

- Base URL: Your custom API endpoint
- API Key: Your API key (optional)
- Model: Your model identifier
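For any OpenAI-compatible provider (OpenRouter or a custom API), a connection test can be as simple as requesting the model list. A sketch assuming the standard `/models` route; the real "Test Connection" logic in script.js may differ:

```javascript
// Probe an OpenAI-compatible endpoint by requesting its model list.
async function testConnection(baseUrl, apiKey) {
  const headers = apiKey ? { Authorization: `Bearer ${apiKey}` } : {};
  const res = await fetch(`${baseUrl}/models`, { headers });
  return res.ok; // true when the endpoint is reachable and the key is accepted
}

// Example: await testConnection('https://openrouter.ai/api/v1', yourApiKey);
```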
If using the hosted version with Ollama, you may encounter CORS errors. Solutions:
- Recommended: Download and run locally
- Configure Ollama CORS:

  ```bash
  export OLLAMA_ORIGINS="https://astrixity.github.io"
  ollama serve
  ```

- Local development:

  ```bash
  export OLLAMA_ORIGINS="*"
  ollama serve
  ```
To access Ollama running on a different PC with HTTPS support, use Cloudflare Tunnel:
```bash
# Install cloudflared
# Windows: Download from https://github.com/cloudflare/cloudflared/releases
# macOS: brew install cloudflare/cloudflare/cloudflared
# Linux: Follow instructions at https://github.com/cloudflare/cloudflared

# Configure Ollama with open CORS
export OLLAMA_ORIGINS="*"
ollama serve

# In another terminal, create the tunnel
cloudflared tunnel --url http://localhost:11434
```

This will give you an HTTPS URL like `https://random-name.trycloudflare.com` that you can use as your Ollama Base URL in LLMe settings.
```bash
# Configure Ollama to bind to all interfaces
export OLLAMA_HOST="0.0.0.0:11434"
export OLLAMA_ORIGINS="*"
ollama serve

# Use the machine's IP address as the Base URL
# Example: http://192.168.1.100:11434
```

Note: This only works with the HTTP version of LLMe due to Mixed Content restrictions.
```
LLMe/
├── index.html          # Main HTML file
├── styles.css          # Styling and layout
├── script.js           # Main application logic
├── favicon.ico         # Site icon
├── Liter-Regular.ttf   # Custom font
├── README.md           # This file
└── docs/               # Documentation files
    ├── MARKDOWN_SUPPORT.md
    ├── VISION_SUPPORT.md
    ├── STREAMING_FIX.md
    └── METADATA_FIX.md
```
- New Chat button
- Chat History with delete options
- User Profile section
- Model Selector with refresh option
- Message Display with markdown rendering
- Typing Indicators during responses
- Image Display for uploaded files
- Multi-line text input with auto-resize (see the sketch after this list)
- Image Upload button
- Settings modal access
- Send button with state management
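The input's auto-resize behavior reduces to a few lines of DOM code. A minimal sketch; the element id here is an assumption, not necessarily the one used in index.html:

```javascript
// Grow the textarea with its content; reset first so it can also shrink.
const input = document.getElementById('message-input'); // id is illustrative
input.addEventListener('input', () => {
  input.style.height = 'auto';
  input.style.height = `${input.scrollHeight}px`;
});
```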
Ollama:

```
POST /api/chat
{
  "model": "llama2",
  "messages": [...],
  "stream": true
}
```

OpenRouter:

```
POST /chat/completions
{
  "model": "openai/gpt-3.5-turbo",
  "messages": [...],
  "stream": true
}
```

Custom APIs: Compatible with the OpenAI format or similar streaming APIs.
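When `stream: true` is set, Ollama replies with newline-delimited JSON chunks rather than a single body. A minimal sketch of consuming that stream in the browser (function and callback names are illustrative; error handling is omitted for brevity):

```javascript
// Consume Ollama's streaming /api/chat response: one JSON object per line.
async function streamChat(baseUrl, model, messages, onToken) {
  const res = await fetch(`${baseUrl}/api/chat`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, messages, stream: true }),
  });
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep any incomplete trailing line
    for (const line of lines) {
      if (!line.trim()) continue;
      const chunk = JSON.parse(line);
      if (chunk.message?.content) onToken(chunk.message.content);
      if (chunk.done) return;
    }
  }
}
```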
- Local Storage: Conversations stored in browser's localStorage (see the sketch after this list)
- No Data Collection: No analytics or tracking
- API Keys: Stored locally, never transmitted except to chosen provider
- HTTPS Ready: Works with secure connections
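A minimal sketch of that localStorage pattern; the storage key below is an assumption, not necessarily the one script.js uses:

```javascript
// Persist and restore the conversation list from localStorage.
const STORAGE_KEY = 'llme-conversations'; // assumed key name

function saveConversations(conversations) {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(conversations));
}

function loadConversations() {
  return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? '[]');
}
```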
- Clear browser cache: Ctrl+F5 (Windows) or Cmd+Shift+R (Mac)
- Check file: Ensure `favicon.ico` exists in the root directory
- Hard refresh: Close the tab and reopen
Error:

```
Access to fetch at 'http://localhost:11434' blocked by CORS policy
```

Solution: Configure Ollama CORS headers or run locally.
```
Mixed Content: The page was loaded over HTTPS, but requested an insecure resource 'http://...'
```
Solutions:
- Use Cloudflare Tunnel: Creates HTTPS endpoint for local Ollama
- Download and run locally: Bypasses HTTPS restrictions
- Use OpenRouter/Custom API: Already HTTPS-compatible
- Access via HTTP: Use `http://astrixity.github.io/LLMe/` (less secure)
- Check connection: Use "Test Connection" button
- Verify credentials: Ensure API keys are correct
- Check provider status: Ensure service is running
- Network: Check internet connection
- Provider: Verify provider supports streaming
- Browser: Try different browser or incognito mode
- `ChatApp` class: Main application controller
- Event-driven: Uses addEventListener for user interactions
- Modular design: Separate methods for each feature
- Error handling: Comprehensive try-catch blocks
- `sendMessage()`: Handles message sending and AI responses
- `renderMessages()`: Updates the chat display with markdown (see the sketch after this list)
- `handleStreamingResponse()`: Processes real-time AI responses
- `loadModelsForCurrentProvider()`: Fetches available models
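The markdown rendering pairs the two libraries credited under Acknowledgments. A sketch of the typical wiring (the helper name and container handling are illustrative; the real method may differ, and untrusted HTML should be sanitized in production):

```javascript
// Render a message's markdown, then syntax-highlight any code blocks.
function renderMarkdown(container, markdownText) {
  container.innerHTML = marked.parse(markdownText); // Marked.js
  container
    .querySelectorAll('pre code')
    .forEach((el) => hljs.highlightElement(el));    // Highlight.js
}
```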
- Themes: Modify `styles.css` for different color schemes
- Providers: Add new providers in the settings modal (see the sketch after this list)
- Features: Extend functionality in `script.js`
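For illustration only, a new OpenAI-compatible provider could be described by a small config object; every field name below is an assumption, not the actual shape used in script.js:

```javascript
// Hypothetical provider descriptor; field names are assumptions.
const myProvider = {
  name: 'My Provider',
  baseUrl: 'https://api.example.com/v1',
  chatPath: '/chat/completions', // OpenAI-compatible endpoint
  requiresApiKey: true,
  supportsStreaming: true,
};
```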
- ✅ Chrome 80+
- ✅ Firefox 75+
- ✅ Safari 13+
- ✅ Edge 80+
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- Marked.js - Markdown parsing
- Highlight.js - Code syntax highlighting
- Font Awesome - Icons
- Ollama Team - Local AI model serving
- OpenRouter - AI model API access
- Issues: Report bugs or request features via GitHub Issues
- Documentation: Check the `docs/` folder for detailed guides
- Community: Join discussions in the repository
Made with ❤️ for the AI community
⭐ Star this repo | 🐛 Report bugs | 💡 Request features