A comprehensive web-based dashboard for managing remote Ollama language model servers. This application provides an intuitive interface for model management, interactive chat, text generation, embeddings visualization, and server monitoring.
g023's OllamaMan (Ollama Manager) is designed to simplify the management and use of Ollama servers running on local networks or remote machines. It offers a unified web interface that removes the need for command-line interaction, making AI model deployment and usage accessible to users who prefer graphical interfaces.
Ollama is an open-source platform that enables running large language models locally on your hardware. It supports various model architectures and provides a REST API for programmatic access. Ollama Manager acts as a user-friendly frontend to this API, providing features like:
- Model Management: Browse, download, organize, and create custom AI models
- Interactive Chat: Conversational interface with advanced features
- Text Generation: Single-prompt completion tasks
- Embeddings: Vector representation generation and visualization
- Server Monitoring: Real-time status and performance metrics
- Model Customization: Create personalized models with fine-tuned parameters
- Server Status Monitoring: Real-time connection status and latency metrics
- Model Statistics: Overview of installed and running models
- Storage Information: Disk usage tracking for model storage
- Quick Actions: One-click access to common operations
- Model Library: Browse available models from the connected Ollama server
- Download Management: Pull models with progress tracking
- Model Operations: Delete, copy, and inspect model details
- Search and Filter: Find models by name or capabilities
- Advanced Model Creation: Create custom models with modified parameters
- Parameter Tuning: Adjust temperature, context size, sampling settings, and more
- System Prompts: Define AI behavior and personality
- Prompt Templates: Customize prompt formatting
- Message Examples: Add conversation examples for fine-tuning
- Stop Sequences: Define custom stopping conditions
- Configuration Import/Export: Save and load model configurations
- Preview Mode: Review model settings before creation
- Custom Model Building: Create personalized AI models from existing base models
- Parameter Modification: Fine-tune model behavior with comprehensive parameter controls:
  - Core Parameters: Temperature, context size, token limits
  - Sampling Controls: Top-k, top-p, min-p, typical-p for generation quality
  - Repetition Management: Penalty settings and sequence controls
- System Prompt Integration: Define AI personality and behavior patterns
- Template Customization: Control prompt formatting and structure
- Example Conversations: Add training examples for specialized behaviors
- Stop Sequence Management: Define custom termination conditions
- Parameter Comparison: Visual diff showing changes from source model
- Configuration Management: Import/export model configurations as JSON
- Preview & Validation: Review complete model setup before creation
- Advanced Workflows: Support for complex model configurations and fine-tuning
- Conversational AI: Full-featured chat with context preservation
- Streaming Responses: Real-time text generation for immediate feedback
- System Prompts: Customize AI behavior with predefined personas
- Parameter Tuning: Adjust temperature, token limits, and sampling settings
- Chat History: Persistent conversation storage and retrieval
- Prompt Engineering: Single-turn text completion
- Model Selection: Choose from available models
- Parameter Control: Fine-tune generation settings
- Output Formatting: Clean display with syntax highlighting
- Vector Generation: Convert text to numerical representations
- Visualization: Interactive charts showing embedding distributions
- Statistical Analysis: Mean, min/max values, and dimensionality info
- Model Compatibility: Works with embedding-specialized models
- Request Tracking: Monitor all API interactions
- Performance Metrics: Response times and error logging
- Debugging Tools: Inspect request/response data
- Log Management: Clear and filter log entries
- Side-by-Side Analysis: Compare outputs from different models
- Performance Metrics: Token counts and generation times
- A/B Testing: Evaluate model responses for specific tasks
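The generation-time metrics used in comparisons can be derived from the counters Ollama returns with each completion (`eval_count` tokens generated, `eval_duration` in nanoseconds). A minimal sketch of the tokens-per-second math:

```javascript
// Compute tokens/sec from Ollama's completion counters.
// eval_count = tokens generated, eval_duration = generation time in ns.
function tokensPerSecond(response) {
  if (!response.eval_count || !response.eval_duration) return null;
  return response.eval_count / (response.eval_duration / 1e9);
}

// Example with made-up numbers: 128 tokens over 2 seconds.
const tps = tokensPerSecond({ eval_count: 128, eval_duration: 2e9 });
console.log(tps); // 64
```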
- Command Interface: Direct access to Ollama CLI commands
- Output Display: Formatted terminal output in the web interface
- Command History: Recall and reuse previous commands
- Theme Selection: Light and dark interface modes
- Server Configuration: Dynamic Ollama host and port settings
- Default Preferences: Set default models and parameters
- Auto-Refresh: Configurable status update intervals
- Notification Controls: Manage alert preferences
- Global Navigation: Quick access to all features
- Model Search: Find models across the registry
- Command Shortcuts: Keyboard-driven navigation
- Window Management: Draggable, resizable interface windows
- Notification System: Toast-style alerts for user feedback
- Responsive Design: Adapts to different screen sizes
- Accessibility: Keyboard navigation and screen reader support
- Web Server: Apache, Nginx, or any PHP-compatible server
- PHP Version: 7.4 or higher
- PHP Extensions: cURL (required for API communication)
- Ollama Server: Running instance accessible via HTTP
- Browser: Modern web browser with JavaScript enabled
1. **Download the Application**

   ```bash
   # Clone or download to your web server's document root
   # Example paths: /var/www/html/ollama-manager/
   # or for local development: C:\xampp\htdocs\ollama-manager\
   ```

2. **Configure Server Access**

   Edit `api/config.php` to set your Ollama server details:

   ```php
   define('OLLAMA_HOST', '192.168.1.100'); // Your Ollama server IP
   define('OLLAMA_PORT', '11434');         // Default Ollama port
   ```

3. **Server Configuration**

   Ensure your Ollama server accepts remote connections:

   ```bash
   # On the Ollama server machine
   OLLAMA_HOST=0.0.0.0 ollama serve
   ```

4. **Access the Interface**

   Open your web browser and navigate to:

   ```
   http://localhost/ollama-manager/
   ```
Ensure the web server can write to the `data/` directory for storing settings and logs:

```bash
chmod 755 data/
```

The application uses SQLite for data storage. The database is created automatically on first access. For manual initialization or migration from older JSON-based storage:

```bash
# Run the database initialization script
php api/init_db.php
```

This will create the database schema and migrate any existing JSON data to SQLite.
The application connects to Ollama via its HTTP API. Key configuration options in `api/config.php`:

- `OLLAMA_HOST`: IP address or hostname of the Ollama server
- `OLLAMA_PORT`: Port number (default: 11434)
- `OLLAMA_TIMEOUT_*`: Various timeout settings for different operations
Accessible through the Settings window or `api/settings.php`:
- Server Settings:
  - Ollama Host: IP address or hostname of your Ollama server
  - Ollama Port: Port number (default: 11434)
- Theme: Interface appearance (light/dark)
- Default Model: Pre-selected model for chat and generation
- Auto-refresh Interval: Status update frequency
- Notification Preferences: Alert display settings
- Verify Connection: Check the server status indicator in the top menu bar
- Load Models: Use the Model Manager to browse and download models
- Start Chatting: Open the Chat window and select a model
- Explore Features: Try text generation and embeddings playground
- Navigate to Model Manager window
- Browse or search for models
- Click "Pull Model" to download
- Monitor progress in the interface
- Select a model to view details
- See size, modification date, and capabilities
- Copy or delete models as needed
- Open the Model Creator window
- Select a source model as the base
- Modify parameters using the intuitive interface:
- Adjust core parameters (temperature, context size, etc.)
- Configure sampling settings for generation quality
- Set repetition controls to avoid loops
- Add stop sequences for precise control
- Define system prompts for AI personality
- Customize prompt templates
- Add example conversations for fine-tuning
- Preview the complete configuration
- Create the model with one click
- Import/export configurations for reuse
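An exported configuration is plain JSON, so re-importing it is a parse plus basic validation. The field names below are an assumption based on the parameters listed above, not the app's exact export schema:

```javascript
// Hypothetical exported model configuration (schema assumed for illustration).
const exported = JSON.stringify({
  base_model: 'llama3',
  parameters: { temperature: 0.7, num_ctx: 4096, top_k: 40, top_p: 0.9 },
  system: 'You are a concise technical assistant.',
  stop: ['</s>'],
});

// Re-importing: parse, then check the fields the Model Creator needs.
const config = JSON.parse(exported);
if (typeof config.base_model !== 'string') throw new Error('missing base model');
console.log(config.parameters.temperature); // 0.7
```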
- Select a model from the dropdown
- Type messages in the input field
- Press Enter or click send button
- Responses appear in real-time with streaming
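Streamed responses arrive as newline-delimited JSON chunks, each carrying a text fragment (the shape shown matches Ollama's chat API; whether `chat.php` forwards it unchanged is an assumption). A sketch of assembling the final text, demonstrated on a pre-captured stream string:

```javascript
// Assemble streamed chat output from newline-delimited JSON chunks.
// Each chunk carries a fragment in message.content (Ollama chat format).
function assembleStream(ndjson) {
  return ndjson
    .split('\n')
    .filter(line => line.trim() !== '')
    .map(line => JSON.parse(line))
    .map(chunk => chunk.message?.content ?? '')
    .join('');
}

const sample =
  '{"message":{"content":"Hel"},"done":false}\n' +
  '{"message":{"content":"lo!"},"done":true}\n';
console.log(assembleStream(sample)); // "Hello!"
```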
- System Prompts: Set AI behavior with custom instructions
- Parameters: Adjust creativity and response length
- Image Input: Upload images for vision-capable models
- Tool Calling: Enable function calling for enhanced capabilities
- Structured Output: Generate JSON responses with schemas
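For structured output, the request carries a JSON schema alongside the messages. Ollama accepts this in a `format` field (either the string `"json"` or a schema object in recent versions); whether `chat.php` forwards it verbatim is an assumption:

```javascript
// Sketch of a structured-output chat request body.
// The `format` schema constrains the model to emit matching JSON.
const request = {
  model: 'llama3',
  messages: [{ role: 'user', content: 'List two primary colors.' }],
  format: {
    type: 'object',
    properties: { colors: { type: 'array', items: { type: 'string' } } },
    required: ['colors'],
  },
  stream: false,
};
console.log(JSON.stringify(request).includes('"format"')); // true
```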
- Conversations are automatically saved
- Access previous chats from the sidebar
- Search through chat history
- Export conversations for external use
- Choose a model and enter a prompt
- Adjust generation parameters
- Generate and view formatted output
- Useful for creative writing, code generation, etc.
- Select an embedding model
- Input text for vector conversion
- View statistical properties
- Visualize embedding distributions
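The statistical properties shown in the playground boil down to simple reductions over the returned vector. A minimal sketch:

```javascript
// Statistical summary of an embedding vector: dimensionality, mean, min, max.
function embeddingStats(vector) {
  const sum = vector.reduce((acc, v) => acc + v, 0);
  return {
    dims: vector.length,
    mean: sum / vector.length,
    min: Math.min(...vector),
    max: Math.max(...vector),
  };
}

const stats = embeddingStats([0.1, -0.3, 0.2]);
console.log(stats.dims, stats.min, stats.max); // 3 -0.3 0.2
```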
- Real-time connection monitoring
- Performance metrics display
- Running model tracking
- View all API requests and responses
- Debug failed operations
- Monitor usage patterns
The application provides a REST API for programmatic access. All endpoints return JSON responses.
```json
{
  "success": true|false,
  "data": { ... } | null,
  "error": "message" | null
}
```

| Endpoint | Method | Description |
|---|---|---|
| `/api/status.php` | GET | Server status and statistics |
| `/api/models.php?action=list` | GET | List available models |
| `/api/models.php?action=show&model=name` | GET | Get model details |
| `/api/models.php?action=pull` | POST | Download a model |
| `/api/models.php?action=delete` | POST | Remove a model |
| `/api/models.php?action=copy` | POST | Duplicate a model |
| `/api/models.php?action=create_advanced` | POST | Create custom model with parameters |
| `/api/chat.php` | POST | Send chat message |
| `/api/generate.php` | POST | Generate text completion |
| `/api/embed.php` | POST | Create embeddings |
| `/api/settings.php?action=get` | GET | Retrieve settings |
| `/api/settings.php?action=save` | POST | Update settings |
| `/api/history.php?action=list` | GET | Get chat history |
| `/api/logs.php?action=list` | GET | View API logs |
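Since every endpoint wraps its result in the same `{ success, data, error }` envelope, a small client-side helper can unwrap responses uniformly (a sketch based on the response format above):

```javascript
// Unwrap the { success, data, error } envelope shared by all endpoints.
// Returns the payload on success, throws the server's error otherwise.
function unwrap(envelope) {
  if (!envelope.success) {
    throw new Error(envelope.error ?? 'Unknown API error');
  }
  return envelope.data;
}

console.log(unwrap({ success: true, data: { models: [] }, error: null }));
// { models: [] }
```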
Example: sending a chat message from JavaScript:

```javascript
fetch('api/chat.php', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama2',
    messages: [{ role: 'user', content: 'Hello!' }]
  })
})
  .then(response => response.json())
  .then(data => console.log(data));
```

The application follows a client-server architecture:
- Frontend: HTML, CSS, JavaScript (jQuery)
- Backend: PHP API layer
- Database: SQLite for data persistence
- Styling: Custom CSS with theme support
```
ollama-manager/
├── index.php            # Main application interface
├── api/                 # Backend API endpoints
│   ├── config.php       # Configuration constants
│   ├── ollama.php       # Ollama API wrapper
│   ├── database.php     # Data persistence layer
│   ├── models.php       # Model management and creation API
│   └── *.php            # Individual API handlers
├── assets/              # Static resources
│   ├── css/             # Stylesheets (Aqua UI theme)
│   ├── js/              # JavaScript logic (Model Creator, Chat, etc.)
│   └── images/          # Interface graphics
├── data/                # Application data storage
└── uploads/             # File upload directory
```
- PHP: Server-side logic and API implementation
- SQLite: Lightweight database for settings and history
- jQuery: DOM manipulation and AJAX communication
- Chart.js: Data visualization (embeddings)
- Highlight.js: Syntax highlighting for code
- Marked.js: Markdown rendering
- Create an API endpoint in the `api/` directory
- Add frontend JavaScript in `assets/js/app.js`
- Update the HTML interface in `index.php`
- Add database schema changes if needed

- Modify CSS variables in `assets/css/`
- Create new theme files following existing patterns
- Update theme selection in settings
This project is licensed under the BSD 3-Clause License - see the LICENSE file for details.
Copyright (c) 2025, g023
g023
- Built for the Ollama community
- Inspired by the need for accessible AI model management
Enjoy managing your AI models with Ollama Manager! 🦙✨

