Local AI. On Your Terms.
Molten is a privacy-first macOS, iOS, and iPadOS app that runs local LLMs (Ollama, Swama, or Apple Foundation Models) completely offline, completely yours.
✅ Mac-first native app - Not a web wrapper like Open WebUI
✅ Multi-backend support - Ollama + Swama + Apple Models in one app
✅ Privacy obsessed - Local-only by design, not bolted on
✅ MLX optimized - Leverages Apple Silicon for speed
✅ Indie positioning - No corporate baggage = trust
Molten is a native Apple-platform application for macOS, iOS, and iPadOS. It provides an elegant, ChatGPT-like interface for interacting with locally hosted language models through multiple backends:
- Ollama - The popular local LLM runtime
- Swama - MLX-based inference engine optimized for Apple Silicon
- Apple Foundation Models - Native on-device models (macOS 26.0+)
All processing happens locally on your device. No data leaves your device. Ever.
- Multi-Provider Support: Seamlessly switch between Ollama, Swama, and Apple Foundation Models
- Streaming Responses: Real-time streaming of model responses for instant feedback
- Conversation Management: Persistent conversation history with SwiftData
- Model Selection: Unified model picker showing all available models from all providers
- Performance Analytics: Detailed metrics showing prompt eval rate, eval rate, and throughput
- Native Apple Design: Built with SwiftUI, feels at home on macOS, iOS, and iPadOS
- Markdown Rendering: Beautiful rendering of code blocks, tables, and formatted text
- Syntax Highlighting: Powered by Splash for code blocks
- Dark/Light Mode: System-aware color schemes
- Keyboard Shortcuts: macOS-native keyboard shortcuts (⌘⌥K for panel mode)
- Floating Panel: Quick-access panel mode for fast interactions
- Voice Input: Speech-to-text for voice prompts
- Text-to-Speech: Read aloud functionality with system voices
- Multimodal Support: Text and image inputs supported
- 100% Local: All processing happens on your device
- No Telemetry: No tracking, no analytics, no data collection
- Offline-First: Works completely offline once models are loaded
- Open Source: Full source code available for audit
Molten follows a clean architecture pattern with clear separation of concerns:
- ModelProviderProtocol: Unified interface for all model providers
- OllamaService: Handles communication with Ollama API
- SwamaService: Handles communication with Swama API (OpenAI-compatible)
- AppleFoundationService: Interface for Apple Foundation Models
- SwiftDataService: Actor-based data persistence
- SpeechService: Text-to-speech functionality
- HapticsService: Haptic feedback (iOS)
- Clipboard: Cross-platform clipboard access
- ConversationStore: Manages conversations, messages, and streaming
- LanguageModelStore: Manages available language models from all providers
- CompletionsStore: Manages custom completion templates
- AppStore: Global app state and reachability
- SwiftData Models: `ConversationSD`, `MessageSD`, `LanguageModelSD`, `CompletionInstructionSD`
- API Models: `ChatMessage`, `ChatCompletionRequest`/`Response`, `ContentType`
- Platform-Specific Views: Separate implementations for macOS and iOS
- Shared Components: Reusable UI components across platforms
- SwiftUI + @Observable: Modern reactive UI framework
- SwiftData Integration: Automatic UI updates from data changes
- macOS 14.0+, iOS 17.0+, iPadOS 17.0+
- Apple Silicon Mac (M1, M2, M3, or later) - Required for Apple Foundation Models
- Xcode 15.0+ (for building from source)
- At least one backend running:
- Ollama (optional)
- Swama (optional)
- Apple Foundation Models (built-in on macOS 26.0+)
Download the latest release from the Releases page.
1. Clone the repository

   ```
   git clone https://github.com/OnDemandWorld/molten.git
   cd molten
   ```

2. Open in Xcode

   ```
   open Molten.xcodeproj
   ```

3. Build and Run

   - Select the "Molten" scheme
   - Choose your target device (Mac)
   - Press ⌘R to build and run
1. Install Ollama (if not already installed)

   ```
   brew install ollama
   # or download from https://ollama.ai
   ```

2. Start Ollama

   ```
   ollama serve
   ```

3. Pull a model

   ```
   ollama pull llama2
   ```

4. Configure in Molten

   - Open Settings (⌘,)
   - Go to the "Ollama" section
   - Enter the server URI (default: `http://localhost:11434`)
   - Optional: add a Bearer Token if using remote Ollama
   - Models will auto-populate
1. Install Swama (if not already installed)

   ```
   # Follow the Swama installation instructions:
   # https://github.com/Trans-N-ai/swama
   ```

2. Start Swama

   ```
   swama serve
   ```

3. Configure in Molten

   - Open Settings (⌘,)
   - Go to the "Swama" section
   - Enter the server URI (default: `http://localhost:28100`)
   - Optional: add a Bearer Token
   - Models will auto-populate
Apple Foundation Models are built-in on macOS 26.0+ and require no setup. They will automatically appear in the model list if available on your system.
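For reference, a hedged sketch of how an app can check at runtime whether the on-device model is usable, using the FoundationModels framework (API names as I understand the framework; verify against Apple's current documentation):

```swift
import FoundationModels

// Check whether the on-device Apple model is available on this machine
// (requires macOS 26+ and Apple Intelligence enabled).
switch SystemLanguageModel.default.availability {
case .available:
    print("Apple Foundation Models are ready")
case .unavailable(let reason):
    // e.g. unsupported hardware, Apple Intelligence disabled, model not downloaded
    print("Unavailable: \(reason)")
@unknown default:
    print("Unknown availability state")
}
```

An availability check like this is why the models "automatically appear" only on supported systems.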
- Select a Model: Click the model selector in the header to choose from available models
- Type a Message: Enter your prompt in the text field
- Send: Press ⌘⏎ or click Send
- View Analytics: Check the footer below each assistant message for performance metrics
- ⌘⏎: Send message
- ⌘⌥K: Toggle panel mode
- ⌘,: Open Settings
- ⌘N: New conversation
- ⌘K: Focus search (in sidebar)
Access Settings via ⌘, or the menu bar:
1. General Settings

   - Default Model: Choose your preferred model
   - System Prompt: Set default behavior for new conversations
   - Ping Interval: How often to check provider availability
     - macOS default: 15 seconds
     - iOS/iPadOS default: 30 seconds (optimized for battery life)
2. Provider Settings

   - Configure the Ollama server URI and Bearer Token
     - Default: `http://localhost:11434` (auto-detected if not configured)
     - Leave empty to disable Ollama checking
   - Configure the Swama server URI and Bearer Token
     - Default: `http://localhost:28100` (auto-detected if not configured)
     - Leave empty to disable Swama checking
   - Connection status indicators
   - Smart Polling: The app uses intelligent backoff strategies:
     - Default localhost: aggressive backoff (30 s → 5 min) when unreachable
     - User-configured URLs: moderate backoff (10 s → 60 s) when unreachable
     - Results are cached for 10 seconds to minimize network requests
3. App Settings

   - Appearance: Light/Dark/System
   - Voice: Text-to-speech voice selection
   - Initials: Your initials for chat display
   - Vibrations: Haptic feedback (iOS)
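The smart-polling behavior described in the provider settings amounts to a capped exponential backoff. A minimal sketch (illustrative, not Molten's actual implementation):

```swift
import Foundation

// Each failed reachability check doubles the retry delay until a cap is hit:
// 30 s up to 5 min for the localhost default, 10 s up to 60 s for custom URLs.
func nextRetryDelay(after current: TimeInterval, cap: TimeInterval) -> TimeInterval {
    min(current * 2, cap)
}
```

Starting from 30 s with a 300 s cap, the schedule runs 30, 60, 120, 240, 300, 300, … seconds, so an unreachable default backend quickly stops generating error spam.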
Each completed assistant message shows:
- Prompt Eval Rate: How fast the model processes input (tokens/s)
- Eval Rate: How fast the model generates output (tokens/s)
- Overall Throughput: Total tokens per second
- Total Tokens: Prompt + completion tokens
- Total Time: End-to-end response time
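These rates follow directly from token counts and per-phase durations. A sketch of the arithmetic (hypothetical types, not Molten's actual code):

```swift
import Foundation

// Hypothetical container mirroring the metrics listed above.
struct ResponseMetrics {
    let promptTokens: Int       // tokens in the input
    let completionTokens: Int   // tokens generated
    let promptSeconds: Double   // time spent evaluating the prompt
    let evalSeconds: Double     // time spent generating output

    var promptEvalRate: Double { Double(promptTokens) / promptSeconds }   // tokens/s
    var evalRate: Double { Double(completionTokens) / evalSeconds }       // tokens/s
    var totalTokens: Int { promptTokens + completionTokens }
    var totalSeconds: Double { promptSeconds + evalSeconds }
    var overallThroughput: Double { Double(totalTokens) / totalSeconds }  // tokens/s
}
```

For example, 100 prompt tokens evaluated in 0.5 s and 200 completion tokens generated in 4 s give a prompt eval rate of 200 tokens/s and an eval rate of 50 tokens/s.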
```
Molten/
├── Application/
│   └── MoltenApp.swift              # Main app entry point
├── Services/
│   ├── ModelProviderProtocol.swift  # Unified provider interface
│   ├── OllamaService.swift          # Ollama API client
│   ├── SwamaService.swift           # Swama API client
│   ├── AppleFoundationService.swift # Apple Foundation Models
│   ├── SwiftDataService.swift       # Data persistence
│   ├── SpeechService.swift          # Text-to-speech
│   └── ...
├── Stores/
│   ├── ConversationStore.swift      # Conversation management
│   ├── LanguageModelStore.swift     # Model management
│   ├── CompletionsStore.swift       # Completion templates
│   └── AppStore.swift               # Global app state
├── SwiftData/
│   └── Models/                      # SwiftData models
├── UI/
│   ├── macOS/                       # macOS-specific UI
│   ├── iOS/                         # iOS-specific UI
│   └── Shared/                      # Shared UI components
├── Models/                          # Business logic models
├── Helpers/                         # Utility functions
└── Extensions/                      # Swift extensions
```
```
# Using Xcode
open Molten.xcodeproj

# Or using xcodebuild
xcodebuild -scheme Molten -configuration Debug
```

The project uses Swift Package Manager. Key dependencies:
- Splash: Syntax highlighting for code blocks
- MarkdownUI: Markdown rendering
- KeyboardShortcuts: macOS keyboard shortcuts
- ActivityIndicatorView: Loading indicators
- OllamaKit: Ollama API client
- Swift 6 language mode with strict concurrency
- `@Observable` for state management
- Actor pattern for thread-safe operations
- Async/await for asynchronous operations
- Comprehensive inline documentation
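To illustrate the actor pattern mentioned above (a toy example, not Molten's `SwiftDataService`):

```swift
import Foundation

// An actor serializes access to its mutable state, so concurrent callers
// cannot race on the stored values — the property that makes actor-based
// persistence safe under Swift 6 strict concurrency.
actor ConversationCache {
    private var titles: [String] = []

    func add(_ title: String) {
        titles.append(title)
    }

    var count: Int {
        titles.count
    }
}

// Callers hop onto the actor with await:
//   let cache = ConversationCache()
//   await cache.add("New chat")
```

Because every access goes through `await`, the compiler enforces that no two tasks mutate `titles` simultaneously.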
```
# Run tests
xcodebuild test -scheme Molten
```

Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
Please be respectful and constructive in all interactions. We're all here to build something great together.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Molten is based on the excellent work of the Enchanted project by Augustinas Malinauskas. We are grateful for their open-source contribution that made this project possible.
- Repository: https://github.com/gluonfield/enchanted
- Author: Augustinas Malinauskas
- License: Apache License 2.0
- Swama: MLX-based inference engine - https://github.com/Trans-N-ai/swama
- Ollama: Local LLM runtime - https://ollama.ai
- MLX: Machine learning framework for Apple Silicon - https://github.com/ml-explore/mlx
- Splash: Syntax highlighting - https://github.com/JohnSundell/Splash
- MarkdownUI: Markdown rendering - https://github.com/gonzalezreal/MarkdownUI
- Check Provider Status: Ensure the provider is running and reachable
- Verify Settings: Check server URIs in Settings
- Leave URI fields empty to disable checking for that provider
- Default localhost URLs are auto-detected if not configured
- Check Logs: Look for connection errors in Console.app
- Restart Providers: Try restarting Ollama/Swama servers
- Polling Behavior: The app uses smart backoff - if a provider is unreachable, it will check less frequently to reduce error spam
- Apple Silicon Required: Ensure you're using an Apple Silicon Mac
- Check System Resources: Monitor memory and CPU usage
- Model Size: Larger models require more resources
- Close Other Apps: Free up system resources
- Clean Build: Product → Clean Build Folder (⇧⌘K)
- Reset Packages: File → Packages → Reset Package Caches
- Xcode Version: Ensure Xcode 15.0+ is installed
- Swift Version: Check Swift version compatibility
- Check Asset Idiom: Ensure imagesets include `universal` entries (not mac-only)
- Target Membership: Confirm the asset catalog is included in the iOS target
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: See ARCHITECTURE.md for detailed technical documentation
- iOS/iPadOS support
- Additional model providers
- Plugin system for custom providers
- Advanced conversation management
- Export/import conversations
- Custom themes
- More keyboard shortcuts
- Accessibility improvements
Molten - Local AI. On Your Terms.
Made with ❤️ for the privacy-conscious Mac user.