A powerful OpenCloud web app that integrates local Large Language Models (LLMs) directly into your OpenCloud instance for AI-powered assistance.
- All data stays in your browser - no cloud APIs required
- Support for Ollama, LM Studio, vLLM, and OpenAI-compatible endpoints
- Persistent conversation history stored locally in browser storage
- Real-time chat interface
- Multiple LLM configurations (with default selection)
- Conversation history tracking
- Configurable temperature, max tokens, and system prompts
- Test connection tool
- Support for multiple models
- Product description generation
- Email drafting assistance
- Document summarization
- Data analysis and insights
- General productivity automation
- OpenCloud instance
- LLM Server: One of the following:
  - Local LLM (Ollama, LM Studio, vLLM)
  - Remote OpenAI-compatible API (OpenAI, Azure OpenAI, Together.ai, custom endpoints)
  - Any OpenAI-compatible chat completions endpoint
- Install Ollama from ollama.ai
- Pull a model (e.g., Llama 3.2):
  ollama pull llama3.2
- Ollama automatically runs on http://localhost:11434
- In the app settings, use:
  - API URL: http://localhost:11434/v1/chat/completions
  - API Token: ollama
  - Model Name: llama3.2
- Download LM Studio from lmstudio.ai
- Download a model from the LM Studio interface
- Start the local server in LM Studio (it usually runs on http://localhost:1234)
- In the app settings, configure accordingly
Any server that implements the OpenAI chat completions API will work, including remote HTTPS endpoints.
Examples:
- OpenAI API:
https://api.openai.com/v1/chat/completions - Azure OpenAI:
https://your-resource.openai.azure.com/openai/deployments/your-deployment/chat/completions?api-version=2024-02-15-preview - Together.ai:
https://api.together.xyz/v1/chat/completions - Custom self-hosted endpoints:
https://your-server.com/v1/chat/completions - Any other OpenAI-compatible API
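From the browser, a request to any of these endpoints looks the same. The TypeScript sketch below illustrates the OpenAI-compatible chat completions contract; it is not the app's own src/services/api.ts, and the URL, token, and model in the usage comment are placeholders for your own values.

interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

// Minimal chat completions call against any OpenAI-compatible endpoint.
async function sendChat(
  apiUrl: string,
  apiToken: string,
  model: string,
  messages: ChatMessage[]
): Promise<string> {
  const response = await fetch(apiUrl, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiToken}`
    },
    body: JSON.stringify({ model, messages, temperature: 0.7, max_tokens: 512 })
  })
  if (!response.ok) {
    throw new Error(`LLM request failed: ${response.status} ${response.statusText}`)
  }
  const data = await response.json()
  // OpenAI-compatible servers return the reply at choices[0].message.content
  return data.choices[0].message.content
}

// Example usage against a local Ollama instance (placeholder values):
// sendChat('http://localhost:11434/v1/chat/completions', 'ollama', 'llama3.2',
//   [{ role: 'user', content: 'Hello!' }]).then(console.log)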
IMPORTANT: OpenCloud enforces Content Security Policy (CSP) which blocks browser connections to external APIs by default. You must configure CSP to allow connections to your LLM endpoints.
Create a file csp.yaml in your OpenCloud config directory with the following content:
directives:
  connect-src:
    - '''self'''
    - '*'

This configuration allows the browser to connect to any external endpoint. If you want to be more restrictive, you can specify only your LLM endpoints:
directives:
  connect-src:
    - '''self'''
    - 'http://localhost:11434' # Ollama
    - 'http://localhost:1234'  # LM Studio
    - 'https://api.openai.com' # OpenAI
    # Add other endpoints as needed

When running OpenCloud with Docker, mount the CSP configuration file and set the environment variable:
docker run --name opencloud -d \
-p 9200:9200 \
-v $HOME/opencloud/opencloud-config:/etc/opencloud \
-v $HOME/opencloud/opencloud-data:/var/lib/opencloud \
-v $HOME/opencloud/opencloud-apps:/var/lib/opencloud/web/apps \
-v $HOME/opencloud/opencloud-apps-web:/var/lib/opencloud/web/assets/apps \
-e OC_URL=https://localhost:9200 \
-e PROXY_CSP_CONFIG_FILE_LOCATION=/etc/opencloud/csp.yaml \
opencloudeu/opencloud-rolling:latest

Make sure your csp.yaml file is in the $HOME/opencloud/opencloud-config/ directory.
For non-Docker deployments, consult the OpenCloud CSP documentation for how to configure CSP in your specific setup.
- Install pnpm if you haven't already.
  Correct version: our package.json holds a packageManager field. Please make sure that you have at least the same major version of pnpm installed.
- Install dependencies:
  pnpm install
- Build the app:
  pnpm build
- The built app will be in the dist folder, ready to deploy to your OpenCloud instance.
Copy the contents of the dist folder to your OpenCloud web apps directory. See the OpenCloud app deployment documentation for more details.
Important: Make sure your LLM server is accessible from your browser before using the app!
- Click the Settings button in the sidebar
- Click "Add Configuration"
- Fill in the details (see the configuration sketch after these steps):
  - Name: A friendly name for this configuration
  - API URL: The endpoint URL
    - Local Ollama: http://localhost:11434/v1/chat/completions
    - Local LM Studio: http://localhost:1234/v1/chat/completions
    - Remote endpoint: https://your-api.com/v1/chat/completions
  - API Token: Authentication token
    - Ollama: ollama (or any value)
    - LM Studio: Any value
    - Remote APIs: Your actual API key
  - Model Name: The model identifier (e.g., llama3.2 or gpt-4)
  - Temperature: Controls randomness (0.0 = deterministic, 2.0 = very random)
  - Max Tokens: Maximum length of responses
  - System Prompt: Instructions for how the AI should behave
- Click "Test connection" to verify it works
- Set as default if desired
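For orientation, the sketch below shows one way such a configuration could be modelled and persisted to browser localStorage. The field names and the storage key are assumptions for illustration; the actual shape used in src/services/api.ts may differ.

// Hypothetical shape of one LLM configuration entry (illustrative only).
interface LLMConfig {
  id: string
  name: string          // friendly name shown in the settings list
  apiUrl: string        // e.g. http://localhost:11434/v1/chat/completions
  apiToken: string      // "ollama" for Ollama, a real key for remote APIs
  model: string         // e.g. llama3.2 or gpt-4
  temperature: number   // 0.0 = deterministic, 2.0 = very random
  maxTokens: number     // maximum response length
  systemPrompt: string  // instructions for how the AI should behave
  isDefault: boolean
}

const CONFIG_KEY = 'local-llm-configs' // hypothetical storage key

// Persist all configurations as a JSON array in localStorage.
function saveConfigs(configs: LLMConfig[]): void {
  localStorage.setItem(CONFIG_KEY, JSON.stringify(configs))
}

// Read them back, falling back to an empty list.
function loadConfigs(): LLMConfig[] {
  return JSON.parse(localStorage.getItem(CONFIG_KEY) ?? '[]')
}

Because everything lives in localStorage, clearing your browser's site data also removes saved configurations and their API tokens.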
- New Conversation: Click the "New Conversation" button in the sidebar
- Select Conversation: Click any conversation in the list to view it
- Delete Conversation: Click the × button next to a conversation
local-llm-opencloud/
├── src/
│ ├── components/
│ │ ├── ChatWindow.vue # Main chat interface
│ │ └── ConfigSettings.vue # Configuration management
│ ├── services/
│ │ └── api.ts # API service & local storage
│ ├── views/
│ │ └── Chat.vue # Main chat view with sidebar
│ └── index.ts # App entry point
├── public/
│ └── manifest.json # App manifest
├── package.json
├── vite.config.ts
└── README.md
This app is a client-side Vue 3 application that:
- Stores all data in browser localStorage (conversations, messages, configs)
- Communicates directly with LLM servers from the browser
- Supports both local and remote LLM endpoints
Flow:
Browser (OpenCloud App) → LLM Server (Ollama/LM Studio/Remote API)
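To make the flow concrete, here is a hedged sketch of how conversations might be kept entirely in localStorage. The storage key and object shapes are hypothetical and only illustrate the New/Select/Delete operations described above; the app's own service code may organise this differently.

interface StoredMessage {
  role: 'user' | 'assistant'
  content: string
}

interface Conversation {
  id: string
  title: string
  messages: StoredMessage[]
}

const CONVERSATIONS_KEY = 'local-llm-conversations' // hypothetical storage key

// Load the full conversation list from localStorage.
function loadConversations(): Conversation[] {
  return JSON.parse(localStorage.getItem(CONVERSATIONS_KEY) ?? '[]')
}

// Create or update a conversation: replace any entry with the same id.
function saveConversation(conversation: Conversation): void {
  const others = loadConversations().filter((c) => c.id !== conversation.id)
  localStorage.setItem(CONVERSATIONS_KEY, JSON.stringify([...others, conversation]))
}

// Delete a conversation by id (the × button in the sidebar).
function deleteConversation(id: string): void {
  const remaining = loadConversations().filter((c) => c.id !== id)
  localStorage.setItem(CONVERSATIONS_KEY, JSON.stringify(remaining))
}

Nothing in this sketch leaves the browser; only the chat request itself is sent to the configured LLM endpoint.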
- All conversation data is stored locally in your browser
- No data is sent to external servers (except your configured LLM endpoint)
- API tokens are stored in browser localStorage
- All communication with the LLM happens directly from your browser
Error: "Failed to fetch" or other API connection errors
Solutions:
- Make sure your LLM server is running
- Verify the API URL is correct and accessible from your browser
- For local servers (localhost), ensure they're running on the correct port:
# Check if Ollama is running
curl http://localhost:11434/v1/models

# Check if LM Studio is running
curl http://localhost:1234/v1/models
- For remote APIs, verify your API key is correct
- Check browser console for detailed error messages
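The same checks can be run from the browser. The helper below is a sketch (not the app's built-in "Test connection" logic) that queries the /v1/models route exposed by OpenAI-compatible servers such as Ollama, LM Studio, and vLLM; an error thrown here usually points at a wrong URL, a stopped server, or a CORS/CSP block.

// List the models an OpenAI-compatible server exposes (diagnostic sketch).
async function listModels(baseUrl: string, apiToken?: string): Promise<string[]> {
  const response = await fetch(`${baseUrl}/v1/models`, {
    headers: apiToken ? { Authorization: `Bearer ${apiToken}` } : {}
  })
  if (!response.ok) {
    throw new Error(`Server reachable but returned ${response.status}`)
  }
  const data = await response.json()
  // OpenAI-compatible servers return { data: [{ id: ... }, ...] }
  return data.data.map((m: { id: string }) => m.id)
}

// listModels('http://localhost:11434').then(console.log) // Ollama
// listModels('http://localhost:1234').then(console.log)  // LM Studio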
If you encounter CORS errors when connecting to local LLM servers:
For Ollama:
# Set the OLLAMA_ORIGINS environment variable before starting Ollama
# Allow all origins (recommended for local use):
export OLLAMA_ORIGINS="*"
# Then start or restart Ollama
ollama serve

On Windows (PowerShell):
$env:OLLAMA_ORIGINS="*"
ollama serve

For LM Studio:
- Open LM Studio
- Go to Settings → Server
- Enable "Enable CORS" checkbox
- Restart the server
For vLLM:
# Add the --allowed-origins flag when starting vLLM
vllm serve <model> --allowed-origins "*"

For Remote APIs: Remote HTTPS endpoints usually have CORS already configured. If you encounter CORS errors with a remote endpoint, contact the API provider to enable CORS for your OpenCloud domain.
If your OpenCloud instance has strict CSP that blocks connections to your LLM endpoints:
This app requires CSP configuration to work. See the Configure OpenCloud CSP section above for detailed instructions on how to configure your OpenCloud instance.
If you've already configured CSP and are still having issues:
- Verify CSP configuration is loaded: Check your OpenCloud logs to ensure the CSP configuration file is being read
- Restart OpenCloud: After modifying CSP configuration, restart your OpenCloud instance
- Check browser console: Look for CSP violation errors that will tell you which directive needs to be updated
- Use HTTPS endpoints: Remote HTTPS endpoints are more likely to be allowed by restrictive CSP policies
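As an additional debugging aid (a sketch, not part of the app), the browser also dispatches securitypolicyviolation events, so you can log exactly which URI and directive were blocked without digging through every console message:

// Log CSP violations as they happen (run in the devtools console or app code).
document.addEventListener('securitypolicyviolation', (event) => {
  console.warn(
    `CSP blocked ${event.blockedURI} (violated directive: ${event.violatedDirective})`
  )
})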
This project is licensed under the GPL-2.0 License - see the LICENSE file for details.
For issues and questions:
- 🐛 Report bugs
- 💡 Request features
- 📖 Review LLM server documentation (Ollama, LM Studio)
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Make your changes
- Test thoroughly in a development environment
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Submit a pull request
- Ollama team for making local LLMs accessible and easy to use
- OpenCloud team for the excellent platform and development framework
- LM Studio for providing a user-friendly local inference platform
- The open-source LLM community for advancing local AI
If you find this useful or have questions:
- ⭐ Star the repo if you find it useful!
- 🐛 Report bugs
- 💡 Request features
- 🤝 Contribute improvements via pull requests
If you like this project, support further development:
- 🧑💻 Markus Begerow
- 💾 GitHub
Privacy Notice: This app operates entirely in your browser by default. No data is sent to external servers unless you explicitly configure a remote API endpoint. All conversation data is stored locally in your browser's localStorage and never leaves your device.