Official site for OmniBot - Run LLMs natively & privately in your browser
Electron + Next.js desktop AI assistant that runs GGUF models locally using llama.cpp. Designed for offline use, portability, and zero-install deployment.
An offline, intelligent inventory management system that uses large language models (LLMs) to process natural-language queries and manage inventory data through MongoDB.
Run AI models on your desktop without an internet connection.
Hikari - a lightweight, offline coding chatbot that helps you write and debug code on low-end PCs using open models like Phi-3 and CodeLlama. Built with FastAPI, Ollama, and a minimal React web UI.
Project for the lablab.ai "Llama Impact Pan-LATAM Hackathon," built with Meta's AI technologies.
Offline-first AI assistant designed for low-connectivity environments.
Fully offline-capable AI assistant for answering operational, admin, and troubleshooting questions for on-premises S3-compatible platforms.
STORM MIC: The Neural Interface with Buffer Protocol. 100% Offline, Air-Gapped, & Private. Speak, Edit, Deploy.