HugBrowse Logo

HugBrowse

Your Local-First AI Platform — Browse, Download & Run Hugging Face Models

Status: Paused · Tauri v2 · React 19 · MIT License


⚠️ Project Paused

I've paused development on HugBrowse. While building this, I discovered LM Studio which already does pretty much everything I was trying to build here — model browsing, downloading, local inference, and more — and it does it really well. Rather than reinventing the wheel, I'd recommend checking out LM Studio if you're looking for a local AI platform. This repo will stay up for reference, but don't expect active development for the foreseeable future.



Personal Note

Honestly, this app started as me testing my tools. One day a friend asked, "is there an app that does this?" I told him, "no idea, I'll make you one." Then I found LM Studio, so... yeah. Anyway, I got bored, so here's the code; do with it whatever you want. If you feel like forking it and continuing, one idea I had: let people host servers through the app to run LLM models on them, or run training on Kaggle or Google Colab from inside the app. Basically, make it an interface between cloud LLM services and local inference, if you get what I mean. Goodbye.


HugBrowse is a local-first desktop application for discovering, downloading, and running AI models from Hugging Face. It auto-detects your hardware capabilities, manages model downloads with integrity verification, and provides a full chat interface powered by local inference with GPU acceleration (CUDA, Metal, Vulkan). When your hardware isn't enough, seamlessly offload inference to the cloud — HuggingFace Inference Endpoints, custom API servers, or your own VPS — all managed from one interface. Automatic updates keep you on the latest version without manual downloads.

Screenshots

Home (Model Browser) · Recommended Models · Resource Monitor · Settings

✨ Features

| Category | Highlights |
| --- | --- |
| Model Browser | Search & filter Hugging Face models by pipeline, library, and sort order |
| Hardware Detection | Auto-detects your tier (Entry / Mid / High / Ultra) and scores model compatibility |
| Download Manager | Pause, resume, and cancel downloads with SHA-256 integrity verification |
| Local Inference | Run models via a llama-server sidecar with GPU auto-detection (CUDA / Metal / Vulkan) |
| ☁️ Cloud Offload | Offload inference to HuggingFace Endpoints, custom API servers, or your own VPS |
| Chat Interface | Markdown rendering, streaming responses, conversation history |
| RAG Support | Attach PDF, DOCX, and text documents for retrieval-augmented generation |
| Resource Monitor | Real-time CPU, RAM, GPU, VRAM, and disk usage tracking |
| Marketplace | Community-driven marketplace for plugins, extensions, and custom models |
| Auto-Updater | Seamless in-app updates — no need to re-download installers |
| Deep Links | hugbrowse:// protocol for one-click model imports |
| System Tray | Quick actions from the system tray |
| Privacy-First | Local-first by default — cloud offload is optional and explicit |
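
The deep-link feature above can be sketched as a small parser. A minimal TypeScript example, assuming (hypothetically — the actual URL shape is not documented here) that links look like `hugbrowse://import/owner/repo`, with the host acting as the action and the path holding the Hugging Face model id:

```typescript
// Hypothetical sketch of parsing a hugbrowse:// deep link.
// The URL shape (host = action, path = model id) is an assumption,
// not HugBrowse's documented scheme.
interface ModelImportLink {
  action: string;   // e.g. "import"
  modelId: string;  // e.g. "TheBloke/Llama-2-7B-GGUF"
}

function parseDeepLink(raw: string): ModelImportLink | null {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return null; // not a URL at all
  }
  if (url.protocol !== "hugbrowse:") return null;
  // For "hugbrowse://import/owner/repo", the host is the action and
  // the path (minus its leading slash) is the Hugging Face model id.
  const modelId = url.pathname.replace(/^\//, "");
  if (!url.host || !modelId) return null;
  return { action: url.host, modelId };
}
```

Rejecting anything that is not the `hugbrowse:` scheme up front keeps arbitrary URLs from other apps from triggering imports.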

🚀 Quick Start

  1. Download the latest installer from GitHub Releases
  2. Run the installer (.msi or .exe for Windows)
  3. Launch HugBrowse and complete the onboarding wizard
  4. Browse models, download one that fits your hardware, and start chatting!

💡 HugBrowse auto-updates itself — once installed, you'll always have the latest version.

🛠️ Development Setup

Prerequisites

Clone & Install

git clone https://github.com/SufficientDaikon/hugbrowse.git
cd hugbrowse
npm install

Run in Development

npm run tauri:dev

This starts the Vite dev server with hot reload and launches the Tauri window.

Build for Production

npm run tauri:build

Installers are output to src-tauri/target/release/bundle/ (MSI and NSIS).

For detailed build instructions and troubleshooting, see BUILDING.md.

🏗️ Architecture

HugBrowse is built on Tauri v2, combining a lightweight Rust backend with a modern React frontend.

┌─────────────────────────────────────────────────────┐
│                     Tauri Shell                     │
│  ┌────────────────────┐  ┌───────────────────────┐  │
│  │  React Frontend    │  │  Rust Backend         │  │
│  │                    │  │                       │  │
│  │  • React 19        │  │  • Tauri 2.10         │  │
│  │  • TypeScript 5.9  │  │  • Sysinfo            │  │
│  │  • TanStack Query  │  │  • Reqwest            │  │
│  │  • Zustand         │  │  • SHA-256 verify     │  │
│  │  • Tailwind CSS 4  │  │  • Tokio async        │  │
│  │  • React Router 7  │  │  • Plugin system      │  │
│  └─────────┬──────────┘  └──────┬─────────┬──────┘  │
│            │    IPC Commands    │         │         │
│            └────────────────────┘         │         │
│                                           │         │
│  ┌──────────────────────┐    ┌────────────┴───────┐ │
│  │  llama-server        │    │  Cloud Offload     │ │
│  │  (local sidecar)     │    │  Proxy             │ │
│  │  CUDA/Metal/Vulkan   │    │  HF Endpoints      │ │
│  └──────────────────────┘    │  Custom URLs       │ │
│                              │  VPS / Azure / etc │ │
│                              └────────────────────┘ │
└─────────────────────────────────────────────────────┘

Frontend (src/) — React SPA bundled by Vite. Pages include the model browser, chat, resource monitor, marketplace, and settings. Client state is managed with Zustand stores; server state with TanStack Query.

Backend (src-tauri/src/) — Rust process that handles file I/O, model downloads with streaming and SHA-256 verification, system hardware detection, process management for the inference sidecar, and cloud offload proxy routing.
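
The streaming SHA-256 check described above can be illustrated with a short sketch. The real verification lives in the Rust backend; this TypeScript version (using Node's `crypto` module) just shows the idea: hash chunks as they arrive, then compare the final digest against the expected value, so the whole file never has to be buffered in memory.

```typescript
// Illustrative sketch of streaming SHA-256 verification during a download.
// Not HugBrowse's actual code — the real check is implemented in Rust.
import { createHash } from "node:crypto";

function verifySha256(chunks: Iterable<Uint8Array>, expectedHex: string): boolean {
  const hash = createHash("sha256");
  for (const chunk of chunks) {
    hash.update(chunk); // hash incrementally as chunks stream in from the network
  }
  return hash.digest("hex") === expectedHex;
}
```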

Sidecar — llama-server runs as a child process for local model inference, automatically selecting the best GPU backend available on your system.
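
The backend-selection step can be sketched as a pure function. Everything here — the type names, the detection fields, and the preference order (Metal on macOS, then CUDA, then Vulkan, then CPU) — is a hypothetical illustration, not HugBrowse's actual detection logic:

```typescript
// Hypothetical sketch of choosing a llama-server GPU backend.
// Field names and ordering are illustrative assumptions.
type Backend = "cuda" | "metal" | "vulkan" | "cpu";

interface HardwareInfo {
  platform: "windows" | "linux" | "macos";
  hasNvidiaGpu: boolean;
  hasVulkan: boolean;
}

function pickBackend(hw: HardwareInfo): Backend {
  if (hw.platform === "macos") return "metal"; // Metal is the native choice on Apple hardware
  if (hw.hasNvidiaGpu) return "cuda";          // prefer CUDA when an NVIDIA GPU is present
  if (hw.hasVulkan) return "vulkan";           // vendor-neutral GPU fallback
  return "cpu";                                // always works, just slower
}
```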

Cloud Offload — When local hardware isn't sufficient, inference can be routed through the Rust backend to remote endpoints: HuggingFace Inference Endpoints (one-click deploy), custom OpenAI-compatible API servers, or self-hosted VPS instances. All requests proxy through the backend for credential injection and CORS handling.
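
The credential-injection step of that proxy can be sketched as follows. This is a minimal TypeScript illustration, assuming an OpenAI-compatible `/v1/chat/completions` path and hypothetical field names; it shows only the routing idea, not the app's real proxy code:

```typescript
// Sketch of building a proxied request for one offload target.
// The API key stays backend-side and is injected here, so it is
// never exposed to the webview. Names and paths are assumptions.
interface OffloadTarget {
  kind: "local" | "hf-endpoint" | "custom";
  baseUrl: string;
  apiKey?: string; // stored by the backend, not the frontend
}

function buildProxyRequest(target: OffloadTarget): { url: string; headers: Record<string, string> } {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (target.kind !== "local" && target.apiKey) {
    headers["Authorization"] = `Bearer ${target.apiKey}`; // credential injection for remote targets
  }
  // Normalize a trailing slash before appending the chat endpoint path.
  return { url: `${target.baseUrl.replace(/\/$/, "")}/v1/chat/completions`, headers };
}
```

Keeping the local sidecar and every remote endpoint behind the same request shape is what lets the chat UI stay identical whether inference runs on-device or in the cloud.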

🤝 Contributing

Contributions are welcome! Here's how to get started:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/my-feature)
  3. Commit your changes (git commit -m 'Add my feature')
  4. Push to the branch (git push origin feature/my-feature)
  5. Open a Pull Request

Please make sure your code passes linting (npm run lint) and builds successfully before submitting.

📄 License

This project is licensed under the MIT License.


Made with ❤️ for the local AI community
