This project is a Corporate Knowledge Base platform built to centralize internal knowledge sharing within a company. It goes beyond a standard CMS by integrating a Private AI Assistant, robust permission systems, content moderation, version control, and real-time interactions.
Unlike cloud-based solutions, this platform runs 100% On-Premise, ensuring that sensitive corporate data (technical documents, internal blogs) never leaves the local server, making it fully compliant with strict data privacy regulations (GDPR/KVKK).
v3.0.0 is the biggest milestone yet, transforming the application from a passive storage system into an active, intelligent assistant using Local LLMs and RAG Architecture.
- Local Intelligence: Powered by Microsoft Phi-3 (via Ollama) running locally. No API keys, no cloud costs, no data leaks.
- Hybrid Search Engine: Combines Vector Similarity (Nomic Embed Text) with Keyword Boosting. It understands both the meaning (Semantic) and specific terminology (Lexical) of your query.
- Multi-Source Indexing: The AI scans both Technical Documents and Blog Posts, finding the best answer from the entire corporate memory.
- Smart Context:
  - Auto-Embedding: Content is vectorized immediately upon creation or update.
  - Syntax Highlighting: Code blocks in AI responses are automatically formatted and colored (Prism.js).
  - Source Attribution: Every answer includes a direct link to the source document with a confidence score (e.g., "Source: EF Core Guide - 85% Match").
- Fire-and-Forget Jobs: Heavy operations like PDF Generation no longer block the user interface. They are processed in the background, preventing browser timeouts.
- Dashboard: A dedicated monitoring panel (`/hangfire`) for tracking background tasks.
- Live Updates: Users receive instant notifications for completed reports, new comments, or AI analysis results without refreshing the page.
- Snapshot System: Automatic version history for every edit.
- Time Travel & Restore: Admins can view differences between versions and revert a document to any previous state with a single click.
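To make the hybrid ranking idea concrete, here is a minimal, language-agnostic sketch (the platform itself is C#; it lists System.Numerics.Tensors for the real vector math). The `keyword_boost` weight and the naive whitespace tokenization below are illustrative assumptions, not the project's actual values:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Semantic match between a query embedding and a document embedding."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def hybrid_score(query_vec, doc_vec, query_text, doc_text, keyword_boost=0.1):
    """Blend vector similarity (semantic) with a simple per-term lexical boost."""
    semantic = cosine_similarity(query_vec, doc_vec)
    hits = sum(1 for term in query_text.lower().split()
               if term in doc_text.lower())
    return semantic + keyword_boost * hits

# A final score of e.g. 0.85 could surface in source attribution as "85% Match".
```

In this scheme, documents that match the query's meaning rank high even without exact words, while exact terminology ("EF Core", "Hangfire") lifts lexically matching documents above semantically similar ones.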
The system is built on a modular, scalable architecture designed for enterprise environments.
- Backend: ASP.NET Core 9.0 (MVC)
- Database: MS SQL Server (Entity Framework Core Code-First)
- AI & ML Stack (New):
  - Orchestrator: Microsoft.Extensions.AI
  - Inference Engine: Ollama (Local Server)
  - LLM Model: Microsoft Phi-3 Mini (3.8B)
  - Embedding Model: Nomic-Embed-Text-v1.5
  - Vector Operations: System.Numerics.Tensors
- Real-Time: SignalR (WebSockets)
- Background Jobs: Hangfire (SQL Storage)
- Frontend: Bootstrap 5, jQuery
- Key Libraries:
  - PuppeteerSharp: HTML-to-PDF conversion (Chromium headless).
  - Markdig: High-performance Markdown processor.
  - Prism.js: Advanced syntax highlighting for code blocks.
  - Tagify: Lightweight tagging interface.
  - EasyMDE: User-friendly Markdown editor.
  - DataTables.net: Advanced table controls for admin panels.
  - SweetAlert2: Professional, responsive popup replacements.
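As a rough illustration of how the pieces of this stack interact (the actual orchestration is done in C# via Microsoft.Extensions.AI), a RAG round trip against the local Ollama server could look like the sketch below. The prompt wording and helper names are hypothetical; only the endpoint (`/api/generate` on Ollama's default port 11434) follows Ollama's documented REST API:

```python
import json
import urllib.request

def build_rag_prompt(chunks: list[str], question: str) -> str:
    """Assemble retrieved document chunks into a grounded prompt for the LLM."""
    context = "\n---\n".join(chunks)
    return ("Answer using ONLY the context below. Cite the source document.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")

def ask_local_llm(prompt: str, model: str = "phi3") -> str:
    """Send the prompt to the local Ollama server; no data leaves the machine."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The key design point is that retrieval (vector search over local embeddings) happens before generation, so the model answers from corporate memory rather than from its training data alone.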
We are pausing feature development to focus on infrastructure and portability. The next major release aims to decouple the application from expensive enterprise hardware.
- 🐳 Containerization: Official `Dockerfile` and `docker-compose` support for "One-Click Deploy".
- 💾 Database Independence: Dropping the strict dependency on MS SQL Server. Adding support for PostgreSQL (Docker standard) and SQLite (Edge/IoT standard).
- 🍓 ARM64 Optimization: Optimizing the AI Inference stack to run efficiently on Raspberry Pi 5 (8GB) and other ARM-based edge devices.
- Goal: A fully functional, AI-powered corporate knowledge base that fits in your pocket and runs offline.
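A hypothetical compose file for the planned containerization might pair the app with Ollama and PostgreSQL. Service names, ports, image tags, and the connection string below are illustrative assumptions, not shipped configuration:

```yaml
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      - ConnectionStrings__DefaultConnection=Host=db;Database=kb;Username=kb;Password=kb
    depends_on:
      - db
      - ollama
  db:
    image: postgres:16
    environment:
      - POSTGRES_USER=kb
      - POSTGRES_PASSWORD=kb
      - POSTGRES_DB=kb
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
volumes:
  ollama:
```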
1. Prerequisites:
   - .NET 9.0 SDK
   - SQL Server (LocalDB or Express)
   - Ollama installed
2. Setup AI Models: The system is optimized for `phi3` (performance) and `nomic-embed-text` (vectorization):

   ```bash
   ollama pull phi3
   ollama pull nomic-embed-text
   ```
   > 💡 Flexible Architecture: The system is model-agnostic. While `phi3` is the default for efficiency, you can easily switch to larger models (e.g., Llama 3, Mistral, or Gemma) for enterprise-grade reasoning. Simply pull the desired model via Ollama and update the model name configuration in `Program.cs`.

3. Clone the Repository:

   ```bash
   git clone https://github.com/umitkrkmz/CorporateKnowledgeBase.git
   cd CorporateKnowledgeBase
   ```
4. Configuration (Crucial Step):
   - Open `appsettings.json` and update the `ConnectionStrings` section to match your local SQL Server instance.
   - (Optional) If using User Secrets for admin passwords, ensure you set them locally:

     ```bash
     dotnet user-secrets set "AdminSettings:DefaultPassword" "YourSecurePassword123!"
     ```

5. Run the Application: Apply database migrations and start the server:

   ```bash
   dotnet ef database update
   dotnet run
   ```
   - Default Admin User: `admin@knowledgebase.com`
   - Password: The one you set in Step 4 (or check `DbInitializer.cs` for defaults).

6. First Time Configuration (AI Indexing):
   - Log in as an Admin or Developer.
   - Navigate to the "Documents" page (`/Document/Index`).
   - Click the yellow "Re-Index AI" button (visible only to authorized roles) to generate vectors for existing content.
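Conceptually, re-indexing walks every document and blog post, splits the text into chunks, and requests one embedding per chunk (Auto-Embedding then keeps the vectors in sync on each edit). The real implementation is in C#; the sketch below is a hypothetical chunker, and the chunk size and overlap values are illustrative assumptions:

```python
def chunk_text(text: str, max_chars: int = 800, overlap: int = 100) -> list[str]:
    """Split a document into overlapping chunks sized for the embedding model."""
    if max_chars <= overlap:
        raise ValueError("max_chars must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap  # overlap keeps context across boundaries
    return chunks

# Each chunk would then be embedded (e.g., via nomic-embed-text) and the
# vector stored alongside the document, ready for hybrid search to rank.
```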
Copyright (c) 2025 umitkrkmz. Licensed under the MIT License.



