A production-level IT Support Chatbot built using Retrieval-Augmented Generation (RAG), OpenAI’s GPT-3.5, and LangChain. This project enables intelligent, context-aware question answering based on internal knowledge documents, providing fast and accurate tech support solutions.
- 🗂️ Contextual QA via RAG: Combines LLM reasoning with document retrieval
- 🔍 Semantic Search: Finds relevant IT support docs using vector similarity
- 🧑‍💻 Interactive Chat Interface: Real-time conversations powered by GPT-3.5
- 🏗️ Modular LangChain Architecture: Easily extensible with new tools or APIs
- 🛡️ Private Knowledge Base: Keeps internal data secure and queryable
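The retrieval-then-generation flow behind the first two features can be sketched in miniature. The snippet below stands in a toy bag-of-words embedding and cosine similarity for the real embedding model and vector store; the names `embed`, `retrieve`, and `build_prompt` are illustrative, not part of this project's code:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words term counts. A real deployment would
    # call an OpenAI or HuggingFace embedding model instead.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Semantic search: rank documents by similarity to the query, keep top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # RAG: prepend the retrieved context so the LLM answers from it,
    # not from its parametric memory.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "To reset your VPN password, open the self-service portal.",
    "Printers on floor 3 use the PRINT-3 queue.",
]
print(build_prompt("How do I reset my VPN password?", docs))
```

In the actual app, LangChain handles this wiring: the vector store supplies the retriever and the prompt template grounds GPT-3.5 in the retrieved chunks.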
- LLM Backend: OpenAI GPT-3.5
- Framework: LangChain
- Vector Store: FAISS / ChromaDB
- Embedding Model: OpenAI or HuggingFace Transformers
- Frontend: Streamlit (optional)
- Language: Python
- Utilities: python-dotenv, tiktoken, PyMuPDF
Install the dependencies:

```bash
pip install -r requirements.txt
```

Create a `.env` file in the root directory with the following:

```env
OPENAI_API_KEY=your_openai_key
```

Build the vector index:

```bash
python ingest.py
```

This step processes the documents in `/data` and builds the vector index.
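The heart of an ingestion step like this is splitting each document into overlapping chunks before embedding them, so that sentences cut at a chunk boundary still appear whole in a neighboring chunk. A minimal sketch (the `chunk_text` helper and its default sizes are illustrative; the real script may use LangChain's text splitters):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    # Split text into fixed-size character chunks, with each chunk
    # repeating the last `overlap` characters of the previous one.
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Each chunk would then be embedded and stored in the FAISS/Chroma index.
parts = chunk_text("a" * 1200, chunk_size=500, overlap=50)
print(len(parts))  # 3 chunks: [0:500], [450:950], [900:1200]
```

Chunk size trades recall against precision: larger chunks carry more context into the prompt, smaller chunks make similarity search more targeted.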
Launch the chatbot:

```bash
python langchain_app.py
```
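The chat loop's essential shape, reduced to a stub, is a turn function that threads conversation history through each call so follow-up questions can reference earlier answers. The names `chat_turn` and `qa_fn` are illustrative, not the project's actual API; the real app would pass a LangChain retrieval chain as `qa_fn`:

```python
def chat_turn(question: str, history: list, qa_fn) -> str:
    # Answer one question, then record the exchange so later turns
    # can be conditioned on the running conversation.
    answer = qa_fn(question, history)
    history.append((question, answer))
    return answer

# Stub QA function for illustration; in the app, GPT-3.5 answers from
# the retrieved context instead.
qa = lambda q, h: f"({len(h)} prior turns) You asked: {q}"
history = []
print(chat_turn("Why is my VPN slow?", history, qa))
print(chat_turn("Can you elaborate?", history, qa))
```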