LLMAssignment

Run backend (FastAPI)

  1. Ensure dependencies are installed:
py -m pip install -r .\requirements.txt
  2. Start the API (listens on http://localhost:8000 by default):
py -m uvicorn api:app --reload --host 0.0.0.0 --port 8000

Health check: open http://localhost:8000/health

Endpoints:

  • GET /summaries
  • GET /summary/{title}
  • POST /recommend { query }
  • POST /chat { message }
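As a quick smoke test of the two POST endpoints, the standard library is enough; the sketch below assumes the API is running at the default http://localhost:8000 and uses the request bodies listed above ({ query } and { message }). It only builds and sends plain JSON requests; adjust BASE_URL if you changed the host or port.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # default backend address from the steps above

def build_post(path, payload):
    """Builds a JSON POST request for the API without sending it."""
    return urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def call(path, payload):
    """Sends the request and decodes the JSON response."""
    with urllib.request.urlopen(build_post(path, payload)) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example (requires the backend to be running):
#   call("/recommend", {"query": "a book about friendship and magic"})
#   call("/chat", {"message": "Tell me more about that book"})
```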

Run frontend (Expo)

cd .\frontend
npm install
npm run start

Notes:

  • Android emulator uses the host at 10.0.2.2, configured in frontend/config.js.
  • Web and iOS simulator use http://localhost:8000.

Smart Librarian – a small retrieval-augmented generation (RAG) chatbot that recommends books and provides detailed summaries.

Setup

  1. Install dependencies:
    pip install -r requirements.txt
  2. Set the OPENAI_API_KEY environment variable.
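A quick way to verify step 2 before launching the bot is a fail-fast check; this is a minimal sketch (the helper name is illustrative), using only the OPENAI_API_KEY variable named above:

```python
import os

def require_api_key(env=os.environ):
    """Returns the OpenAI API key, or raises if it has not been set."""
    key = env.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set; export it before running the bot.")
    return key
```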

Usage

Run the CLI chatbot:

python smart_librarian.py

Type your interests (e.g. Vreau o carte despre prietenie și magie – "I want a book about friendship and magic") and the bot will recommend a book and show a full summary.

React Native Frontend

A minimal React Native app lives in the frontend folder with two screens:

  • Chat – interact with the chatbot.
  • History – review the last conversation stored locally.

Run it with Expo:

cd frontend
npm install
npm start
