An intelligent multi-agent system that creates personalized technical interview preparation plans using AI. Built with LangChain, LangGraph, Anthropic Claude, and FastAPI.
Interview Prep AI uses a sophisticated four-agent pipeline to analyze your resume, research company-specific interview patterns, generate tailored practice questions, and create a customized study schedule - all powered by Claude AI.
Key Features:
- Resume & job description skill gap analysis
- Company-specific interview intelligence (LeetCode patterns, Glassdoor insights, Reddit tips)
- Personalized coding, behavioral, and system design questions
- AI-powered practice feedback with scoring
- Customized weekly study schedules with milestones
+------------------+
| Frontend UI |
| (Test Console) |
+--------+---------+
|
+--------v---------+
| FastAPI Layer |
| /api/v1/... |
+--------+---------+
|
+--------v---------+
| Orchestrator |
+--------+---------+
|
+----------------+-------------+-------------+----------------+
| | | |
+-------v------+ +-------v-------+ +--------v-------+ +-------v------+
| Pre-Processor| | Knowledge | | Question | | Planner |
| Agent | | Agent | | Generation | | Agent |
+--------------+ +---------------+ +----------------+ +--------------+
| - Resume | | - LeetCode | | - Coding Qs | | - Schedule |
| Parser | | - Glassdoor | | - Behavioral | | - Milestones |
| - JD Parser | | - Reddit | | - System Design| | - Resources |
| - Skill Gap | | - Company | | - Personalizer | | - Optimizer |
+--------------+ +---------------+ +----------------+ +--------------+
|
+--------v---------+
| Shared Memory |
+------------------+
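The sequential flow above can be sketched as a minimal asyncio pipeline in which each agent reads from and writes to a shared-memory dict. Everything below (agent functions, memory keys) is illustrative only, not the project's actual API:

```python
import asyncio

async def preprocessor(memory):
    # Identify skills required by the JD but missing from the resume.
    memory["skill_gaps"] = sorted(memory["jd_skills"] - memory["resume_skills"])

async def knowledge(memory):
    # Stand-in for company research (LeetCode/Glassdoor/Reddit scrapers).
    memory["company_intel"] = {"focus_topics": ["graphs", "system design"]}

async def question_generation(memory):
    # One practice prompt per identified skill gap.
    memory["questions"] = [f"Practice question on {s}" for s in memory["skill_gaps"]]

async def planner(memory):
    # One milestone per generated question.
    memory["plan"] = {f"week {i + 1}": q for i, q in enumerate(memory["questions"])}

async def run_pipeline(memory):
    # Orchestrator: run the four agents in order over the shared memory.
    for agent in (preprocessor, knowledge, question_generation, planner):
        await agent(memory)
    return memory

memory = {
    "resume_skills": {"python", "sql"},
    "jd_skills": {"python", "sql", "kubernetes", "graphql"},
}
result = asyncio.run(run_pipeline(memory))
```

The real orchestrator adds error handling, logging, and LLM calls per agent; the shape of the data flow is what this sketch shows.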
- Python 3.11+
- Anthropic API Key (available from the Anthropic Console)
# Clone the repository
git clone https://github.com/Mayank-glitch-cpu/Interview-Prep-AI.git
cd Interview-Prep-AI
# Create virtual environment
python -m venv venv
# Activate virtual environment
# Windows:
.\venv\Scripts\activate
# Linux/Mac:
source venv/bin/activate
# Install dependencies
pip install -r requirements.txt
# Configure environment
cp .env.example .env
# Edit .env and add your ANTHROPIC_API_KEY

# Start the development server (opens browser automatically)
python run_dev.py
# Or using uvicorn directly
uvicorn src.api.main:app --reload --host 0.0.0.0 --port 8000

Access the application:
- Frontend UI: http://localhost:8000/static/index.html
- API Documentation: http://localhost:8000/docs
- Health Check: http://localhost:8000/api/v1/health
cd docker
cp .env.example .env
# Edit .env with your ANTHROPIC_API_KEY
docker-compose up -d

- Open http://localhost:8000/static/index.html
- Enter company name, job description, and your resume
- Click "Create Plan" and wait for the pipeline to complete (~2-4 minutes)
- View your personalized preparation plan, questions, and study schedule
- Use Practice mode to answer questions and get AI feedback
Quick Test: Open browser console and run InterviewPrepAI.loadSampleData() to load sample data.
Start Interview Preparation Pipeline:
curl -X POST "http://localhost:8000/api/v1/interview/prepare" \
-H "Content-Type: application/json" \
-d '{
"company_name": "Google",
"job_description": "Your target job description...",
"candidate_resume": "Your resume text...",
"job_title": "Software Engineer",
"interview_type": "mixed",
"intensity": "moderate"
}'

Evaluate Practice Answers:
curl -X POST "http://localhost:8000/api/v1/interview/evaluate" \
-H "Content-Type: application/json" \
-d '{
"question": "Implement a function to reverse a linked list",
"question_type": "coding",
"user_answer": "Your solution here...",
"time_taken_minutes": 15
}'

InterviewPrep AI/
├── src/
│ ├── agents/ # Multi-agent pipeline
│ │ ├── preprocessor/ # Resume & JD parsing agent
│ │ │ ├── agent.py
│ │ │ └── tools/
│ │ │ ├── resume_parser.py
│ │ │ ├── jd_parser.py
│ │ │ └── skill_gap_analyzer.py
│ │ ├── knowledge/ # Company research agent
│ │ │ ├── agent.py
│ │ │ ├── tools/
│ │ │ │ ├── leetcode_scraper.py
│ │ │ │ ├── glassdoor_scraper.py
│ │ │ │ ├── reddit_scraper.py
│ │ │ │ └── company_researcher.py
│ │ │ └── mock_data/ # Mock data for development
│ │ ├── question_generation/ # Question generation agent
│ │ │ ├── agent.py
│ │ │ └── tools/
│ │ │ ├── coding_question_generator.py
│ │ │ ├── behavioral_question_generator.py
│ │ │ ├── system_design_generator.py
│ │ │ └── question_personalizer.py
│ │ └── planner/ # Planning agent
│ │ ├── agent.py
│ │ └── tools/
│ │ ├── schedule_generator.py
│ │ ├── milestone_creator.py
│ │ ├── resource_recommender.py
│ │ └── plan_optimizer.py
│ ├── api/
│ │ ├── main.py # FastAPI application
│ │ └── routers/
│ │ └── interview_prep.py # API endpoints
│ ├── core/
│ │ ├── base_agent.py # Abstract agent class
│ │ ├── orchestrator.py # Pipeline orchestration
│ │ ├── llm.py # LLM configuration
│ │ ├── memory.py # Shared memory system
│ │ └── settings.py # Configuration management
│ └── utils/
├── frontend/
│ ├── index.html # Test console UI
│ ├── app.js # Frontend logic
│ └── styles.css # Styling
├── docker/
│ ├── Dockerfile # Multi-stage Docker build
│ └── docker-compose.yml # Full stack deployment
├── tests/ # Test suite
├── logs/ # Pipeline execution logs
├── run_dev.py # Development server runner
├── requirements.txt # Python dependencies
└── .env.example # Environment template
| Variable | Description | Default |
|---|---|---|
| ANTHROPIC_API_KEY | Anthropic API key (required) | - |
| CLAUDE_MODEL | Claude model to use | claude-3-5-sonnet-20241022 |
| CLAUDE_TEMPERATURE | Response temperature | 0.7 |
| CLAUDE_MAX_TOKENS | Max tokens per request | 4096 |
| USE_MOCK_SCRAPERS | Use mock data for scrapers | true |
| DEBUG | Enable debug mode | false |
| LOG_LEVEL | Logging level | INFO |
For enhanced company research (optional):
- REDDIT_CLIENT_ID / REDDIT_CLIENT_SECRET - Reddit API credentials
- GITHUB_TOKEN - GitHub API token for tech stack analysis
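The variables above could be loaded roughly as follows. This is a hedged sketch using only the standard library; the project's actual src/core/settings.py may use pydantic or another mechanism, but the defaults here mirror the table:

```python
import os
from dataclasses import dataclass

@dataclass
class Settings:
    anthropic_api_key: str
    claude_model: str = "claude-3-5-sonnet-20241022"
    claude_temperature: float = 0.7
    claude_max_tokens: int = 4096
    use_mock_scrapers: bool = True
    debug: bool = False
    log_level: str = "INFO"

def load_settings() -> Settings:
    # ANTHROPIC_API_KEY is required: fail fast with a KeyError if missing.
    return Settings(
        anthropic_api_key=os.environ["ANTHROPIC_API_KEY"],
        claude_model=os.getenv("CLAUDE_MODEL", "claude-3-5-sonnet-20241022"),
        claude_temperature=float(os.getenv("CLAUDE_TEMPERATURE", "0.7")),
        claude_max_tokens=int(os.getenv("CLAUDE_MAX_TOKENS", "4096")),
        use_mock_scrapers=os.getenv("USE_MOCK_SCRAPERS", "true").lower() == "true",
        debug=os.getenv("DEBUG", "false").lower() == "true",
        log_level=os.getenv("LOG_LEVEL", "INFO"),
    )

os.environ.setdefault("ANTHROPIC_API_KEY", "sk-ant-example")  # demo value only
settings = load_settings()
```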
Pre-Processor Agent:
Analyzes the candidate's resume and target job description to identify:
- Skills, experience, and education from resume
- Job requirements and qualifications
- Skill gaps and match percentage
- Overall readiness score
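The gap analysis step can be sketched as a set comparison. Field names below are assumptions for illustration, not the actual resume_parser/skill_gap_analyzer output schema:

```python
def analyze_skill_gap(resume_skills, jd_requirements):
    # Normalize case so "Python" and "python" count as the same skill.
    resume = {s.lower() for s in resume_skills}
    required = {s.lower() for s in jd_requirements}
    matched = resume & required
    gaps = sorted(required - resume)
    # Match percentage: share of JD requirements covered by the resume.
    match_pct = round(100 * len(matched) / len(required), 1) if required else 100.0
    return {"matched": sorted(matched), "gaps": gaps, "match_percentage": match_pct}

report = analyze_skill_gap(
    ["Python", "SQL", "Docker"],
    ["python", "Kubernetes", "SQL", "gRPC"],
)
# 2 of 4 requirements matched -> 50.0% match, with Kubernetes and gRPC as gaps
```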
Knowledge Agent:
Gathers company-specific interview intelligence:
- LeetCode patterns and frequently asked problems
- Glassdoor interview experiences and tips
- Reddit community insights and advice
- Company culture and interview process details
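Conceptually, this agent merges several sources into one intel record. The sketch below uses invented mock data and field names, standing in for the real scraper tools (leetcode_scraper.py, glassdoor_scraper.py, reddit_scraper.py):

```python
MOCK_SOURCES = {
    "leetcode": {"top_tags": ["arrays", "dynamic programming"], "problems": 42},
    "glassdoor": {"difficulty": 3.4, "tips": ["review system design"]},
    "reddit": {"tips": ["practice behavioral answers aloud"]},
}

def gather_company_intel(company, sources=MOCK_SOURCES):
    # Aggregate each source under its own key, and pool the tips together.
    intel = {"company": company, "tips": []}
    for name, data in sources.items():
        intel[name] = data
        intel["tips"].extend(data.get("tips", []))
    return intel

intel = gather_company_intel("Google")
```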
Question Generation Agent:
Creates personalized practice questions:
- Coding Questions: DSA problems tailored to skill gaps
- Behavioral Questions: STAR-format questions for soft skills
- System Design: Architecture scenarios based on experience level
- Questions ranked by relevance and priority
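The "ranked by relevance and priority" step might score each question against the candidate's skill gaps and the company's focus topics; the weights and schema here are illustrative, not the question_personalizer's actual logic:

```python
def rank_questions(questions, skill_gaps, focus_topics):
    def score(q):
        s = 0
        if q["topic"] in skill_gaps:
            s += 2  # closing a skill gap matters most
        if q["topic"] in focus_topics:
            s += 1  # company-specific emphasis
        return s
    # Highest score first; Python's sort is stable, so ties keep input order.
    return sorted(questions, key=score, reverse=True)

questions = [
    {"id": 1, "topic": "arrays"},
    {"id": 2, "topic": "graphs"},
    {"id": 3, "topic": "system design"},
]
ranked = rank_questions(
    questions,
    skill_gaps={"graphs"},
    focus_topics={"system design", "graphs"},
)
```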
Planner Agent:
Builds a customized preparation plan:
- Weekly study schedules
- Milestone-based goals with deadlines
- Learning resource recommendations
- Optimized plan based on available time
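As a rough sketch of the scheduling step, topics can be distributed round-robin across the available weeks. The real schedule_generator and plan_optimizer are more involved (deadlines, resources, time budget); this only shows the core idea:

```python
def build_schedule(topics, weeks):
    # Spread topics evenly: topic i goes to week (i mod weeks) + 1.
    schedule = {f"week_{w + 1}": [] for w in range(weeks)}
    for i, topic in enumerate(topics):
        schedule[f"week_{i % weeks + 1}"].append(topic)
    return schedule

schedule = build_schedule(
    ["arrays", "graphs", "system design", "behavioral", "dynamic programming"],
    weeks=2,
)
```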
# Run all tests
pytest
# Run with coverage
pytest --cov=src
# Run specific test file
pytest tests/unit/test_preprocessor.py

For development without external API access:
# In .env
USE_MOCK_SCRAPERS=true

This uses pre-defined mock data in src/agents/knowledge/mock_data/.
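A toggle like this is typically a small branch at the data-access layer. The function names below are hypothetical, not the project's actual scraper interface:

```python
import os

def mock_leetcode_data(company):
    # Canned response standing in for src/agents/knowledge/mock_data/.
    return {"company": company, "source": "mock", "problems": []}

def fetch_leetcode_data(company):
    # Placeholder for the real scraper, which needs network access.
    raise RuntimeError("network access required")

def get_leetcode_data(company):
    # Default to mock data, matching the USE_MOCK_SCRAPERS=true default.
    if os.getenv("USE_MOCK_SCRAPERS", "true").lower() == "true":
        return mock_leetcode_data(company)
    return fetch_leetcode_data(company)

os.environ["USE_MOCK_SCRAPERS"] = "true"  # as set in .env above
data = get_leetcode_data("Google")
```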
Pipeline execution logs are saved to logs/ directory with detailed agent execution traces.
- LLM Framework: LangChain + LangGraph
- LLM Provider: Anthropic Claude
- Web Framework: FastAPI + Uvicorn
- Frontend: Vanilla JavaScript, HTML5, CSS3
- Async Runtime: Python asyncio
- Document Processing: PyPDF2, pdfplumber
- Web Scraping: BeautifulSoup4, PRAW
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.