This project implements a multi-agent study assistant using the Google Agent Development Kit (ADK) and LM Studio–hosted LLMs (e.g., Qwen 2.5).
The system decomposes a user’s study request into multiple sequential stages:
- Planning & Research
- Content Generation
- Quiz Generation
- Review & Quality Assurance
Agents are orchestrated using `SequentialAgent`, with explicit context passing between agents to ensure deterministic execution that is safe under `adk web`.
    User Request
      ↓
    Planner & Research Agent
      ↓
    Content Creation Agent
      ↓
    Quiz Generation Agent
      ↓
    Reviewer Agent
      ↓
    Final Output
Each agent:
- Has a single responsibility
- Consumes outputs from previous agents
- Produces a structured artifact for the next stage
- The system does not rely on `root_input`
- All context is passed explicitly via agent outputs
- This guarantees compatibility with:
  - `adk web`
  - CLI execution
  - API-based execution
- Uses `SequentialAgent` (see the pipeline sketch after the agent descriptions below)
  - Ensures predictable ordering and reproducibility
  - Avoids race conditions common in parallel agent graphs
- Uses `LiteLlm` (sketched below) for compatibility with:
  - LM Studio
  - Local OpenAI-compatible servers
- Avoids Gemini-only assumptions in agent logic
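As a rough sketch of what that model wrapper might look like (the endpoint and model name mirror the configuration defaults shown later; the `openai/` prefix tells LiteLLM to treat the URL as an OpenAI-compatible endpoint):

```python
from google.adk.models.lite_llm import LiteLlm

# LM Studio exposes an OpenAI-compatible server; the API key only needs to be non-empty.
local_model = LiteLlm(
    model="openai/qwen2.5:7b-instruct",
    api_base="http://localhost:1234/v1",
    api_key="lm-studio",
)
```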
**Planner & Research Agent**

Purpose:
- Breaks down the study task
- Performs research using available tools
- Produces a structured study plan

Output: `research_output`
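A possible shape for this agent, assuming the common ADK pattern of wrapping a Gemini-backed search agent as an `AgentTool` (the repository's actual `GoogleSearchAgentTool` and prompts may differ; `local_model` is the `LiteLlm` wrapper from the sketch above):

```python
from google.adk.agents import LlmAgent
from google.adk.tools import google_search
from google.adk.tools.agent_tool import AgentTool

# google_search is a Gemini-only built-in tool, so it lives on a small Gemini agent
# that the locally hosted planner calls as a tool (see the limitations below).
search_agent = LlmAgent(
    name="google_search_agent",
    model="gemini-2.0-flash",
    instruction="Answer the query using Google Search and cite sources.",
    tools=[google_search],
)

planner_research_agent = LlmAgent(
    name="planner_research_agent",
    model=local_model,  # the LiteLlm wrapper sketched above
    instruction=(
        "Break the user's study request into topics, research each topic with the "
        "search tool, and return a structured study plan."
    ),
    tools=[AgentTool(agent=search_agent)],
    output_key="research_output",  # stored in session state for the next agent
)
```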
**Content Creation Agent**

Purpose:
- Converts the research plan into comprehensive study material
- Organizes content into logical sections

Input: `planner_research_agent.research_output`

Output: `content_output`
**Quiz Generation Agent**

Purpose:
- Generates assessment questions
- Covers key concepts from the study material

Input: `content_agent.content_output`

Output: `quiz_output`
**Reviewer Agent**

Purpose:
- Evaluates coverage and completeness
- Identifies gaps and suggests improvements

Inputs:
- Research plan
- Study content
- Quiz output

Output: `review_output`
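Putting the stages together, a minimal end-to-end sketch might look like the following (agent names match the state keys above; the instructions are illustrative, not the repository's actual prompts):

```python
from google.adk.agents import LlmAgent, SequentialAgent
from google.adk.models.lite_llm import LiteLlm

local_model = LiteLlm(
    model="openai/qwen2.5:7b-instruct",
    api_base="http://localhost:1234/v1",
    api_key="lm-studio",
)

# Each agent writes its artifact to session state via output_key; downstream agents
# read those artifacts back through {placeholders} in their instructions.
planner_research_agent = LlmAgent(
    name="planner_research_agent",
    model=local_model,
    instruction="Break the study request into topics and produce a structured study plan.",
    output_key="research_output",
)

content_agent = LlmAgent(
    name="content_agent",
    model=local_model,
    instruction="Write comprehensive, well-organized study material for this plan:\n{research_output}",
    output_key="content_output",
)

quiz_agent = LlmAgent(
    name="quiz_agent",
    model=local_model,
    instruction="Write assessment questions covering the key concepts in:\n{content_output}",
    output_key="quiz_output",
)

reviewer_agent = LlmAgent(
    name="reviewer_agent",
    model=local_model,
    instruction=(
        "Review the study package for coverage and completeness, and list gaps.\n"
        "Plan: {research_output}\nContent: {content_output}\nQuiz: {quiz_output}"
    ),
    output_key="review_output",
)

# adk web and adk run discover the pipeline through a module-level root_agent.
root_agent = SequentialAgent(
    name="study_assistant_pipeline",
    sub_agents=[planner_research_agent, content_agent, quiz_agent, reviewer_agent],
)
```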
Tech stack:
- Python 3.10+
- Google ADK
- LM Studio
- LiteLLM
- Qwen 2.5 (recommended)
Prerequisites:
- Python 3.10 or newer
- LM Studio running locally
- An OpenAI-compatible inference server exposed by LM Studio
Create a `.env` file or set environment variables:

    MODEL_NAME=qwen2.5:7b-instruct
    BASE_URL=http://localhost:1234/v1
    API_KEY=lm-studio

`API_KEY` can be any non-empty value for LM Studio.
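If the values live in `.env`, they can be loaded into the same `LiteLlm` wrapper, for example with `python-dotenv` (an assumption; any environment loader works):

```python
import os

from dotenv import load_dotenv  # python-dotenv; assumed to be installed
from google.adk.models.lite_llm import LiteLlm

load_dotenv()  # reads MODEL_NAME, BASE_URL and API_KEY from .env if present

local_model = LiteLlm(
    model=f"openai/{os.getenv('MODEL_NAME', 'qwen2.5:7b-instruct')}",
    api_base=os.getenv("BASE_URL", "http://localhost:1234/v1"),
    api_key=os.getenv("API_KEY", "lm-studio"),
)
```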
Run `adk web`, then open the local page:
http://127.0.0.1:8000/
Select the app and submit a study request.
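For API-based execution without the web UI, the same pipeline can be driven programmatically. A rough sketch using ADK's `InMemoryRunner` (the import path `study_assistant.agent` is a placeholder for wherever `root_agent` is defined):

```python
import asyncio

from google.genai import types
from google.adk.runners import InMemoryRunner

from study_assistant.agent import root_agent  # placeholder module path


async def main() -> None:
    runner = InMemoryRunner(agent=root_agent, app_name="study_assistant")
    session = await runner.session_service.create_session(
        app_name="study_assistant", user_id="local-user"
    )

    request = types.Content(
        role="user",
        parts=[types.Part(text="Help me study the basics of graph algorithms.")],
    )

    # Events stream out of each stage in order; print the final (reviewer) response.
    async for event in runner.run_async(
        user_id="local-user", session_id=session.id, new_message=request
    ):
        if event.is_final_response() and event.content and event.content.parts:
            print(event.content.parts[0].text)


asyncio.run(main())
```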
Known limitations:
- `GoogleSearchAgentTool` is Gemini-oriented
- Tool invocation may be inconsistent with non-Gemini models
- For production use, replace it with:
  - Local search tools
  - Tavily / SerpAPI
  - Vector database retrieval
Future improvements:
- Replace Google search with a deterministic `FunctionTool` (see the sketch after this list)
- Add JSON or schema-based outputs
- Parallelize research sub-tasks
- Introduce caching for repeated study topics
- Add formal input/output contracts per agent
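As a sketch of the first item, a plain Python function can be wrapped as a deterministic `FunctionTool` and attached to the planner in place of Google search (the function body and names here are placeholders):

```python
from google.adk.tools import FunctionTool


def search_notes(query: str) -> dict:
    """Search a local corpus of study notes for the given query.

    Placeholder implementation; swap in a real lookup such as a local index,
    Tavily, SerpAPI, or a vector-database query.
    """
    return {"query": query, "results": ["<snippet 1>", "<snippet 2>"]}


# Wrap the function so the planner can call it instead of Google Search, e.g.:
#   planner_research_agent = LlmAgent(..., tools=[search_tool], ...)
search_tool = FunctionTool(func=search_notes)
```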
Stable for development and experimentation. Production hardening is recommended for tooling and output schemas.
MIT License (or your preferred license)