LangGraph Customer Support Agent

A production-ready multi-agent customer support system built with LangGraph, demonstrating advanced patterns that go beyond the SQL Agent project.

Graph Architecture


START
  │
[query_planner]     ← structured output: category, plan, priority, requires_escalation
  │
[supervisor]        ← structured output: routes to specialist or respond (loops back)
  ├── [sql_agent]        ← lookup orders, customers, products (SQLite)
  ├── [faq_agent]        ← search FAQ/policy knowledge base (JSON)
  └── [escalation_agent] ← create refunds, log complaints (write operations)
       │
  (supervisor → human_review when escalation needs approval)
       │
[human_review]      ← interrupt()/Command(resume=...) for refund approval
  │
[response_writer]   ← final customer-facing response
  │
END
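The supervisor's fan-out in the diagram above can be sketched as a plain routing function of the kind LangGraph conditional edges use. This is an illustrative stand-in (the function and field names are assumptions, not the repo's actual identifiers in `agent/nodes.py`):

```python
# Hypothetical routing helper mirroring the diagram's edges.
# `state["route"]` stands in for the supervisor's structured decision.
def route_from_supervisor(state: dict) -> str:
    """Map the supervisor's decision to the next node name."""
    decision = state.get("route", "respond")
    specialists = {"sql_agent", "faq_agent", "escalation_agent", "human_review"}
    # "respond" (or anything unknown) falls through to the final writer node.
    return decision if decision in specialists else "response_writer"
```

A function like this would be registered via `add_conditional_edges` so the supervisor can loop back through specialists until it decides to respond.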

New Concepts Demonstrated (vs SQL Agent)

| Concept | Implementation |
| --- | --- |
| AsyncSqliteSaver | `async with AsyncSqliteSaver.from_conn_string(...)` in `main.py`; checkpoints survive restarts |
| Query planner | Dedicated `query_planner` node with structured output before the supervisor runs |
| Async throughout | All nodes are `async def`, use `await llm.ainvoke()`, and are streamed with `async for` |
| Multi-agent supervisor | One supervisor LLM with structured output routes between 3 specialist agents |
| Specialist agents | Each has its own LLM instance, bound tools, and `ToolNode` |
| Structured output | `PlannerOutput` and `RouteDecision` Pydantic models prevent routing errors |
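The "async throughout" pattern in the table can be sketched without any LLM dependency. The fake client below is a hypothetical stand-in for `await llm.ainvoke(...)`; the node shape (async function taking and returning state) matches the pattern described above:

```python
import asyncio

async def fake_ainvoke(prompt: str) -> str:
    """Stand-in for an async LLM call (assumption, not the real client)."""
    await asyncio.sleep(0)  # yield control, as a real network call would
    return f"echo: {prompt}"

async def planner_node(state: dict) -> dict:
    """Async node pattern: await the model, return updated state."""
    reply = await fake_ainvoke(state["query"])
    return {**state, "plan": reply}

result = asyncio.run(planner_node({"query": "Where is my order?"}))
```

Every node in the graph follows this shape, which is what lets the whole graph be consumed with `async for ... in graph.astream(...)`.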

Project Structure

LangGraph-Customer-Support-Agent/
├── .env / .env.example        # API keys
├── .gitignore
├── README.md
├── requirements.txt
├── setup.py                   # DB setup + graph_diagram.png generation
├── main.py                    # Async CLI: asyncio.run(main()), astream, human review
├── data/
│   └── faq.json               # 18 FAQ entries across 6 categories
└── agent/
    ├── state.py               # SupportState TypedDict
    ├── tools/
    │   ├── sql_tools.py       # lookup_order, lookup_customer, check_stock
    │   ├── faq_tools.py       # search_faq
    │   └── escalation_tools.py # create_refund, log_complaint
    ├── nodes.py               # All async node functions + routing logic
    └── graph.py               # build_graph(checkpointer) — graph wiring
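The shared state in `agent/state.py` is a `TypedDict`. The field names below are illustrative assumptions inferred from the planner's outputs, not the repo's exact definition:

```python
from typing import Optional, TypedDict

class SupportState(TypedDict, total=False):
    """Sketch of the SupportState shape (field names are assumptions)."""
    messages: list              # conversation history
    category: str               # planner's query category, e.g. "order_lookup"
    priority: str               # "low" / "medium" / "high"
    requires_escalation: bool   # set by the query planner
    approved: Optional[bool]    # set by the human_review node

# TypedDicts are plain dicts at runtime, so partial state updates are cheap:
state: SupportState = {"category": "order_lookup", "priority": "medium"}
```

`total=False` lets nodes return partial updates that LangGraph merges into the running state.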

Setup

1. Create virtual environment

python -m venv venv
source venv/bin/activate   # Windows: venv\Scripts\activate
pip install -r requirements.txt

2. Configure environment

cp .env.example .env
# Edit .env and add your OPENAI_API_KEY

3. Initialize database and generate diagram

python setup.py

4. Run the agent

python main.py

Sample Conversations

Order lookup:

You: Where is my order ORD-0002?
→ planner: order_lookup, priority=medium
→ supervisor: sql_agent
→ sql_agent: calls lookup_order("ORD-0002")
→ supervisor: respond
→ response_writer: "Your order ORD-0002 for the Ergonomic Office Chair was shipped on..."

FAQ question:

You: What is your return policy?
→ planner: return_refund, priority=low
→ supervisor: faq_agent
→ faq_agent: calls search_faq("return policy")
→ supervisor: respond
→ response_writer: "We accept returns within 30 days..."
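The FAQ flow above hinges on `search_faq` matching entries in `data/faq.json`. A minimal keyword search over that kind of structure might look like this (the matching logic is an assumption, not the repo's implementation):

```python
import json

# Two sample entries shaped like data/faq.json (contents illustrative).
FAQ = json.loads('''[
  {"category": "returns", "question": "What is your return policy?",
   "answer": "We accept returns within 30 days."},
  {"category": "shipping", "question": "How long does shipping take?",
   "answer": "Standard shipping takes 3-5 business days."}
]''')

def search_faq(query: str) -> list[dict]:
    """Return entries whose question or answer contains any query term."""
    terms = query.lower().split()
    return [e for e in FAQ
            if any(t in (e["question"] + " " + e["answer"]).lower()
                   for t in terms)]
```

In the real agent this function is bound to the FAQ agent's LLM as a tool, so the model decides when to call it.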

Refund request (with human approval):

You: I need a refund for order ORD-0001, my headphones are broken
→ planner: return_refund, requires_escalation=True, priority=high
→ supervisor: sql_agent (lookup order details)
→ supervisor: faq_agent (check return policy)
→ supervisor: human_review
⏸ HUMAN REVIEW REQUIRED — approve? (y/n)
→ [approved] escalation_agent: calls create_refund("ORD-0001", "defective product", 149.99)
→ response_writer: "Your refund REF-XXXXXX has been created..."

Key Implementation Details

AsyncSqliteSaver Pattern

# main.py
async with AsyncSqliteSaver.from_conn_string("checkpoints.db") as checkpointer:
    graph = build_graph(checkpointer)
    async for event in graph.astream(input_state, config, stream_mode="values"):
        ...

Checkpoints are stored in `checkpoints.db`, so graph state survives process restarts; resume any conversation by reusing the same `thread_id`.
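The `thread_id` lives in the `config` passed to `astream`, following LangGraph's `configurable` convention; the key name `"customer-42"` is just an example:

```python
# Reusing the same thread_id on a later run resumes the checkpointed
# conversation from checkpoints.db.
config = {"configurable": {"thread_id": "customer-42"}}
```

Each customer conversation gets its own `thread_id`, so multiple sessions can be checkpointed side by side in the same database.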

Supervisor Structured Output

class RouteDecision(BaseModel):
    next: Literal["sql_agent", "faq_agent", "escalation_agent", "human_review", "respond"]
    reasoning: str

_supervisor_llm = ChatOpenAI(...).with_structured_output(RouteDecision)

Human Review (interrupt/resume)

# In human_review_node:
decision = interrupt({"summary": summary})  # pauses graph
approved = decision.get("approved", False)
return Command(update={"approved": approved}, goto="escalation_agent")

# In main.py (operator side):
async for event in graph.astream(Command(resume={"approved": True}), config, ...):
    ...

Database Schema

| Table | Purpose |
| --- | --- |
| `customers` | Customer profiles with loyalty tier |
| `products` | Product catalog with stock levels |
| `orders` | Order records with status tracking |
| `order_items` | Line items per order |
| `support_tickets` | Complaint/escalation audit trail |
| `refund_requests` | Refund audit trail |
