Purpose: This repository implements Jobseekers, a job search conversational agent that analyzes resume information and user queries to provide personalized job recommendations and even apply to roles on the user’s behalf.
It supports multiple agent frameworks — LangChain, LangGraph, and CrewAI — and follows a modular architecture that separates concerns across API, agent orchestration, tools, and use-cases.
- Project overview
- Quickstart (run locally)
- Environment & configuration
- API endpoints (examples)
- High-level architecture (ASCII diagram)
- Component-by-component breakdown
- JobHuntings use-case: flow and templates
- Models, validators & tools
- How the agent is constructed (agent factory / selectors)
- Logging, guardrails & memory
- Tests, debugging & troubleshooting
- Deployment & production notes
- Extensions, improvements & TODOs
This project provides a modular agent platform focused on job-search assistance. The major goals:
- Accept a user prompt + schema JSON payloads.
- Validate model outputs with Pydantic models (SchemaModel, GraphDataModel, QueryConfigModel).
- Support multiple underlying agent runtimes (LangChain, LangGraph, CrewAI).
- Provide pluggable tools and memory backends.
Key strengths:
- Clear separation of concerns (API ↔ usecases ↔ agent wrapper ↔ tools ↔ LLM)
- Pydantic-based validation for deterministic outputs
- Flexible agent factory to plug different frameworks
Prerequisites
- Python 3.11+
- An OpenAI API key (or other model provider depending on your AGENT_FRAMEWORK)
Install
# create conda env (recommended)
conda create -n generic-agents python=3.11 -y
conda activate generic-agents
# install dependencies
pip install -r requirements.txt
Set environment variables (example):
export OPENAI_API_KEY="sk-..."
export AGENT_FRAMEWORK=langgraph # or langchain / crewai
export OPENAI_MODEL=gpt-4o-mini
Run the API
# option A: run directly
python main.py
# option B: uvicorn
uvicorn main:app --reload --port 8025
The app will be available at http://localhost:8025.
The project loads environment variables from resources/config.env via resources/constant.load_environs().
Important variables:
- AGENT_FRAMEWORK — one of langchain, langgraph, crewai. Determines which agent implementation is used.
- OPENAI_API_KEY — required for OpenAI-based LLM usage.
- OPENAI_MODEL — model name (e.g. gpt-4o-mini).
- MULTI_NODE_AGENT — (project-specific) toggle for multi-node behavior.
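The loader itself is small; a sketch of what resources/constant.load_environs() might look like, assuming python-dotenv is available (the actual implementation may differ):

# resources/constant.py — illustrative sketch only
import os
from dotenv import load_dotenv

def load_environs() -> None:
    # Load key/value pairs from resources/config.env into the process environment.
    load_dotenv(os.path.join(os.path.dirname(__file__), "config.env"))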
Agent config files are in config/agents/ (e.g. analytics_agent_config.json). These files define agent identity, guardrails, topics and capabilities used by the agent factory.
POST /api/agent/chat — main entrypoint for session-based agent runs.
- Request model:
AgentQuery (fields: sessionId, prompt, payload)
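A minimal sketch of the request model, assuming the Pydantic field names match those listed above; check the project's models module for the authoritative definition:

from typing import Any, Dict
from pydantic import BaseModel

class AgentQuery(BaseModel):
    # Session identifier used to cache agents per session.
    sessionId: str
    # Natural-language user prompt.
    prompt: str
    # Arbitrary structured payload (e.g. schema details and query text).
    payload: Dict[str, Any]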
Example curl request:
curl -X POST http://localhost:8025/api/agent/chat \
-H "Content-Type: application/json" \
-d '{
"sessionId": "session-123",
"prompt": "Show me monthly leads by source",
"payload": {
"details": {"table_name":"view","fields":[]},
"query": "monthly leads split by source"
}
  }'
POST /api/genai/run — directly invokes the JobHuntingsUseCase with an arbitrary dict payload for quick testing.
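For quick manual testing of /api/genai/run from Python, something like the following works (a sketch using the requests library; the payload keys mirror the curl example above and are purely illustrative):

import requests

resp = requests.post(
    "http://localhost:8025/api/genai/run",
    json={
        "details": {"table_name": "view", "fields": []},
        "query": "monthly leads split by source",
    },
    timeout=60,
)
print(resp.json())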
Response shape (standardized by utilities.response.build_response):
{
"status": true,
"message": "",
"response": {
"Data": {...},
"AIMessage": "...",
"ToolMessage": "...",
"Reasoning": "..."
}
}

+----------+   HTTP    +----------------+   create/get   +--------------------+
| Client   | --------> | FastAPI (main) | -------------> | Agent Manager /    |
+----------+           +----------------+                | BaseAgent          |
                                                          +--------------------+
                                                                    |
                                                                    | selects
                                                                    v
                              +-------------------------------------------+
                              | Agent Selector (LangChain / LangGraph /   |
                              | CrewAI) -> returns a framework-specific   |
                              | agent wrapper (LangChainAgentWrapper etc.)|
                              +-------------------------------------------+
                                                    |
                              +---------------------+---------------------+
                              |                                           |
                     LLM / Guardrails                             Tools & Memory
              (create_llm via llm_factory)               (tool_factory, memory_factory)
                              |                                           |
                              v                                           v
               +---------------------------+             +-----------------------------+
               | Use Case / Prompt Chains  | <---------> | Tool implementations        |
               +---------------------------+             +-----------------------------+
                              |
                              v
                  +----------------------+
                  | Validators (pydantic)|
                  +----------------------+
                              |
                              v
                  +----------------------+
                  | API returns JSON     |
                  +----------------------+
main.py
- FastAPI entry point; mounts the API router with app.include_router(routes, prefix='/api') and starts uvicorn.
api/routes.py
- Defines the REST endpoints /agent/chat and /genai/run.
- Uses services.agent_executor.run_agent_query for session-based calls.
services/agent_executor.py
- Thin shim that calls agents_builder.agent_manager.get_agent(factory) and agent.run(...).
agents_builder/
- agent_config_loader.py — loads JSON agent configs from config/agents/.
- agent_manager.py — creates and caches agents per session (in-memory agent_store). Wraps tools to match differing tool signatures across frameworks.
- base_agent.py — constructs the agent implementation using agents_builder.selector.select_agent.
- tool_factory.py — adapts local Python functions into framework-specific tools (LangChain Tool, CrewAI BaseTool, etc.).
- memory_factory.py — provides memory backends for each supported framework (LangChain conversation memory or CrewAI memory wrappers).
- selector/ — contains per-framework constructors: langchain_selector.py, langgraph_selector.py, crewai_selector.py. Each wraps the LLM, memory and tools into a framework-specific agent and exposes a unified run interface (see the sketch below).
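Conceptually, the selector dispatch looks something like the sketch below; the function signature and import paths are assumptions, the real constructors live in agents_builder/selector/:

import os

def select_agent(config, tools, llm, memory):
    # Hypothetical dispatch on AGENT_FRAMEWORK; wrapper names follow those documented above.
    framework = os.getenv("AGENT_FRAMEWORK", "langchain").lower()
    if framework == "langchain":
        from agents_builder.selector.langchain_selector import LangChainAgentWrapper
        return LangChainAgentWrapper(config, tools, llm, memory)
    if framework == "langgraph":
        from agents_builder.selector.langgraph_selector import LangGraphAgentWrapper
        return LangGraphAgentWrapper(config, tools, llm, memory)
    if framework == "crewai":
        from agents_builder.selector.crewai_selector import CrewAIAgentWrapper
        return CrewAIAgentWrapper(config, tools, llm, memory)
    raise ValueError(f"Unsupported AGENT_FRAMEWORK: {framework}")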
tools/
- Global tools that can be mounted onto agents. These call into the JobsHunting generator or other helpers.
resources/
- config.env — default env variables.
- Logging helpers in resources/Logging/studio_logger.py.
utilities/response.py
- Standardized response builder used across wrappers.
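A sketch of what build_response might look like, based on the standardized response shape shown earlier; the real helper in utilities/response.py may take different arguments:

from typing import Any, Dict, Optional

def build_response(status: bool = True,
                   message: str = "",
                   data: Optional[Dict[str, Any]] = None,
                   ai_message: str = "",
                   tool_message: str = "",
                   reasoning: str = "") -> Dict[str, Any]:
    # Mirrors the documented response shape: status, message, and a nested response block.
    return {
        "status": status,
        "message": message,
        "response": {
            "Data": data or {},
            "AIMessage": ai_message,
            "ToolMessage": tool_message,
            "Reasoning": reasoning,
        },
    }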
Core flow (inside JobsHuntingGenerator):
The use-case uses multiple templated prompts (in templates/) to keep each step focused and reproducible.
Pydantic models (key ones): SchemaModel, GraphDataModel, QueryConfigModel.
Validators ensure the LLM outputs strictly conform to the expected schema; they also strip markdown/code fences commonly emitted by LLMs.
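A minimal sketch of that kind of validation step, assuming Pydantic v2; the fence-stripping regex and helper names are illustrative, not the project's actual validators._validate_model:

import json
import re
from pydantic import BaseModel, ValidationError

def _strip_code_fences(text: str) -> str:
    # Remove leading/trailing markdown fences such as ```json ... ``` that LLMs often emit.
    return re.sub(r"^```[a-zA-Z]*\s*|\s*```$", "", text.strip())

def validate_output(raw: str, model_cls: type[BaseModel]) -> BaseModel:
    # Parse the cleaned LLM output and validate it against the expected Pydantic model.
    try:
        return model_cls.model_validate(json.loads(_strip_code_fences(raw)))
    except (json.JSONDecodeError, ValidationError) as exc:
        raise ValueError(f"LLM output did not match {model_cls.__name__}: {exc}")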
Tools
- fields_filter_tool — simple keyword partial-matching filter that returns candidate field definitions.
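A sketch of how such a partial-matching filter could be written; the field structure and signature are assumptions rather than the project's actual implementation:

from typing import Any, Dict, List

def fields_filter_tool(keyword: str, fields: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    # Return field definitions whose name or description contains the keyword (case-insensitive).
    keyword = keyword.lower()
    return [
        field for field in fields
        if keyword in str(field.get("name", "")).lower()
        or keyword in str(field.get("description", "")).lower()
    ]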
Flow when a request comes in:
- services.agent_executor.run_agent_query calls agents_builder.get_agent (see agent_manager.py).
- The agent manager loads config (per-agent or default), selects tools from tools.ALL_TOOLS, wraps them to framework-compatible signatures, then constructs a BaseAgent(config, tools) instance (its session caching is sketched after this list).
- BaseAgent delegates to agents_builder.selector.select_agent, which returns a framework-specific wrapper (LangChainAgentWrapper, LangGraphAgentWrapper or CrewAIAgentWrapper).
- The wrapper initializes an LLM (via llm.llm_factory.create_llm), memory (via memory_factory) and constructs agent internals (chains, tools, guardrails, system prompt).
- When agent.run(prompt, payload) is called, the wrapper executes the agent flow and returns a standardized response using utilities.response.build_response.
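The per-session caching boils down to a keyed cache; a minimal sketch, where build_agent stands in for the config loading, tool wrapping and BaseAgent construction that agent_manager.py performs:

from typing import Any, Callable, Dict

# In-memory cache of agents keyed by sessionId (the agent_store mentioned above).
agent_store: Dict[str, Any] = {}

def get_agent(session_id: str, build_agent: Callable[[], Any]) -> Any:
    # Reuse the agent already built for this session, otherwise build and cache one.
    if session_id not in agent_store:
        agent_store[session_id] = build_agent()
    return agent_store[session_id]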
- Guardrails: guardrails/sys_guards.py contains a system-level instruction applied to agents to reduce hallucinations and enforce step-by-step reasoning.
- Logging: resources/Logging/studio_logger.py returns a simple Python logging instance per use-case (see the sketch after this list).
- Memory: agents_builder/memory_factory.py adapts either LangChain conversation memories or CrewAI memory classes. The project currently ships adapters that produce conversation buffers or long-term memory where available.
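A per-use-case logger along these lines is typical; the handler and format choices below are assumptions, not the contents of studio_logger.py:

import logging

def get_logger(name: str) -> logging.Logger:
    # Return a named logger with a single stream handler, one per use-case.
    logger = logging.getLogger(name)
    if not logger.handlers:
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger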
Common issues & fixes
- OPENAI_API_KEY not found: ensure resources/config.env is filled in or set environment variables before starting.
- AGENT_FRAMEWORK mismatch: set AGENT_FRAMEWORK to one of langchain, langgraph, crewai. If a framework is missing in your environment (e.g. crewai is not installed), either install it or pick another framework.
- Import errors (langchain/langgraph version mismatches): ensure the versions in requirements.txt are installed. LangChain/LangGraph have breaking changes across major versions — pin the working versions from requirements.txt.
- LLM output formatting errors: validators may raise when the LLM returns non-JSON results. Inspect templates/ to tune instructions and add a stricter output schema.
Debugging tips
- Enable debug logs via python logging configuration.
- Run uvicorn main:app --reload and call the endpoints with simple payloads first.
- Re-run a failing step manually: the use-case class exposes _generate_chain_run-type methods — wrap them to print the raw LLM output for debugging (see the sketch below).
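Two small debugging aids, using only the standard library; the method being wrapped (_generate_chain_run-style) is the project's own naming, everything else is illustrative:

import functools
import logging

# Turn on verbose logging for the whole process.
logging.basicConfig(level=logging.DEBUG)

def log_raw_output(fn):
    # Wrap a _generate_chain_run-style method to log the raw LLM output before validation.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        logging.getLogger("debug").debug("raw LLM output: %r", result)
        return result
    return wrapper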
- Secrets: never store OPENAI_API_KEY in the repo. Use secret stores (AWS Secrets Manager, HashiCorp Vault) in production.
- Scaling: the agent store is an in-memory dict. For multiple instances / horizontal scaling, move session storage to Redis or another persistent store and share agent state, or only store immutable configs in a DB (see the sketch after this list).
- Rate limiting & concurrency: LLM calls are rate-limited and can be slow. Add request queuing, timeouts and circuit-breakers around LLM calls.
- Persistent memory: for long-term memory or multi-session context, integrate a vector DB (Weaviate, Pinecone, Chroma) and adapt memory_factory to use it.
- Observability: add structured traces around LLM and tool calls (e.g. OpenTelemetry), and log LLM token usage for cost control.
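A sketch of moving session state to Redis, assuming redis-py and that only JSON-serializable per-session config is stored (live agent objects would still be rebuilt on each instance); key names and TTL are illustrative:

import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def save_session_config(session_id: str, config: dict, ttl_seconds: int = 3600) -> None:
    # Persist immutable per-session config so any instance can rebuild the agent.
    r.setex(f"session:{session_id}:config", ttl_seconds, json.dumps(config))

def load_session_config(session_id: str) -> dict | None:
    raw = r.get(f"session:{session_id}:config")
    return json.loads(raw) if raw else None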
Short-term
- Add unit tests for validators._validate_model and fields_filter_tool.
- Harden prompt templates (provide examples and negative examples to reduce hallucination).
- Add request/response tracing (e.g. request ID headers).
Medium-term
- Add a vector store backed memory implementation and persistence across restarts.
- Add a dashboard showing recent sessions, requests and token usage.
- Add more sophisticated field matching (fuzzy match, name synonyms, embedding-based similarity).
Long-term / Research
- Multi-agent coordination: compose multiple agents (ETL-agent, Query-agent, Viz-agent) into pipelines.
- Offline deterministic generation: explore using function-calling APIs or structured output specs to guarantee correct JSON output.
main.py # FastAPI entry
api/routes.py # API endpoints
agents_builder/ # Agent factory, tool adapter, memory adapter
- agent_manager.py
- base_agent.py
- tool_factory.py
- memory_factory.py
- selector/ (langchain, langgraph, crewai)
usecases/agents/ # agents use-case, templates, validators
tools/ # application-level tools mounted into agents
resources/ # config.env + logging
utilities/response.py # standard response format
config/agents/analytics_agent_config.json # agent metadata and guardrails
