This project implements a small HTTP API that acts as a natural language command router, directing user queries to the appropriate specialized "tool" for processing. It demonstrates a simplified AI agent architecture using FastAPI for the backend and LangGraph for intelligent tool selection.
- Natural Language Processing: Accepts user commands in plain English.
- Intelligent Tool Routing: Automatically identifies if a query requires a Math tool, a Weather tool, or a general Language Model (LLM).
- FastAPI Backend: Provides a robust and high-performance API endpoint.
- LangGraph Integration: Utilizes a stateful graph for dynamic tool selection and conversational flow.
- Structured Responses: Returns consistent JSON output with the original query, the tool used, and the result.
- Containerized Deployment: Ready for deployment using Docker.
Follow these steps to set up and run the project locally or within a Docker container.
To run this project, you will need:
- Python 3.11+ (for local development)
- Docker Desktop (for containerized deployment on Windows, macOS, or Linux)
- pip (Python package installer)
API Keys:
- Google Gemini API Key: For the Language Model (LLM) functionality, refer to Google AI Studio.
- OpenWeatherMap API Key: For the Weather Tool, refer to Open Weather Map API.
The project is organized for clarity and maintainability:
├── .env # Environment variables (API keys)
├── Dockerfile # Instructions for building the Docker image
├── requirements.txt # Python dependencies
└── source/ # All application code
├── __init__.py
├── agent.py # LangGraph agent definition and logic
├── main.py # FastAPI application and API endpoint
└── tools/ # Specialized tools
├── __init__.py
├── math_tool.py # Performs mathematical calculations
└── weather_tool.py # Fetches weather data
First, clone this repository to your local machine:
```bash
git clone https://github.com/your-username/ai-agent-router.git  # replace with your actual repo URL
cd ai-agent-router
```
Create a file named .env in the root directory of your project (the same directory as Dockerfile and requirements.txt). Populate it with your API keys:
```
GEMINI_API_KEY="YOUR_GOOGLE_GEMINI_API_KEY"
OPENWEATHERMAP_API_KEY="YOUR_OPENWEATHERMAP_API_KEY"
```
Replace "YOUR_GOOGLE_GEMINI_API_KEY" and "YOUR_OPENWEATHERMAP_API_KEY" with your actual API keys.
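As a sanity check before starting the server, a small Python helper (illustrative only, not part of the repository) can confirm that both keys are visible to the process:

```python
import os

def require_keys(*names: str) -> dict:
    """Return the requested environment variables, raising if any is unset or empty."""
    values = {name: os.getenv(name) for name in names}
    missing = [name for name, value in values.items() if not value]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return values

# Example: require_keys("GEMINI_API_KEY", "OPENWEATHERMAP_API_KEY")
```

Note that plain `os.getenv` only sees variables already exported in the shell; if the application reads the `.env` file itself (e.g., via `python-dotenv`), the keys become available once that loading step has run.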
This is the recommended way to run the application, ensuring a consistent environment.
Before building the Docker image, create the .env file with your API keys in the root folder, as described above.
The Dockerfile defines how your application is packaged into a Docker image.
To build the Docker image, navigate to the root directory of your project in your terminal and run:

```bash
docker build -t ai-agent-router .
```

- `-t ai-agent-router`: Tags the image with the name ai-agent-router.
- `.`: Specifies that the build context (the files sent to the Docker daemon) is the current directory.
Once the image is built, you can run your application in a Docker container:

```bash
docker run -d \
  -p 8000:8000 \
  --name my-ai-agent \
  ai-agent-router
```

- `-d`: Runs the container in detached mode (in the background).
- `-p 8000:8000`: Maps port 8000 on your host machine to port 8000 inside the container.
- `--name my-ai-agent`: Assigns a human-readable name to your running container.
- `ai-agent-router`: The name of the Docker image to use.
Your API will now be accessible at http://localhost:8000.
If you prefer to run the application directly on your machine without Docker for debugging purposes, follow these steps:
First, install all required Python packages from the repository's requirements.txt:

```bash
pip install -r requirements.txt
```
Navigate to your project's root directory in your terminal and run the FastAPI application:

```bash
uvicorn source.main:app --reload --host 0.0.0.0 --port 8000
```

- `--reload`: Automatically reloads the server on code changes (useful for development).
- `--host 0.0.0.0`: Makes the server accessible externally (e.g., within your local network).
- `--port 8000`: Specifies the port for the server.
Your API will be accessible at http://localhost:8000.
With the server running, you can test the API using tools like Postman or Insomnia.

Send a POST request to the `/query` endpoint:

- URL: `http://localhost:8000/query`
- Method: `POST`
- Header: `Content-Type: application/json`
- Body: raw JSON with a `query` field.
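If you prefer scripting over a GUI client, the same request can be sent with Python's standard library. The helper names below (`build_request`, `ask_agent`) are illustrative, not part of the project:

```python
import json
import urllib.request

API_URL = "http://localhost:8000/query"  # the server started in the previous step

def build_request(query: str, url: str = API_URL) -> urllib.request.Request:
    """Build the POST request described above: a JSON body with a single 'query' field."""
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

def ask_agent(query: str, url: str = API_URL) -> dict:
    """Send the query to the running server and return the parsed JSON response."""
    with urllib.request.urlopen(build_request(query, url), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(ask_agent("What is 42 * 7?"))
```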
The examples below are actual responses from the agent, unmodified apart from the setup described above.
- Math Tool:
  - Input Body:
    ```json
    { "query": "What is 42 * 7?" }
    ```
  - Output:
    ```json
    { "query": "What is 42 * 7?", "tool_used": "math", "result": "42 * 7 = 294" }
    ```
- Weather Tool:
  - Input Body:
    ```json
    { "query": "What's the weather like today in Paris?" }
    ```
  - Output:
    ```json
    { "query": "What's the weather like today in Paris?", "tool_used": "weather", "result": "It's 28.13°C and overcast clouds in Paris." }
    ```
- LLM Tool (General Question):
  - Input Body:
    ```json
    { "query": "Who is the president of France?" }
    ```
  - Output:
    ```json
    { "query": "Who is the president of France?", "tool_used": "llm", "result": "I am sorry, I cannot answer this question. I do not have access to real-time information, including current political figures." }
    ```
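The routing behaviour shown in these examples can be sketched as a toy keyword-based classifier. The real project delegates this decision to LangGraph and an LLM, so the function below is purely illustrative:

```python
import re

def route(query: str) -> str:
    """Toy stand-in for the agent's tool selection: 'math', 'weather', or 'llm'."""
    if re.search(r"\d+\s*[-+*/]\s*\d+", query):  # looks like an arithmetic expression
        return "math"
    if "weather" in query.lower():               # mentions the weather
        return "weather"
    return "llm"                                 # fall back to the general LLM
```

For instance, `route("What is 42 * 7?")` returns `"math"`, mirroring the first example above.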