If you're new to this repo, follow the steps below to get the multi-agent demo running.
- Python 3.10+
- An Azure AI Foundry project with a deployed `gpt-4.1` model
- Azure CLI authenticated (`az login`) or another credential supported by `DefaultAzureCredential`
```bash
git clone https://github.com/blazekids4/foundry-analytics-patterns.git
cd foundry-analytics-patterns
pip install -r requirements.txt
```

Create a `.env` file in the project root (or export the variables directly):
```bash
# Required — your Azure AI Foundry project endpoint
AZURE_AI_PROJECT_ENDPOINT=https://<your-project>.services.ai.azure.com

# Optional — enables trace export to Azure Monitor / Application Insights
APPLICATION_INSIGHTS_CONNECTION_STRING=InstrumentationKey=...
```

Then run the demo:

```bash
python foundry_multi_agent_tracing_patterns.py
```

The script will:
- Initialize OpenTelemetry tracing (local + optional Azure Monitor export).
- Create (or reuse) four agents: `router_agent`, `stats_agent`, `matchup_agent`, `general_agent`.
- Run four demo conversation turns that exercise different intent routes and tool calls.
- Write a Markdown trace file to `output/traces/` for offline review.
Tip: On the first run, agents are provisioned in your Foundry project and their IDs are cached in `.agent_config_multi.json`. Subsequent runs reuse them automatically.
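The create-or-reuse behavior can be sketched as a small JSON-backed cache. This is a hypothetical helper for illustration (`get_or_create_agent` and `create_fn` are not names from the repo); the actual script persists the IDs of all four agents in `.agent_config_multi.json`:

```python
import json
from pathlib import Path

# Hypothetical helper illustrating the create-or-reuse pattern; the real
# script manages four agents and their Foundry-assigned IDs.
CACHE_FILE = Path(".agent_config_multi.json")

def get_or_create_agent(name: str, create_fn) -> str:
    """Return a cached agent ID, calling create_fn only on the first run."""
    cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    if name not in cache:
        # create_fn would wrap the actual agent-creation call against Foundry
        cache[name] = create_fn(name)
        CACHE_FILE.write_text(json.dumps(cache, indent=2))
    return cache[name]
```

Deleting the cache file forces a fresh provisioning pass on the next run.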
```text
├── foundry_multi_agent_tracing_patterns.py   # Main multi-agent orchestration script
├── foundry_multi_agent_e2e_tracing.py        # End-to-end tracing configuration (drop-in initializer)
├── requirements.txt                          # Python dependencies
├── tracing/                                  # Reusable tracing package
├── output/                                   # Generated trace output
├── documentation/                            # Guides, standards, and reference docs
└── foundry-dashboards/                       # Dashboard screenshots & examples
```
A reusable Python package that provides the core OpenTelemetry plumbing shared across demos:
| File | Purpose |
|---|---|
| `setup.py` | `init_tracing()` — configures the TracerProvider, Azure Monitor exporter, and AIAgentsInstrumentor |
| `exporter.py` | `ConsoleSpanExporter` — writes every span to the console and an optional Markdown file for local review |
| `__init__.py` | Re-exports `init_tracing` and `ConsoleSpanExporter` for convenient imports |
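The dual console/Markdown export pattern can be approximated with a stdlib-only sketch. This is not the actual `ConsoleSpanExporter` (which implements OpenTelemetry's `SpanExporter` interface over real span objects); here a "span" is just a dict, and the class name is hypothetical:

```python
from datetime import datetime
from pathlib import Path

class MarkdownSpanWriter:
    """Simplified stand-in for the console + Markdown export pattern.

    The real exporter.py works with OpenTelemetry span objects; this
    sketch accepts dicts with "name" and "attributes" keys.
    """

    def __init__(self, out_dir: str = "output/traces"):
        Path(out_dir).mkdir(parents=True, exist_ok=True)
        stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        self.path = Path(out_dir) / f"trace_multi_agent_{stamp}.md"

    def export(self, spans):
        lines = []
        for span in spans:
            lines.append(f"## {span['name']}")
            for key, value in span.get("attributes", {}).items():
                lines.append(f"- `{key}`: {value}")
        text = "\n".join(lines)
        print(text)                      # console view
        with self.path.open("a") as f:   # append to the Markdown trace file
            f.write(text + "\n")
```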
Runtime-generated artifacts. The `output/traces/` subdirectory contains Markdown files produced by each demo run (e.g., `trace_multi_agent_20260305_205158.md`). These files give a human-readable, offline view of every span, attribute, and event in a session. This folder is auto-created on first run.
In-depth guides and enterprise reference material:
| Path | Description |
|---|---|
| `TRACING_GUIDE.md` | Step-by-step walkthrough of the tracing setup and conventions used in this repo |
| `END_TO_END_TRACING.md` | Deep dive into the end-to-end tracing initializer (`init_e2e_tracing`) |
| `enterprise-standards/DEVS_ATTRIBUTE_REFERENCE.md` | Attribute naming reference for developers building on these patterns |
| `enterprise-standards/ORG_TRACING_GOVERNANCE.md` | Organizational governance guidelines for tracing in production |
| `fyi/SDK_UPGRADE.md` | Notes on SDK version upgrades and breaking changes |
Screenshots and a README showcasing the dashboards built on top of the traces exported to Azure Monitor. Includes examples of:
- Azure AI Foundry agent thread tracing views
- Application Insights analytics queries
- Azure Monitor / Grafana dashboards for resource usage and agent framework metrics
Browse `foundry-dashboards/README.md` for the full screenshot gallery.
This application implements a Router → Worker orchestration pattern using the Azure AI Foundry Agents SDK. A central router agent classifies user intent and delegates work to specialist worker agents.
| Agent | Role | Tools |
|---|---|---|
| `router_agent` | Classifies intent into `stats_lookup`, `matchup_analysis`, or `general_question` | None |
| `stats_agent` | Retrieves team and player data | `get_team_stats`, `get_player_info` |
| `matchup_agent` | Analyzes head-to-head records between teams | `get_head_to_head` |
| `general_agent` | Answers general sports questions via plain LLM response | None |
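The routing contract implied by the table can be pictured as a simple lookup. This is an illustrative sketch (the dict and `route` function are hypothetical); in the running system the `router_agent` emits one of these labels from an LLM classification, not a dictionary hit:

```python
# Hypothetical intent-to-worker mapping mirroring the agent table above.
INTENT_TO_WORKER = {
    "stats_lookup": "stats_agent",
    "matchup_analysis": "matchup_agent",
    "general_question": "general_agent",
}

def route(intent: str) -> str:
    """Map a classified intent to its worker, falling back to general_agent."""
    return INTENT_TO_WORKER.get(intent, "general_agent")
```

The fallback to `general_agent` keeps an unexpected label from stalling a turn.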
- User submits a question.
- The router agent classifies intent (on its own isolated thread).
- The question is delegated to the matching worker agent on a shared thread.
- If the worker has tools, tool calls are executed locally and results submitted back.
- The worker's response is returned to the user.
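Step 4, executing tool calls locally and submitting the results back, might look roughly like this. The registry and helper are hypothetical names for illustration; the real demo drives this through the Azure AI Agents run loop when a run requires action:

```python
import json

# Hypothetical registry of local tool implementations; the real demo wires
# comparable functions into the worker agents' tool definitions.
TOOL_REGISTRY = {
    "get_team_stats": lambda team: {"team": team, "wins": 10, "losses": 5},
}

def execute_tool_calls(tool_calls):
    """Run each requested tool locally and collect outputs to submit back."""
    outputs = []
    for call in tool_calls:  # call: {"id": ..., "name": ..., "arguments": "<json>"}
        fn = TOOL_REGISTRY[call["name"]]
        args = json.loads(call["arguments"])
        result = fn(**args)
        outputs.append({"tool_call_id": call["id"], "output": json.dumps(result)})
    return outputs
```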
All tracing follows the OpenTelemetry GenAI semantic conventions for multi-agent systems, initialized via init_e2e_tracing before any Azure client is created.
```text
orchestration_session
├── provision_agents
│   ├── provision:router_agent
│   ├── provision:stats_agent
│   ├── provision:matchup_agent
│   └── provision:general_agent
└── conversation_turn_{N}
    ├── task_decomposition
    │   └── intent_classification
    │       └── route_intent:router_agent
    │           └── agent_execution
    ├── delegate:{worker}_agent
    │   └── agent_execution
    │       └── tool_execution_batch
    │           └── tool:{function_name}
    ├── context_tracking
    └── evaluation (event)
```
| Convention | Where Applied | Purpose |
|---|---|---|
| `execute_task` | `task_decomposition` span | Marks the task planning boundary |
| `invoke_agent` | `route_intent:*` and `delegate:*` spans | Traces agent-to-agent interaction |
| `agent_planning` | `intent_classification` span | Captures internal routing decisions |
| `agent.state.management` | `context_tracking` span | Records conversation history and turn state |
| `tool.call.arguments` / `tool.call.results` | `tool:{name}` spans | Captures tool inputs and outputs |
| `evaluation` | Event on `conversation_turn` span | Scores response quality per turn |
| `gen_ai.user.feedback` | Event on `conversation_turn` span | Records user feedback (rating + comment) |
- `session.id` — propagated to all spans via OTel Baggage for full session correlation.
- `gen_ai.agent.name` / `gen_ai.agent.id` — identifies which agent owns a span.
- `router.intent` — the classified intent from the router agent.
- `worker.latency_ms` — wall-clock delegation time per worker call.
- `tool.status` — success/failure outcome of each tool execution.
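The `session.id` propagation can be approximated with Python's `contextvars` as a stdlib stand-in for illustration (the repo itself uses OpenTelemetry Baggage, and these function names are hypothetical):

```python
import contextvars

# Stdlib stand-in for OpenTelemetry Baggage: every attribute dict built
# within a session automatically carries the shared session.id.
_session_id = contextvars.ContextVar("session.id", default="unknown")

def start_session(session_id: str) -> None:
    """Set the correlation ID once at the top of an orchestration session."""
    _session_id.set(session_id)

def span_attributes(agent_name: str, **extra) -> dict:
    """Build span attributes that include the session-wide correlation ID."""
    attrs = {"session.id": _session_id.get(), "gen_ai.agent.name": agent_name}
    attrs.update(extra)
    return attrs
```

With Baggage, the same effect crosses thread and async boundaries without any per-span bookkeeping.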
- Console / Markdown — every span is written to a local Markdown file in `output/traces/` for offline review.
- Application Insights — when `APPLICATION_INSIGHTS_CONNECTION_STRING` is set, traces are exported to Azure Monitor for production observability.