This solution provides an example of how to process your own documents and then use Microsoft Foundry and the Microsoft Agent Framework to ask questions specific to those documents.
NOTE: A console app is also provided to demonstrate how to use Microsoft Foundry and the Agent Framework to ask questions of an AI Agent.
This solution consists of:
- A console app to easily run and test locally with the following commands:
  - `process` - process a file through Document Intelligence, then create embeddings and add the file to Azure AI Search. Automatically generates a summary after indexing.
  - `doc` - set the active document you want to ask questions about
  - `ask` - ask questions (routes through an intelligent Router Agent that delegates to the appropriate specialist agent)
  - `ask-all` - ask questions across all indexed documents without needing to set an active document
  - `summarize` - get a summary of the active document
  - `search-and-summarize` - sequential agent workflow that searches across all documents and then summarizes the findings (CrossDocument → Summarizer pipeline)
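At a high level, the `process` command runs an extract → embed → index → summarize flow. The sketch below only shows that order of operations; every helper method in it is a hypothetical stand-in for the real Document Intelligence, embedding, and Azure AI Search calls in this repo, not their actual APIs.

```csharp
// Hypothetical helpers (ExtractTextWithDocumentIntelligenceAsync, etc.)
// stand in for the real service calls; this only illustrates the flow.
async Task ProcessAsync(string filePath)
{
    string text = await ExtractTextWithDocumentIntelligenceAsync(filePath); // OCR/layout extraction
    foreach (string chunk in ChunkText(text))                              // split text for embedding
    {
        float[] vector = await CreateEmbeddingAsync(chunk);                // e.g. text-embedding-3-large
        await AddToAzureAISearchAsync(filePath, chunk, vector);            // index the chunk and its vector
    }
    Console.WriteLine(await SummarizeAsync(filePath));                     // summary generated after indexing
}
```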
This solution uses the Microsoft Agent Framework to orchestrate multiple specialized AI agents with true agent-to-agent communication:
- Router Agent - Analyzes user intent and delegates to the appropriate specialist agent (agent-to-agent via tool-calling)
- AskQuestions Agent - Answers questions about a specific document
- CrossDocument Agent - Searches across all indexed documents to answer questions
- Summarizer Agent - Generates concise summaries of document content
- Router → Specialist delegation (`ask` command): The Router Agent's tools invoke the actual specialist `AIAgent` instances, collecting their full responses. The Router then passes the specialist's answer back to the user.
- Sequential pipeline (`search-and-summarize` command): Uses `AgentWorkflowBuilder.BuildSequential` to chain the CrossDocument agent → Summarizer agent. The CrossDocument agent searches across all documents first, then its output flows as input to the Summarizer agent, which condenses the findings.
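The tool-calling delegation can be sketched as below. This is an illustrative shape only, not the repo's actual code: the tool name, the lambda signature, and the `RunAsync`/`Text` usage assume recent Microsoft Agent Framework and Microsoft.Extensions.AI APIs.

```csharp
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

// Sketch: wrap a specialist agent as a tool the Router Agent can call.
// The name "ask_document_question" is made up for illustration.
static AIFunction CreateAskQuestionsTool(AIAgent askQuestionsAgent) =>
    AIFunctionFactory.Create(
        async (string question) =>
        {
            // The Router's tool invokes the actual specialist agent and
            // returns its full response for the Router to relay to the user.
            AgentRunResponse response = await askQuestionsAgent.RunAsync(question);
            return response.Text;
        },
        name: "ask_document_question",
        description: "Answers a question about the currently active document.");
```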
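The sequential pipeline can be sketched with `AgentWorkflowBuilder.BuildSequential`, which the solution uses for `search-and-summarize`. Workflow execution APIs vary by framework version, so only construction is shown here.

```csharp
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Workflows;

// Sketch: chain CrossDocument -> Summarizer so the search results
// flow as input to the summarization step.
static Workflow BuildSearchAndSummarize(AIAgent crossDocumentAgent, AIAgent summarizerAgent) =>
    AgentWorkflowBuilder.BuildSequential(crossDocumentAgent, summarizerAgent);
```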
```
User types at "dq> " prompt
        │
        ▼
  ┌───────────┐
  │  Worker   │  Parses command via System.CommandLine
  └────┬──────┘
       │
  ┌────┴──────┬───────────┬──────────────────────┬──────────┐
  │           │           │                      │          │
 ask       ask-all    summarize      search-and-summarize  process
  │           │           │                      │          │
  ▼           ▼           ▼                      ▼          ▼
Router    CrossDoc   Summarizer   ┌─── Sequential ────┐  DocIntel
Agent      Agent       Agent      │  CrossDoc agent   │  → then
  │                               │     ▼ (output)    │  Summarizer
  │ (tool-calling)                │  Summarizer agent │
  │ invokes actual agents:        └───────────────────┘
  ├──→ AskQuestions Agent
  ├──→ CrossDocument Agent
  └──→ Summarizer Agent
```
- The deployment script can create a new Azure OpenAI Service for you; however, if you want to reuse an existing one, it must be in the same subscription where you are going to deploy your solution, and you will need to retrieve its `Endpoint` and a `Key`.
- The PowerShell deployment script defaults to the `gpt-5-mini` and `text-embedding-3-large` models, each with a deployment name matching the model name. If your Azure OpenAI instance uses different models or deployment names, pass those values on the PowerShell command line. Be aware that using a different GPT model may result in max-token violations with the example below.
Deployment is automated using PowerShell, the Azure CLI, and the Azure Developer CLI. These can be easily installed on a Windows machine using winget:

```
winget install --id "Microsoft.AzureCLI" --silent --accept-package-agreements --accept-source-agreements
winget install --id "Microsoft.Azd" --silent --accept-package-agreements --accept-source-agreements
```

NOTE: Since you will be deploying a new Azure OpenAI instance, be aware there are location limitations based on model. Please set your location value accordingly: Region Availability
Also, depending on your available Azure OpenAI model quota, you may get a capacity-related deployment error. If you do, you will need to modify the capacity value for the appropriate model in the `infra/azureopenai.bicep` file.
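The capacity lives in the model deployment's `sku` block. The fragment below is illustrative only; match it against the actual resource shape in `infra/azureopenai.bicep`, since sku names and values there may differ.

```bicep
// Inside the model deployment resource (illustrative; match the file's actual structure):
sku: {
  name: 'GlobalStandard'   // or 'Standard', depending on the deployment
  capacity: 50             // lower this value if you hit a quota/capacity error
}
```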
```
# Login to the Azure Developer CLI
azd auth login

# If you have access to multiple tenants, you may want to specify the tenant id
azd auth login --tenant-id "<tenant guid>"

# Provision the resources
azd up

# Follow the prompts for the parameter values...
```

If successful, this process will create:
- Storage account with two blob containers (`raw` for uploaded documents and `extracted` for processed output)
- A Microsoft Foundry resource and project, with `gpt-5-mini` and `text-embedding-3-large` deployments and a system-assigned managed identity
- Role assignment granting the Cognitive Services identity read access to the `raw` container and write access to the `extracted` container
- Azure Cognitive Search account
- Azure Document Intelligence account
- Azure Application Insights resource automatically connected to the Microsoft Foundry project for telemetry and monitoring
Along with the Azure deployment, the `azd` command will configure the `local.settings.json` file for the console app and the local Function. To run the console app:
```
dotnet run --project ./DocumentQuestionsConsole/DocumentQuestionsConsole.csproj
```

- If this is your first time running the app or the Functions, you will not have any documents processed, and you will be prompted to upload a document with the `process` command.
- Upload a document using the `process` command
- Set the current document using the `doc` command
- Start asking questions!

Try uploading your own documents and start asking questions!
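A first session might look like the following. The file name and questions are made up, and the exact argument syntax is defined by the app's System.CommandLine setup, so treat this only as an illustration of the command order:

```
dq> process ./docs/contract.pdf
dq> doc contract.pdf
dq> ask "What are the termination terms?"
dq> summarize
dq> ask-all "Which documents mention indemnification?"
dq> search-and-summarize "payment obligations"
```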







