This repository provides a hands-on introduction to Microsoft's Semantic Kernel, a lightweight SDK designed to orchestrate AI services and traditional code. This guide will help you set up, run, and understand the basics of Semantic Kernel through a practical example.
Semantic Kernel is an open-source SDK that integrates Large Language Models (LLMs) with conventional programming languages. At its core, the Semantic Kernel:
- Acts as a Dependency Injection container managing services and plugins for your AI application
- Provides a consistent interface to work with different AI services (OpenAI, Azure OpenAI, etc.)
- Enables you to combine AI capabilities with traditional code through plugins
- Supports function calling to let AI models interact with your code
- Prerequisites
- Project Setup
- Environment Configuration
- Understanding the Code
- Running the Application
- Learning Activities
- Troubleshooting
- .NET SDK 9.0 or later
- Azure account with access to Azure OpenAI Service
- Basic understanding of C# programming
1. Fork this repository on GitHub (click the "Fork" button at the top right)

2. Clone your forked repository:

   ```bash
   git clone https://github.com/YOUR-USERNAME/Getting-started-with-Semantic-Kernel.git
   cd Getting-started-with-Semantic-Kernel
   ```

3. Create a submission branch:

   ```bash
   git checkout -b submission
   ```

4. Install the required packages (already defined in the project file):

   ```bash
   dotnet restore
   ```

5. Create a `.env` file in the root directory by copying the example:

   ```bash
   cp example.env .env
   ```

6. Edit the `.env` file with your Azure OpenAI credentials:

   ```env
   MODEL_ID=your-model-id
   AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
   AZURE_OPENAI_API_KEY=your-api-key
   ```

   Where to get these credentials:
- Create an Azure OpenAI resource via Azure AI Foundry or Azure Portal
- Deploy a model (like gpt-4, gpt-35-turbo, etc.)
- In your resource, find your endpoint URL and API key in the "Keys and Endpoint" section
- The MODEL_ID is the name you gave to your deployed model
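Once the `.env` file is in place, the program can load these values at startup. A minimal sketch using the DotNetEnv package (the variable names match the example above; the exact error handling in the repository's `Program.cs` may differ):

```csharp
using DotNetEnv;

// Load variables from .env into the process environment
Env.Load();

string modelId = Environment.GetEnvironmentVariable("MODEL_ID")
    ?? throw new InvalidOperationException("MODEL_ID not found in environment");
string endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")
    ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT not found in environment");
string apiKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY")
    ?? throw new InvalidOperationException("AZURE_OPENAI_API_KEY not found in environment");
```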
In `Program.cs`, we create and configure a Semantic Kernel instance:

```csharp
// Create a kernel with Azure OpenAI chat completion
var builder = Kernel.CreateBuilder().AddAzureOpenAIChatCompletion(modelId, endpoint, apiKey);

// Add enterprise components
builder.Services.AddLogging(services => services.AddConsole().SetMinimumLevel(LogLevel.Trace));

// Build the kernel
Kernel kernel = builder.Build();
```

The kernel:
- Connects to AI services (Azure OpenAI in this case)
- Manages plugins and services
- Provides a consistent interface for prompt execution
- Coordinates function calling between the LLM and your code
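With the kernel built, you can already run a one-off prompt through it. A minimal sketch (assumes the `kernel` instance from the snippet above and working Azure OpenAI credentials):

```csharp
// Send a plain prompt through the configured chat completion service
var result = await kernel.InvokePromptAsync(
    "Explain in one sentence what a smart-home plugin does.");
Console.WriteLine(result);
```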
The `LightsPlugin.cs` file demonstrates how to create a custom plugin:

```csharp
[KernelFunction("get_lights")]
[Description("Gets a list of lights and their current state")]
public Task<List<LightModel>> GetLightsAsync()
{
    return Task.FromResult(lights);
}

[KernelFunction("change_state")]
[Description("Changes the state of the light")]
public Task<LightModel?> ChangeStateAsync(int id, bool isOn)
{
    // ...
}
```

Key aspects of plugins:

- Methods are decorated with `[KernelFunction]` to expose them to the kernel
- `[Description]` attributes help the LLM understand when to use each function
- Return types are serialized and can be understood by the LLM
- Plugins are registered with the kernel:

```csharp
kernel.Plugins.AddFromType<LightsPlugin>("Lights");
```
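The body of `ChangeStateAsync` is elided above; a plausible implementation might look like the following sketch. The actual repository code may differ, and the `Id`/`IsOn` properties on `LightModel` are assumptions here:

```csharp
[KernelFunction("change_state")]
[Description("Changes the state of the light")]
public Task<LightModel?> ChangeStateAsync(int id, bool isOn)
{
    // Find the light by id; return null if it does not exist
    var light = lights.FirstOrDefault(l => l.Id == id);
    if (light == null)
    {
        return Task.FromResult<LightModel?>(null);
    }

    // Update the state and return the modified model
    light.IsOn = isOn;
    return Task.FromResult<LightModel?>(light);
}
```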
The program implements a chat loop that:
- Takes user input
- Adds it to a conversation history
- Sends the conversation to the LLM through the kernel
- Retrieves and displays the AI's response
- Allows the AI to call plugin functions when needed
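The steps above can be sketched as follows. This is a simplified version, not the repository's exact `Program.cs`; it assumes the Azure OpenAI connector and the `FunctionChoiceBehavior.Auto()` setting discussed later:

```csharp
var chatService = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory();

// Allow the model to call plugin functions automatically
var settings = new AzureOpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

while (true)
{
    Console.Write("User > ");
    string? input = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(input)) break;

    // Add the user's message to the conversation history
    history.AddUserMessage(input);

    // Send the whole conversation to the LLM via the kernel
    var reply = await chatService.GetChatMessageContentAsync(history, settings, kernel);
    Console.WriteLine($"Assistant > {reply.Content}");

    // Keep the assistant's reply in the history for context
    history.AddMessage(reply.Role, reply.Content ?? string.Empty);
}
```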
Execute the following command in the project directory:

```bash
dotnet run
```

This starts an interactive chat session where you can:
- Ask about the lights (e.g., "What lights do I have?")
- Control the lights (e.g., "Turn on the table lamp")
- Ask general questions (handled by the LLM)
Example conversation:

```
User > What lights do I have?
Assistant > You have the following lights:
1. Table Lamp (currently OFF)
2. Porch light (currently OFF)
3. Chandelier (currently ON)

User > Turn on the porch light
Assistant > I've turned on the Porch light for you. It's now ON.
```
To deepen your understanding of Semantic Kernel:
1. Create a new plugin:
   - Implement a "Weather" plugin that returns mock weather data
   - Register it with the kernel and test queries about the weather

2. Experiment with prompting:
   - Try different ways of asking the assistant to control lights
   - Notice how the LLM determines when to call functions vs. responding directly

3. Explore function calling:
   - Look at the `FunctionChoiceBehavior.Auto()` setting and how it affects function calls
   - Try adding constraints to function parameters
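For the first activity, a mock Weather plugin could look like this sketch. The class and method names here are suggestions, not part of the repository:

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class WeatherPlugin
{
    [KernelFunction("get_weather")]
    [Description("Gets mock weather data for a city")]
    public Task<string> GetWeatherAsync(
        [Description("The name of the city")] string city)
    {
        // Mock data only; a real plugin would call a weather API
        return Task.FromResult($"The weather in {city} is sunny, 24°C.");
    }
}
```

Register it the same way as the lights plugin, e.g. `kernel.Plugins.AddFromType<WeatherPlugin>("Weather");`, then ask the assistant about the weather.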
After you've completed the learning activities and made your changes:
1. Commit and push your changes to your submission branch:

   ```bash
   git add .
   git commit -m "Completed learning activities"
   git push origin submission
   ```

2. Create a Pull Request in your forked repository:
   - Go to your forked repository on GitHub
   - Click on "Pull Requests" > "New Pull Request"
   - Set the base branch to your main branch (not nisalgunawardhana's main)
   - Set the compare branch to your submission branch
   - Click "Create Pull Request"
   - Add a descriptive title and description
   - Important: Do not merge this pull request

   The pull request should go from your `submission` branch to the `main` branch of your own repository.
Follow the above images for a visual guide on creating a pull request.
Tip: After creating your pull request, copy the PR link from your browser's address bar. You will need this link when creating your submission issue in the next step.
Note: The images above demonstrate how to select the correct branches and create a pull request. The repository name shown in the screenshots may differ from yours—just follow the same steps for your own repository.
3. Submit your work:
- Go to the original repository: https://github.com/nisalgunawardhana/Getting-started-with-Semantic-Kernel
- Create a new issue using the "Submission" issue template
- Fill in:
- Your full name
- Link to your pull request
- Summary of what you learned
- Submit the issue
Once you submit, your work will be reviewed. If you have completed everything correctly, you will be assigned a digital badge for completion.
Semantic Kernel is designed with modular components that enable flexible integration of AI capabilities into your applications. Here’s a deeper look at its architecture:
Connectors abstract the interaction with various AI services, allowing you to switch providers or models with minimal code changes. Supported capabilities include:
- Chat Completion: Enables conversational AI using LLMs (e.g., GPT-4, GPT-3.5).
- Text Generation: Produces text based on prompts, useful for summarization, content creation, etc.
- Embedding Generation: Converts text into vector representations for semantic search and similarity tasks.
- Multimodal Conversion: Handles text, image, and audio processing (where supported by the underlying AI service).
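For example, switching from Azure OpenAI to OpenAI is typically a one-line change at kernel construction; the rest of the application code stays the same. The method names below come from the respective connector packages:

```csharp
// Azure OpenAI connector
var builder = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);

// OpenAI connector — same kernel API, different provider:
// var builder = Kernel.CreateBuilder()
//     .AddOpenAIChatCompletion(modelId, openAiApiKey);
```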
Memory connectors integrate with vector databases to provide persistent, searchable storage for embeddings. They support:
- Storing Embeddings: Save vector representations of text, images, or other data.
- Semantic Search: Retrieve relevant information based on similarity, enabling context-aware responses.
- RAG (Retrieval Augmented Generation): Combine LLMs with external knowledge sources for more accurate and grounded answers.
Semantic Kernel organizes logic into plugins and functions:
- Plugins: Named containers grouping related functions, making them discoverable and callable by the kernel or LLM.
- Functions: Units of work that can be:
  - Native code: C# methods (e.g., `LightsPlugin`) exposed via attributes.
  - OpenAPI specs: Functions generated from API definitions for external service calls.
  - Text Search implementations: Custom search logic for semantic memory.
  - Prompt templates: Functions defined by prompt instructions, enabling dynamic AI behaviors.
Prompt templates are reusable instructions that guide LLM behavior. They can include:
- Context and Instructions: Background information and explicit guidance for the model.
- User Input: Dynamic data from users or other sources.
- Function Calls: Embedded calls to plugins/functions for hybrid AI+code workflows.
- Placeholders: Variables replaced at runtime for personalized or context-aware prompts.
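As a small illustration, here is a sketch of a prompt template with a runtime placeholder, using the kernel's prompt-function API (the prompt text and argument values are examples, not from the repository):

```csharp
// Define a reusable prompt function with a {{$input}} placeholder
var summarize = kernel.CreateFunctionFromPrompt(
    "Summarize the following text in one sentence:\n{{$input}}");

// The placeholder is filled in at invocation time
var result = await kernel.InvokeAsync(summarize, new KernelArguments
{
    ["input"] = "Semantic Kernel is an SDK that integrates LLMs with conventional code."
});
Console.WriteLine(result);
```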
Filters are extensibility points for customizing kernel behavior. They allow you to inject logic:
- Before/After Function Invocation: Pre-process inputs or post-process outputs for functions.
- Before/After Prompt Rendering: Modify prompts or results, enabling logging, validation, or transformation.
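As a sketch, a function-invocation filter that logs each plugin call might look like this; it implements the `IFunctionInvocationFilter` interface, and the registration shown in the comment is one common pattern:

```csharp
using Microsoft.SemanticKernel;

public class LoggingFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(
        FunctionInvocationContext context,
        Func<FunctionInvocationContext, Task> next)
    {
        // Runs before the function executes
        Console.WriteLine($"Invoking {context.Function.PluginName}.{context.Function.Name}");

        await next(context); // run the actual function

        // Runs after the function has produced its result
        Console.WriteLine($"Finished {context.Function.Name}");
    }
}

// Registration, e.g. before building the kernel:
// builder.Services.AddSingleton<IFunctionInvocationFilter, LoggingFilter>();
```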
These components work together to provide a robust framework for building intelligent, extensible applications with AI at their core.
- **"MODEL_ID not found in environment"**
  - Ensure your `.env` file exists and contains the correct variable names
  - Check that DotNetEnv is correctly loading the file

- **Authorization Errors**
  - Verify your API key is correct
  - Ensure your Azure subscription has access to Azure OpenAI

- **Model Not Found**
  - Confirm you've deployed the model in Azure OpenAI Service
  - Make sure the MODEL_ID matches your deployed model name

- **Build Errors**
  - Run `dotnet restore` to ensure all packages are installed
  - Check for compatibility issues between package versions
Have questions, ideas, or want to share your experience?
We welcome you to use GitHub Discussions for:
- Asking questions about setup or usage
- Sharing feedback or suggestions
- Requesting new features
- Connecting with other contributors
👉 Click the "Discussions" tab at the top of this repo to start or join a conversation!
Let's build and learn together!
Follow me on social media for more sessions, tech tips, and giveaways:
- LinkedIn — Professional updates and networking
- Twitter (X) — Insights and announcements
- Instagram — Behind-the-scenes and daily tips
- GitHub — Repositories and project updates
- YouTube — Video tutorials and sessions
Feel free to connect and stay updated!
MIT



