InstructSharp is a high‑performance, provider‑agnostic .NET SDK that turns large‑language‑model requests into one‑line calls and structured JSON responses.
Seamlessly swap between OpenAI ChatGPT, Anthropic Claude, Google Gemini, X.AI Grok, DeepSeek, or Meta LLaMA without rewriting a single line of business logic.
TL;DR – Install the package, define a POCO, call `QueryAsync<T>()`, and get strongly‑typed results. ✅
- Key Features
- Quick Install
- Hello, World
- Provider Matrix
- Examples
- Advanced Usage
- Performance Notes
- Roadmap
- Contributing
- License
| Feature | Description |
|---|---|
| Multi‑Provider | One unified client for ChatGPT, Claude, Gemini, Grok, DeepSeek, LLaMA – more coming. |
| Strong Typing | Pass any C# POCO → receive an `LLMResponse<T>` with fully‑deserialized data. |
| Consistent API | Every client exposes `QueryAsync<T>(request)`, so swapping vendors is a one‑line change. |
| JSON Schema Enforcement | Automatic schema generation via NJsonSchema, keeping responses strict & safe. |
| Minimal Setup | Install → add API key → ship. Works in console apps, ASP.NET, Azure Functions, Blazor & more. |
| Full .NET 8 Support | Targets net8.0 but runs on .NET 6/7 via multi‑target NuGet build. |
| Tiny Footprint | Zero reflection at runtime, no heavy AI SDKs pulled in. Pure HTTP + System.Text.Json. |
| Latest Models | Support for cutting-edge models including GPT-4.5, Claude 4, Gemini 2.5, and specialized reasoning models. |
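Because every client exposes the same `QueryAsync<T>(request)` shape, switching vendors means swapping the client and request types while the surrounding logic stays untouched. A sketch of what that looks like (the `Input` property on `ClaudeRequest` and the model constants are assumed to mirror the OpenAI examples in this README):

```csharp
// Before – OpenAI
var chat = new ChatGPTClient("YOUR_OPENAI_API_KEY");
var res  = await chat.QueryAsync<QuestionAnswer>(new ChatGPTRequest
{
    Model = ChatGPTModels.GPT4o,
    Input = "What is 2 + 2?"
});

// After – Anthropic: same call shape, different client/request types
var claude = new ClaudeClient("YOUR_ANTHROPIC_API_KEY");
var res2   = await claude.QueryAsync<QuestionAnswer>(new ClaudeRequest
{
    Model = ClaudeModels.ClaudeOpus4_20250514,
    Input = "What is 2 + 2?"
});
```

Everything downstream of `QueryAsync<T>` keeps working, since both calls return an `LLMResponse<QuestionAnswer>`.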
```bash
# Package Manager
Install-Package InstructSharp

# .NET CLI
dotnet add package InstructSharp
```

Tip – add `--prerelease` to grab nightly builds from CI.
```csharp
using InstructSharp.Clients.ChatGPT;
using InstructSharp.Core;

class QuestionAnswer
{
    public string Question { get; set; }
    public string Answer { get; set; }
}

var chat = new ChatGPTClient("YOUR_OPENAI_API_KEY");

var req = new ChatGPTRequest
{
    Model = ChatGPTModels.GPT4o,
    Instructions = "Talk like a pirate.",
    Input = "What is 2 + 2?"
};

var res = await chat.QueryAsync<QuestionAnswer>(req);
Console.WriteLine($"A: {res.Result.Answer}");
```

Want raw text? Simply use `string` instead of a POCO:

```csharp
var text = await chat.QueryAsync<string>(req);
```

| Provider | Client Class | Structured JSON | Latest Models | Docs |
|---|---|---|---|---|
| OpenAI ChatGPT | `ChatGPTClient` | ✅ JSON Schema | GPT-4.5, o1, o3, o4-mini | link |
| Anthropic Claude 4 | `ClaudeClient` | ✅ Tool Calls | Claude 4 Opus, Claude 3.7 Sonnet | link |
| Google Gemini 2.5 | `GeminiClient` | ✅ `responseJsonSchema` | Gemini 2.5 Pro, Gemini 2.5 Flash | link |
| X.AI Grok 3 | `GrokClient` | ✅ JSON Schema | Grok 3, Grok 3 Beta | link |
| DeepSeek Chat | `DeepSeekClient` | ✅ JSON Object | DeepSeek V3, DeepSeek R1 | link |
| Meta LLaMA (DeepInfra) | `LLamaClient` | ✅ JSON Object | Llama 4 Maverick, Llama 4 Scout | link |
Streaming support is on the roadmap – follow the issues to vote.
InstructSharp comes with comprehensive examples demonstrating various use cases:
- **ProductReviewAnalyzer** – Analyzes customer product reviews and extracts structured insights including sentiment, positives/negatives, and recommendations.
- **SupportTicketAnalyzer** – Processes customer support tickets to categorize issues, assess urgency, and suggest resolution steps.
- **AdvancedReasoningExample** – Showcases specialized reasoning models (o1, Claude 3.7, DeepSeek R1) solving complex mathematical problems with step-by-step analysis.
- **LatestModelsShowcase** – Demonstrates the most advanced models from each provider performing comprehensive text analysis with sentiment, themes, and insights.
- Streams SSE events from ChatGPT, captures live tool calls via the new builder API, and shows how to deserialize the function arguments you need to execute.
```bash
# Run examples
cd Examples
dotnet run --project ProductReviewAnalyzer
dotnet run --project SupportTicketAnalyzer
dotnet run --project AdvancedReasoningExample
dotnet run --project LatestModelsShowcase
```

```csharp
var http = new HttpClient
{
    Timeout = TimeSpan.FromSeconds(15)
};

var chat = new ChatGPTClient(Environment.GetEnvironmentVariable("OPENAI_KEY"), http);
```

- `HttpClient` injection lets you share retry policies, logging handlers, or proxies.
- Add standard headers globally via `DefaultRequestHeaders`.
Every request type exposes vendor‑specific knobs such as `Temperature`, `TopP`, `MaxTokens`, etc. Set only what you need – the defaults are sane.
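For instance, a request might be tuned like this (a sketch – only the properties named above are assumed to exist; exact defaults and value ranges are provider‑specific):

```csharp
// Only Temperature and MaxTokens are set here;
// everything else keeps the library's defaults.
var req = new ChatGPTRequest
{
    Model = ChatGPTModels.GPT4o,
    Input = "Summarize this release note in one sentence.",
    Temperature = 0.2,  // low randomness for reproducible summaries
    MaxTokens = 256     // cap response length (and cost)
};
```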
```csharp
try
{
    var res = await chat.QueryAsync<MyType>(req);
}
catch (HttpRequestException ex) when (ex.StatusCode == HttpStatusCode.TooManyRequests)
{
    // handle 429 rate limit
}
```

`LLMResponse<T>.Usage` returns prompt, response & total tokens. Use it for cost tracking or throttling.
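Building on that `Usage` data, a per‑request cost estimate might look like the following sketch. The property names (`InputTokens`, `OutputTokens`) and the per‑token rates are illustrative assumptions, not part of the SDK – check `LLMResponse<T>.Usage` and your provider's pricing page for the real values:

```csharp
// Illustrative only – substitute the actual Usage property names and rates.
var res = await chat.QueryAsync<MyType>(req);

decimal inputRate  = 2.50m / 1_000_000m;   // assumed $ per input token
decimal outputRate = 10.00m / 1_000_000m;  // assumed $ per output token

decimal cost = res.Usage.InputTokens * inputRate
             + res.Usage.OutputTokens * outputRate;

Console.WriteLine($"Estimated request cost: ${cost:F6}");
```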
```csharp
// OpenAI's latest reasoning model
var o1Request = new ChatGPTRequest { Model = ChatGPTModels.O1Preview };

// Claude's most advanced model
var claudeRequest = new ClaudeRequest { Model = ClaudeModels.ClaudeOpus4_20250514 };

// Gemini's latest Pro model
var geminiRequest = new GeminiRequest { Model = GeminiModels.Gemini25Pro };

// DeepSeek's specialized reasoning model
var deepseekRequest = new DeepSeekRequest { Model = DeepSeekModels.DeepSeekR1_0528 };
```

- No `dynamic` – all JSON is parsed with `System.Text.Json`. Fast.
- Schema cache – generated JSON schemas are cached per‑type to avoid regeneration.
- One HTTP round‑trip – no second prompt to "format JSON"; the schema is sent in the first call.
Benchmarks live under /benchmark – PRs welcome!
- Streaming completions
- Function & tool call helpers
- Automatic retries / exponential back‑off
- Multimodal capabilities (image examination) for all providers (currently OpenAI & Anthropic only)
- DocFX site with full API reference
- Benchmarks vs raw vendor SDKs
Have a feature in mind? Open an issue or send a PR!
- Fork the repo
- `git clone` & `dotnet build` – tests should pass
- Create your branch: `git checkout -b feature/my-awesome`
- Commit & push, then open a PR
- .NET 8 SDK
- Optional: `direnv`/`dotenv` for API keys
- EditorConfig + Roslyn analyzers enforce style
