DSPy-Go is a native Go implementation of the DSPy framework, bringing systematic prompt engineering and automated reasoning capabilities to Go applications. Build reliable LLM applications through composable modules and workflows.
Full Documentation | API Reference | Examples
| Feature | Description |
|---|---|
| Modular Architecture | Compose simple, reusable components into complex applications |
| Multiple LLM Providers | Anthropic, OpenAI, Google Gemini, Ollama, LlamaCPP, and more |
| Advanced Modules | Predict, ChainOfThought, ReAct, RLM, Refine, Parallel |
| Intelligent Agents | ReAct patterns, ACE framework for self-improving agents |
| A2A Protocol | Multi-agent orchestration with hierarchical composition |
| Smart Tool Management | Bayesian selection, chaining, composition, MCP integration |
| Quality Optimizers | GEPA, MIPRO, SIMBA, BootstrapFewShot, COPRO |
| Structured Output | JSON structured output and XML adapters with security controls |
```bash
go get github.com/XiaoConstantine/dspy-go
```

Or build the CLI and try optimizers directly:

```bash
cd cmd/dspy-cli && go build -o dspy-cli
export GEMINI_API_KEY="your-api-key"
./dspy-cli list                          # See all optimizers
./dspy-cli try mipro --dataset gsm8k     # Test optimizer instantly
./dspy-cli view session.jsonl --stats    # View RLM session logs
```

```go
package main

import (
    "context"
    "fmt"

    "github.com/XiaoConstantine/dspy-go/pkg/core"
    "github.com/XiaoConstantine/dspy-go/pkg/llms"
    "github.com/XiaoConstantine/dspy-go/pkg/modules"
)

func main() {
    // Configure LLM
    llm, err := llms.NewGeminiLLM("", core.ModelGoogleGeminiPro)
    if err != nil {
        panic(err)
    }
    core.SetDefaultLLM(llm)

    // Create signature and module
    signature := core.NewSignature(
        []core.InputField{{Field: core.NewField("question")}},
        []core.OutputField{{Field: core.NewField("answer")}},
    )
    cot := modules.NewChainOfThought(signature)

    // Execute
    result, _ := cot.Process(context.Background(), map[string]interface{}{
        "question": "What is the capital of France?",
    })
    fmt.Println(result["answer"])
}
```

Prefer helper functions such as `core.SetDefaultLLM`, `core.SetTeacherLLM`, and `core.GetConcurrencyLevel` over mutating `core.GlobalConfig` directly.
Modules resolve their model at execution time in this order: module-local via `SetLLM`, request-local via `core.WithRuntime`, then the package default configured with `core.SetDefaultLLM`.
Define input/output contracts for modules:

```go
signature := core.NewSignature(
    []core.InputField{{Field: core.NewField("question", core.WithDescription("Question to answer"))}},
    []core.OutputField{{Field: core.NewField("answer", core.WithDescription("Detailed answer"))}},
).WithInstruction("Answer accurately and concisely.")
```

| Module | Description |
|---|---|
| Predict | Direct prediction |
| ChainOfThought | Step-by-step reasoning |
| ReAct | Reasoning + tool use |
| RLM | Large context exploration via REPL |
| Refine | Quality improvement through iteration |
| Parallel | Concurrent batch processing |
```go
// JSON structured output
cot := modules.NewChainOfThought(signature).WithStructuredOutput()

// XML adapter (alternative)
interceptors.ApplyXMLInterceptors(predict, interceptors.DefaultXMLConfig())
```

| Guide | Description |
|---|---|
| Getting Started | Installation and first program |
| Core Concepts | Signatures, Modules, Programs |
| Building Agents | ReAct, ACE framework, memory |
| A2A Protocol | Multi-agent orchestration |
| RLM Module | Large context exploration |
| XML Adapters | Structured output parsing |
| Tool Management | Smart registry, chaining, MCP |
| Optimizers | GEPA, MIPRO, SIMBA, Bootstrap |
- ace_basic - Self-improving agents with ACE
- a2a_composition - Multi-agent deep research
- agents - ReAct patterns and orchestration
- rlm - Large context exploration
- rlm_context_policy - Compare `full`, `checkpointed`, and `adaptive` replay
- rlm_subrlm_budgets - Deterministic sub-RLM direct/total budget demo
- xml_adapter - XML structured output
- parallel - Batch processing
- refine - Quality improvement
- rlm_oolong_gepa - Optimize an adaptive RLM agent, save the optimized program, restore it, and replay it
- smart_tool_registry - Intelligent tool selection
- tool_chaining - Pipeline building
- tool_composition - Composite tools
```go
// Anthropic Claude
llm, _ := llms.NewAnthropicLLM("api-key", core.ModelAnthropicSonnet)

// Google Gemini
llm, _ := llms.NewGeminiLLM("api-key", core.ModelGoogleGeminiPro)

// OpenAI
llm, _ := llms.NewOpenAI(core.ModelOpenAIGPT4, "api-key")

// Ollama (local)
llm, _ := llms.NewOllamaLLM(core.ModelOllamaLlama3_8B)

// OpenAI-compatible (LiteLLM, LocalAI, etc.)
llm, _ := llms.NewOpenAILLM(core.ModelOpenAIGPT4,
    llms.WithAPIKey("api-key"),
    llms.WithOpenAIBaseURL("http://localhost:4000"))
```

- Documentation: xiaocui.me/dspy-go
- API Reference: pkg.go.dev
- Example App: Maestro - Code review agent
DSPy-Go is released under the MIT License. See the LICENSE file for details.