The agent framework Swift has been missing. Chain LLMs, tools, and memory into production workflows with compile-time safety, crash recovery, and on-device inference.
```swift
let result = try await (fetchAgent --> reasonAgent --> writerAgent)
    .run("Summarize the WWDC session on Swift concurrency.")
```

Three agents. One line. Compiled to a DAG. Crash-resumable. Zero data races.
If Swarm saves you time, a star helps others find it.
```swift
.package(url: "https://github.com/christopherkarani/Swarm.git", from: "0.3.4")
```

Or in Xcode: File → Add Package Dependencies → paste the URL above.
```swift
import Swarm

// 1. Define a tool: the @Tool macro generates the JSON schema for you
@Tool("Looks up the current stock price")
struct PriceTool {
    @Parameter("Ticker symbol") var ticker: String
    func execute() async throws -> String { "182.50" }
}

// 2. Create an agent with tools
let agent = Agent(
    name: "Analyst",
    tools: [PriceTool()],
    instructions: "Answer finance questions using real data."
)

// 3. Run it
let result = try await agent
    .environment(\.inferenceProvider, .anthropic(key: "sk-..."))
    .run("What is AAPL trading at?")
print(result.output) // "Apple (AAPL) is currently trading at $182.50."
```

That's a working agent with tool calling. Keep reading for multi-agent orchestration, memory, guardrails, and more.
Swift 6.2 `StrictConcurrency` is enabled on every target. Non-`Sendable` types crossing actor boundaries won't build. Period.
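A library-free sketch of what that guarantee looks like in practice: a `Sendable` value type crosses a task boundary cleanly, while swapping in a mutable class with no `Sendable` conformance would be rejected at compile time. The names below are illustrative, not Swarm API.

```swift
// A value type whose members are all Sendable can conform trivially;
// it is safe to move across task boundaries.
struct Quote: Sendable {
    let ticker: String
    let price: Double
}

// Under strict concurrency, replacing Quote with a mutable class
// (no Sendable conformance) makes this function fail to compile,
// because the value would cross a concurrency boundary unsafely.
func fetchQuote() async -> Quote {
    let task = Task { Quote(ticker: "AAPL", price: 182.50) }
    return await task.value
}
```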
Every orchestration compiles to a Hive DAG with automatic checkpointing. A 10-step pipeline that crashes on step 7 resumes from step 7.
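The resume behaviour can be sketched without the Hive API: record completed step indices in a checkpoint store, and skip anything already recorded on the next run. All names below are illustrative, not Swarm types.

```swift
// Toy model of checkpointed execution: not the Hive runtime, just the idea.
struct CheckpointStore {
    private(set) var completed: Set<Int> = []
    mutating func mark(_ step: Int) { completed.insert(step) }
}

/// Runs `steps` in order, skipping any step already checkpointed.
/// Returns the indices that actually executed on this run.
func runPipeline(steps: Int, store: inout CheckpointStore) -> [Int] {
    var executed: [Int] = []
    for step in 1...steps where !store.completed.contains(step) {
        executed.append(step)
        store.mark(step)   // checkpoint after each successful step
    }
    return executed
}
```

If a 10-step run checkpoints steps 1–6 and then crashes, the next invocation executes only steps 7–10.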
Eleven composable step types in a SwiftUI-style result builder:

```swift
fetchAgent --> analyzeAgent --> writerAgent          // Sequential chain
Pipeline<String, [String]> { ... } >>> Pipeline { }  // Type-safe pipeline
DAGNode("write", agent: w).dependsOn("fetch", "ref") // Dependency graph
```

Foundation Models, Anthropic, OpenAI, Ollama, Gemini, MLX. Swap providers with one line:

```swift
agent.environment(\.inferenceProvider, .foundationModels)  // On-device, private
agent.environment(\.inferenceProvider, .anthropic(key: k)) // Cloud
```

### Multi-agent pipeline with guardrails
```swift
struct ResearchPipeline: AgentBlueprint {
    let researcher = Agent(name: "Researcher", tools: [WebSearch()])
    let writer = Agent(name: "Writer", instructions: "Write clear summaries.")

    @OrchestrationBuilder var body: some OrchestrationStep {
        Guard(.input) {
            InputGuard("no_pii") { input in
                input.contains("SSN") ? .tripwire(message: "PII detected") : .passed()
            }
        }
        researcher --> writer
    }
}

let result = try await ResearchPipeline()
    .environment(\.inferenceProvider, provider)
    .run("Latest advances in on-device ML")
```

### Parallel fan-out with merged results
```swift
let group = ParallelGroup(
    agents: [
        (name: "optimist", agent: bullAgent),
        (name: "pessimist", agent: bearAgent),
        (name: "neutral", agent: analystAgent),
    ],
    mergeStrategy: MergeStrategies.Concatenate(separator: "\n\n---\n\n"),
    maxConcurrency: 3
)

let result = try await group.run("Evaluate Apple's Q4 earnings.")
// All three perspectives, merged into one output
```

### Semantic memory: on-device SIMD, no cloud API
```swift
let memory = VectorMemory(
    embeddingProvider: myEmbedder,
    similarityThreshold: 0.75,
    maxResults: 8
)

await memory.add(.user("The project deadline is March 15."))

// Retrieves the deadline despite different phrasing
let context = await memory.context(for: "When is this due?", tokenLimit: 1_200)
```

### Supervisor routing: LLM picks the right agent
```swift
let supervisor = SupervisorAgent(
    agents: [
        (name: "math", agent: mathAgent, description: "Arithmetic and calculations"),
        (name: "weather", agent: weatherAgent, description: "Weather forecasts"),
        (name: "code", agent: codeAgent, description: "Programming help"),
    ],
    routingStrategy: LLMRoutingStrategy(inferenceProvider: provider)
)

let result = try await supervisor.run("What is 15% of $240?")
// Automatically routes to mathAgent
```

### Production resilience: retry, circuit breaker, fallback, timeout
```swift
let agent = myAgent
    .withRetry(.exponentialBackoff(maxAttempts: 3, baseDelay: .seconds(1)))
    .withCircuitBreaker(threshold: 5, resetTimeout: .seconds(60))
    .withFallback(Agent(instructions: "Service temporarily unavailable."))
    .withTimeout(.seconds(30))
```

### Streaming: real-time token output
```swift
for try await event in (fetchAgent --> writerAgent).stream("Summarise the changelog.") {
    switch event {
    case .outputToken(let token): print(token, terminator: "")
    case .toolCalling(let call): print("\n[tool: \(call.toolName)]")
    case .completed(let result): print("\nDone in \(result.duration)")
    default: break
    }
}
```

|  | Swarm | LangChain | AutoGen |
|---|---|---|---|
| Language | Swift 6.2 | Python | Python |
| Data race safety | Compile-time | Runtime | Runtime |
| On-device LLM | Foundation Models | ❌ | ❌ |
| Execution engine | Compiled DAG (Hive) | Loop-based | Loop-based |
| Crash recovery | Automatic checkpoints | ❌ | Partial |
| Type-safe tools | `@Tool` macro (compile-time) | Decorators (runtime) | Runtime |
| Streaming | `AsyncThrowingStream` | Callbacks | Callbacks |
| iOS / macOS native | First-class | ❌ | ❌ |
| Area | Details |
|---|---|
| Agents | Tool-calling Agent, ReActAgent, PlanAndExecuteAgent, SupervisorAgent, ChatAgent |
| Orchestration | 11 step types: Sequential, Parallel, DAG, Router, Branch, Guard, Transform, Pipeline, RepeatWhile, SequentialChain, ParallelGroup |
| Memory | Conversation, Vector (SIMD/Accelerate), Summary (LLM-compressed), Hybrid, Persistent (SwiftData) |
| Tools | @Tool macro with auto-generated JSON schema, FunctionTool, ToolChain, parallel execution |
| Guardrails | Input, output, tool-input, tool-output validators with tripwire and warning modes |
| Resilience | Retry (7 backoff strategies), circuit breaker, fallback chains, rate limiting, timeouts |
| Observability | OSLogTracer, SwiftLogTracer, span-based tracing, per-agent token metrics |
| MCP | Model Context Protocol: both client (consume tools) and server (expose tools) |
| Providers | Foundation Models, Anthropic, OpenAI, Ollama, Gemini, OpenRouter, MLX via Conduit |
| Macros | @Tool, @Parameter, @AgentActor, @Traceable, #Prompt, @Builder |
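As a concrete example of the backoff strategies in the Resilience row: exponential backoff doubles the wait on each attempt. A library-free sketch of the delay schedule follows; Swarm's actual strategy types may compute this differently (for example, with jitter).

```swift
import Foundation

/// Delay (in seconds) before a given retry attempt, 1-based, under plain
/// exponential backoff: baseDelay * 2^(attempt - 1).
/// Illustrative only, not Swarm's RetryStrategy API.
func backoffDelay(attempt: Int, baseDelay: Double) -> Double {
    baseDelay * pow(2.0, Double(attempt - 1))
}

// With baseDelay = 1s, attempts wait 1s, 2s, 4s, 8s, ...
```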
```
┌──────────────────────────────────────────────────────────────┐
│                       Your Application                       │
│         iOS 26+ · macOS 26+ · Linux (Ubuntu 22.04+)          │
├──────────────────────────────────────────────────────────────┤
│             AgentBlueprint · .run() · .stream()              │
├───────────────────────┬──────────────────────────────────────┤
│  Orchestration DSL    │  Step Types                          │
│  @resultBuilder       │  Sequential · Parallel · DAGWorkflow │
│  --> · >>>            │  Router · RepeatWhile · Branch       │
│  .dependsOn()         │  Guard · Transform · HumanApproval   │
├───────────────────────┴──────────────────────────────────────┤
│  Agents               Memory            Tools                │
│  Agent (tool-call)    Conversation      @Tool macro          │
│  ReActAgent           VectorMemory      FunctionTool         │
│  PlanAndExecute       SummaryMemory     AnyJSONTool ABI      │
│  SupervisorAgent      HybridMemory      ToolChain            │
├──────────────────────────────────────────────────────────────┤
│        Guardrails · Resilience · Observability · MCP         │
├──────────────────────────────────────────────────────────────┤
│                   Hive Runtime (HiveCore)                    │
│      Compiled DAG · Checkpointing · Deterministic retry      │
├──────────────────────────────────────────────────────────────┤
│                InferenceProvider (pluggable)                 │
│    Foundation Models · Anthropic · OpenAI · Ollama · MLX     │
└──────────────────────────────────────────────────────────────┘
```
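The bottom layer of the diagram is the key seam: everything above it talks to one provider abstraction, which is why swapping providers is a one-line change. A minimal, library-free sketch of that shape (the real `InferenceProvider` protocol has more requirements than this):

```swift
// Sketch of a pluggable provider seam; illustrative, not Swarm API.
protocol TextProvider: Sendable {
    func complete(_ prompt: String) async throws -> String
}

/// A canned provider, handy for tests or offline development.
struct EchoProvider: TextProvider {
    func complete(_ prompt: String) async throws -> String {
        "echo: \(prompt)"
    }
}

/// Callers depend only on the protocol, never a concrete backend,
/// so switching from on-device to cloud inference touches one line.
func answer(_ prompt: String, using provider: some TextProvider) async throws -> String {
    try await provider.complete(prompt)
}
```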
| Guide | Covers |
|---|---|
| Complete API Reference | Every type, protocol, and API, with examples |
| Agents | Agent types, configuration, @AgentActor macro |
| Tools | @Tool macro, FunctionTool, runtime toggling |
| DSL & Blueprints | AgentBlueprint, @OrchestrationBuilder, modifiers |
| Orchestration | DAG, parallel, chains, human-in-the-loop |
| Handoffs | Agent handoffs, routing, SupervisorAgent |
| Memory | Conversation, Vector, Summary, SwiftData backends |
| Streaming | AgentEvent streaming, SwiftUI integration |
| Guardrails | Input/output validation, tripwires |
| Resilience | Retry, circuit breakers, fallback, timeouts |
| Observability | Tracing, OSLogTracer, SwiftLogTracer, metrics |
| MCP | Model Context Protocol client and server |
| Providers | Inference providers, MultiProvider routing |
| Migration Guide | Upgrading between versions |
| Platform | Minimum |
|---|---|
| Swift | 6.2+ |
| iOS | 26.0+ |
| macOS | 26.0+ |
| Linux | Ubuntu 22.04+ with Swift 6.2 |
Foundation Models require iOS 26 / macOS 26. Cloud providers (Anthropic, OpenAI, Ollama) work on any Swift 6.2 platform including Linux.
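A consumer-side `Package.swift` matching the table above might look like the following sketch; the package and target names are placeholders, and Linux needs no `platforms` entry.

```swift
// swift-tools-version: 6.2
// Consumer-side manifest sketch; platform floors follow the table above.
import PackageDescription

let package = Package(
    name: "MyAgentApp",                       // placeholder name
    platforms: [.iOS("26.0"), .macOS("26.0")], // Apple floors only
    dependencies: [
        .package(url: "https://github.com/christopherkarani/Swarm.git", from: "0.3.4")
    ],
    targets: [
        .executableTarget(name: "MyAgentApp", dependencies: ["Swarm"])
    ]
)
```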
- Fork → branch → `swift test` → PR
- All public types must be `Sendable`; the compiler enforces it
- Format with `swift package plugin --allow-writing-to-package-directory swiftformat`
GitHub Issues · Discussions · @ckarani7
MIT License; see LICENSE.