Rust port of the OpenSymbolicAI GoalSeeking framework — an agent framework that uses LLM-generated Python code to solve complex research problems.
The framework orchestrates iterative PLAN → EXECUTE → UPDATE → EVALUATE loops where an LLM generates Python code plans, which execute in a sandboxed environment with access to Rust-defined primitives.
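As a rough, self-contained sketch of that control flow (illustrative only — the real framework plans with an LLM and executes Python, not a hard-coded string):

```rust
// Toy sketch of the PLAN -> EXECUTE -> UPDATE -> EVALUATE loop. All names
// here are illustrative stand-ins, not the crate's actual API.
struct Context {
    count: u32,
}

/// Runs the loop until `goal` is met or `budget` iterations are spent.
/// Returns the iteration count on success, or None if the budget ran out.
fn goal_seek(goal: u32, budget: u32) -> Option<u32> {
    let mut ctx = Context { count: 0 };
    for iteration in 1..=budget {
        // PLAN: a real agent would ask the LLM for a Python plan here.
        let plan = "increment";
        // EXECUTE: apply the plan to the context.
        if plan == "increment" {
            ctx.count += 1;
        }
        // UPDATE + EVALUATE: stop once the goal is achieved.
        if ctx.count >= goal {
            return Some(iteration);
        }
    }
    None
}

fn main() {
    println!("{:?}", goal_seek(3, 10)); // prints "Some(3)"
}
```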
```
opensymbolic/            # Core library crate
├── src/                 # Framework modules (context, plan, llm, executor, sandbox, etc.)
└── examples/            # Three graduated example agents
opensymbolic-macros/     # Proc macro crate (#[primitive], #[decomposition], #[derive(GoalSeeking)])
```
Key concepts:
- Primitives — Rust functions annotated with `#[primitive]` that the LLM can call from generated plans
- Decomposition examples — Few-shot Python snippets shown to the LLM so it learns how to compose primitives
- GoalSeeking loop — Iterates until the goal is achieved or budget is exhausted
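To make the first two concepts concrete, here is a hypothetical sketch of what `#[primitive]` conceptually produces — plain Rust functions registered under the names that generated plans call. The macro's real output differs; the registry and function names below are assumptions:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the effect of `#[primitive]`: each primitive is
// an ordinary Rust function exposed under the name the LLM sees.
type Primitive = fn(&str) -> String;

fn to_upper(input: &str) -> String {
    input.to_uppercase()
}

fn word_count(input: &str) -> String {
    input.split_whitespace().count().to_string()
}

fn registry() -> HashMap<&'static str, Primitive> {
    let mut m: HashMap<&'static str, Primitive> = HashMap::new();
    m.insert("to_upper", to_upper);
    m.insert("word_count", word_count);
    m
}

fn main() {
    let prims = registry();
    // A generated plan would invoke primitives by name:
    let result = prims["word_count"]("goal seeking agents");
    println!("{result}"); // prints "3"
}
```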
| # | Name | Description | LLM Required |
|---|---|---|---|
| 01 | Counter Agent | Deterministic agent that counts to a target. Validates core framework mechanics. | No (MockLlm) |
| 02 | Quiz Agent | Generates trivia questions, answers them, tracks score. Demonstrates decomposition examples and optional args. | No (MockLlm) |
| 03 | Deep Research | Full research workflow: search → extract → summarize → synthesize → identify gaps. Matches the Python reference implementation. | Yes (Groq API) |
Run tests for a specific example:

```sh
cargo test --example 01_counter
cargo test --example 02_quiz
cargo test --example 03_deep_research
```

Run the Deep Research agent interactively (requires API keys):
```sh
export GROQ_API_KEY="your-groq-api-key"
export TAVILY_API_KEY="your-tavily-api-key"
cargo run --example 03_deep_research -- "What are the latest advances in quantum computing?" --iterations 3 --model openai/gpt-oss-120b
```

Arguments for `03_deep_research`:
- First positional arg: the research query
- `--iterations N` — max goal-seeking iterations (default: 3)
- `--model NAME` — Groq model name (default: `openai/gpt-oss-120b`)
Examples 01 and 02 use `MockLlm` and run entirely offline. Example 03 integration tests are marked `#[ignore]` and require real API keys.
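The offline setup can be pictured as a trait-object swap: tests inject a mock that replays canned responses, while example 03 would substitute a Groq-backed client. The sketch below is illustrative and uses hypothetical names, not the crate's real `MockLlm` type:

```rust
use std::cell::RefCell;

// Hypothetical LLM abstraction: anything that can complete a prompt.
trait Llm {
    fn complete(&self, prompt: &str) -> String;
}

// Replays pre-scripted responses in order, so tests run fully offline.
struct MockLlm {
    responses: RefCell<Vec<String>>,
}

impl Llm for MockLlm {
    fn complete(&self, _prompt: &str) -> String {
        // Pop the next canned response; panics if the script runs dry.
        self.responses.borrow_mut().remove(0)
    }
}

fn main() {
    let llm = MockLlm {
        responses: RefCell::new(vec!["ctx.count += 1".into()]),
    };
    let plan = llm.complete("PLAN: reach the goal");
    println!("generated plan: {plan}");
}
```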
Requires Rust 1.70+ (edition 2021). Install via rustup:

```sh
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
```

Or via Homebrew:

```sh
brew install rustup-init
rustup-init
```

On Windows, download and run rustup-init.exe from the official site, then restart your terminal.
Required for plan execution via PyO3. The embedded interpreter links against your system Python at build time.
```sh
# Debian/Ubuntu
sudo apt install python3 python3-dev
# Fedora
sudo dnf install python3 python3-devel
# Arch
sudo pacman -S python
```

On macOS, install via Homebrew:

```sh
brew install python@3
```

Or use the system Python 3 (pre-installed on recent macOS).
On Windows, install Python 3 from python.org or via:

```sh
winget install Python.Python.3.12
```

Ensure `python3` is on your PATH and that you have the development headers (included by default on Windows).
- Linux: The sandbox uses kernel namespaces and seccomp. Sandbox-related dependencies (`nix`, `libc`) are only compiled on Linux.
- macOS / Windows: Builds and runs without the sandbox (uses `ExecutionMode::Direct`). All examples work, but plan execution is not isolated.
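The platform split above could be sketched as a compile-time default; the enum name echoes the `ExecutionMode::Direct` mentioned here, but this selection function is an assumption, not the crate's API:

```rust
// Hypothetical platform-dependent default: sandboxed execution on Linux,
// direct (unisolated) execution elsewhere.
#[derive(Debug, PartialEq)]
enum ExecutionMode {
    Sandboxed,
    Direct,
}

fn default_mode() -> ExecutionMode {
    // cfg! is evaluated at compile time for the target OS.
    if cfg!(target_os = "linux") {
        ExecutionMode::Sandboxed
    } else {
        ExecutionMode::Direct
    }
}

fn main() {
    println!("{:?}", default_mode());
}
```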
| Crate | Purpose |
|---|---|
| `pyo3` | Embedded Python interpreter for executing LLM-generated plans |
| `reqwest` | HTTP client for the Groq LLM and Tavily search APIs |
| `serde` / `serde_json` | Serialization for schemas and IPC messages |
| `thiserror` | Error type derivation |
| `nix` (Linux only) | Namespace and seccomp sandbox primitives |
| `syn` / `quote` / `proc-macro2` | Proc macro infrastructure |
| `clap` (dev) | CLI argument parsing for example 03 |
| `chrono` (dev) | Date handling in example 03 |
```sh
git clone https://github.com/your-org/OpenSymbolicAI-examples-rs.git
cd OpenSymbolicAI-examples-rs
cargo build
```

Run all tests:

```sh
cargo test
```

Run all tests including integration tests (requires API keys):

```sh
cargo test -- --include-ignored
```