Releases: OpenSymbolicAI/core-py
v0.7.0
What's New
PromptProvider — select which primitives and decompositions appear in prompts
New PromptProvider base class that controls which primitives and decompositions the LLM sees in the prompt. Subclass it and override select_primitives / select_decompositions to filter by rich metadata (PrimitiveInfo, DecompositionInfo) including docstrings, read_only, deterministic, parameters, return types, and source code.
```python
from opensymbolicai import PromptProvider, PrimitiveInfo, PlanExecuteConfig

class ReadOnlyOnly(PromptProvider):
    def select_primitives(self, available: list[PrimitiveInfo]) -> list[str]:
        return [p.name for p in available if p.read_only]

config = PlanExecuteConfig(prompt_provider=ReadOnlyOnly())
```
- Only affects prompt construction — all primitives remain available for execution
- Works across all blueprints: `PlanExecute`, `DesignExecute`, `GoalSeeking`
- New exported models: `PromptProvider`, `PrimitiveInfo`, `DecompositionInfo`, `ParameterInfo`
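The same filtering pattern applies to decompositions. The sketch below is self-contained and illustrative only: the `DecompositionInfo` dataclass is a stand-in mirroring the metadata fields named above (not the library's class), showing how a `select_decompositions` override might keep only deterministic decompositions in the prompt.

```python
from dataclasses import dataclass

# Stand-in for the library's DecompositionInfo metadata; the fields here
# (name, deterministic) mirror those listed in the release notes, but this
# dataclass is illustrative, not the actual exported model.
@dataclass
class DecompositionInfo:
    name: str
    deterministic: bool

def select_deterministic(available: list[DecompositionInfo]) -> list[str]:
    # Keep only deterministic decompositions in the prompt.
    return [d.name for d in available if d.deterministic]

infos = [DecompositionInfo("normalize_units", True),
         DecompositionInfo("fetch_live_rates", False)]
print(select_deterministic(infos))  # ['normalize_units']
```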
Full Changelog: v0.6.2...v0.7.0
v0.6.2
What's Changed
New Features
- Anonymous usage telemetry via PostHog (feat: add anonymous usage telemetry)
- API key auth support for observability HTTP transport (feat: add API key auth support to observability HTTP transport)
Bug Fixes
- Pair LLM request/response into single deferred spans for correct trace duration (fix: pair LLM request/response into single deferred spans)
- Allow `isinstance` in plan execution — removed from `DANGEROUS_BUILTINS` and added to `DEFAULT_ALLOWED_BUILTINS`, with `allowed_builtins` now checked before the blocklist so user config can override (fix: allow isinstance in plan execution)
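The precedence rule behind the `isinstance` fix can be sketched in a few lines. This is a self-contained illustration of "allowlist checked before blocklist", not the library's implementation: the set contents and the default-deny fallback are assumptions.

```python
# Illustrative sets only; the library's actual contents may differ.
DANGEROUS_BUILTINS = {"eval", "exec", "open"}
DEFAULT_ALLOWED_BUILTINS = {"len", "range", "isinstance"}

def builtin_permitted(name: str, allowed: set = DEFAULT_ALLOWED_BUILTINS) -> bool:
    if name in allowed:             # allowlist consulted first: user config wins
        return True
    if name in DANGEROUS_BUILTINS:  # blocklist only applies after the allowlist
        return False
    return False                    # default-deny is assumed here for anything else

print(builtin_permitted("isinstance"))  # True
print(builtin_permitted("eval"))        # False
```

Because the allowlist is checked first, a user passing `allowed={"eval"}` would override the blocklist, which is the override behavior the fix describes.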
Full Changelog: v0.5.1...v0.6.2
v0.5.1
What's Changed
Features
- Observability framework — new tracing and event transport system (`FileTransport`, `HttpTransport`, `InMemoryTransport`)
- GoalSeeking observability instrumentation with `session_id` support
- Determinism metadata, signature hashing, and prompt splitting APIs
- Prompt section demarcation markers for splittable LLM prompts
- Auto-document Pydantic BaseModel schemas in primitive prompts
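To make the "signature hashing" feature concrete, here is a minimal stdlib sketch of the idea: fingerprint a function's name and typed signature so that interface changes can be detected between runs. The hashing scheme (`sha256` over the rendered signature, truncated) is an assumption for illustration, not the library's actual algorithm.

```python
import hashlib
import inspect

def signature_hash(fn) -> str:
    """Fingerprint a function's name and typed signature.

    A stable hash like this can flag when a primitive's interface changes;
    the exact scheme here is illustrative, not the library's.
    """
    text = f"{fn.__name__}{inspect.signature(fn)}"
    return hashlib.sha256(text.encode()).hexdigest()[:12]

def add(a: int, b: int) -> int:
    return a + b

def add_v2(a: int, b: int, scale: float = 1.0) -> int:
    return int((a + b) * scale)

print(signature_hash(add) == signature_hash(add_v2))  # False: interface changed
```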
Docs & Maintenance
- Improved PyPI discoverability and added benchmark results to README
- Updated README with DesignExecute, GoalSeeking, and exception documentation
Full Changelog: v0.4.1...v0.5.1
v0.4.1
What's in this release
OpenSymbolicAI Core — a Python framework for building symbolic AI agents where LLMs plan and typed Python functions execute.
Blueprints
- PlanExecute — LLM generates a plan, runtime executes it against `@primitive` functions
- DesignExecute — control flow support (loops, conditionals) for multi-step workflows
- GoalSeeking — iterative goal-directed agents with built-in evaluation and retry
Highlights
- Symbolic firewall: LLM never touches raw data, only function signatures
- All outputs are validated Pydantic models
- Full execution tracing with before/after state snapshots
- Multi-provider LLM support (Ollama, OpenAI, Anthropic, Fireworks, Groq)
- Checkpoint system for distributed execution and state persistence
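The "symbolic firewall" highlight can be illustrated with a short self-contained sketch: the prompt is built from function names, signatures, and docstrings only, so raw data and implementations never reach the LLM. The prompt format and helper below are assumptions for illustration, not the library's API.

```python
import inspect

def primitive_prompt(fns) -> str:
    """Render only names, signatures, and docstrings into the LLM prompt;
    argument values, raw data, and function bodies are never included.
    A minimal sketch of the 'symbolic firewall' idea, not the real format."""
    lines = []
    for fn in fns:
        lines.append(f"def {fn.__name__}{inspect.signature(fn)}:")
        lines.append(f'    """{inspect.getdoc(fn)}"""')
    return "\n".join(lines)

def get_balance(account_id: str) -> float:
    """Return the balance for an account."""
    raise NotImplementedError  # body is never shown to the LLM

print(primitive_prompt([get_balance]))
```

The LLM plans against these signatures; only the typed runtime ever calls the functions with real values.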
Examples
- Scientific calculator with 120-intent benchmark suite
- Function optimizer using GoalSeeking
- Shopping cart using DesignExecute
Benchmark Results (calculator, 120 intents)
| Model | Accuracy |
|---|---|
| gpt-oss:20b | 100% |
| qwen3:1.7b | 100% |
| qwen3:8b | 100% |
| gemma3:4b | 94% |
| phi4:14b | 80% |
Full documentation: https://opensymbolicai.github.io/core-py/