Releases: RightNow-AI/openfang
v0.5.7 — Multi-Instance Hands + Critical Fixes
Headline: Multi-Instance Hands
Customer ask (thank you to the Discord community + issue #878): you can now run multiple instances of the same hand type. Just pass an optional `instance_name` when activating.
- Web UI: new "Instance name (optional)" field in the Setup wizard.
- CLI: `openfang hand activate clip --name clip-youtube`
- API: `POST /api/hands/clip/activate` with `{"instance_name": "clip-youtube", "config": {}}`
Each named instance gets a unique stable agent id derived from `hand_instance_{instance_id}`. Activating the same `(hand_id, instance_name)` pair twice is rejected. Unnamed activations keep the legacy one-per-hand behavior.
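The activation rules just described (stable per-instance agent ids, duplicate rejection, legacy unnamed behavior) can be modeled roughly like this. This is a minimal Python sketch for illustration only, not the actual Rust implementation; the `HandRegistry` name and the exact id derivation are assumptions:

```python
import hashlib

class HandRegistry:
    """Toy model of hand activation with optional instance names."""

    def __init__(self):
        self.active = {}  # (hand_id, instance_name) -> agent_id

    def activate(self, hand_id, instance_name=None):
        key = (hand_id, instance_name)
        # Activating the same (hand_id, instance_name) pair twice is rejected.
        if key in self.active:
            raise ValueError("Hand already active")
        # Stable agent id derived from the hand/instance pair
        # (illustrative; the real derivation may differ).
        seed = f"hand_instance_{hand_id}:{instance_name or 'default'}"
        agent_id = hashlib.sha256(seed.encode()).hexdigest()[:12]
        self.active[key] = agent_id
        return agent_id
```

Under this model, `clip-youtube` and `clip-tiktok` coexist with distinct agent ids, while an unnamed activation maps to a single default slot.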
```
openfang hand activate clip --name clip-youtube
openfang hand activate clip --name clip-tiktok
# Both running in parallel, each with its own agent
```

Critical bug fixes
- #919 [SECURITY] `rm` bypass in Allowlist mode closed. The `process_start` tool previously skipped `validate_command_allowlist`, letting LLMs delete files even when `rm` wasn't in `allowed_commands`. Both `command` and `args` are now validated for metacharacters and allowlist membership. 5 regression tests added.
- #1013 Moonshot session repair. `session_repair::validate_and_repair` now runs `deduplicate_tool_results` BEFORE `insert_synthetic_results`. Fixes Moonshot's non-unique `function_name:index` tool_call_id format — orphaned ToolUse blocks get synthetic results after dedup.
- #1003 Global `[[fallback_providers]]` actually used at runtime. `resolve_driver` now wraps the primary in a `FallbackDriver` with the full fallback chain at driver-creation time. Network errors (connection refused, timeout) escalate to fallback instead of looping on the dead primary. Two new regression tests.
- #937 Discord gateway heartbeat. The Discord adapter now spawns a heartbeat task after HELLO, tracks the sequence number, handles HEARTBEAT_ACK (op 11), detects zombie connections via an ACK gate, and force-closes the socket to reconnect when the server stops ACKing. Credits @hello-world-bfree for PR #938, which flagged the root cause.
- #935 System prompt no longer leaks in Web UI. `GET /api/agents/:id/session` now filters `Role::System` messages by default (opt-in debug via `?include_system=true`). Defense-in-depth client-side filter in `chat.js` too. Integration test asserts the system prompt literal does not appear in the default JSON body.
- #984 Custom hands persist across daemon restart. `openfang hand install ./path` now copies the hand to `~/.openfang/hands/<hand_id>/`, and the kernel scans that directory on startup to reload custom hands. New `load_workspace_hands` method mirrors the `load_workspace_skills` pattern.
- #884 Version stamp fixed. Workspace version bumped to 0.5.7. CLI `openfang --version` and API `/api/health` both correctly report 0.5.7. Previous releases were stamped with the pre-bump 0.5.5.
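The #919 fix above amounts to validating both the command and its arguments before anything is spawned. A rough sketch of that kind of check, as illustrative Python rather than the project's Rust code; the metacharacter set and function signature here are assumptions:

```python
# Characters that could smuggle extra commands past an allowlist
# (illustrative set; the real implementation may check more).
SHELL_METACHARACTERS = set(";|&$`><\n")

def validate_command_allowlist(command, args, allowed_commands):
    """Reject commands outside the allowlist or containing shell metacharacters.

    Both the command itself and every argument are checked, closing the
    gap where `process_start` skipped validation entirely.
    """
    for token in [command, *args]:
        if any(ch in SHELL_METACHARACTERS for ch in token):
            raise PermissionError(f"shell metacharacter in {token!r}")
    if command not in allowed_commands:
        raise PermissionError(f"{command!r} not in allowed_commands")
```

With this shape, `rm` is rejected outright when absent from `allowed_commands`, and an allowlisted command cannot smuggle `; rm -rf /` through its arguments.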
Cleanup
- rmcp 1.3 builder API adopted for `StreamableHttpClientTransportConfig`. Cleaner than field-assignment-after-default. Credits @jefflower (PR #986) and @varpress (PR #927).
- Task tracking and live daemon verification integrated into the fix workflow — every fix verified with real HTTP + Groq calls before ship.
Verified end-to-end
Before shipping, each fix was verified against a real daemon running with Groq:
- `openfang --version` → `openfang 0.5.7` ✅
- `/api/health` → `{"status":"ok","version":"0.5.7"}` ✅
- Two `clip` instances (clip-youtube + clip-tiktok) active simultaneously with different agent_ids ✅
- Third activation of `clip-tiktok` correctly rejected with "Hand already active" ✅
- Groq round-trip `say PONG` → `PONG` ✅
Stats
- 22 files changed, 1315 insertions, 154 deletions
- Full workspace test suite (1800+ tests) green
- Five reviewer agents audited all overlapping community PRs before merge
Full Changelog: v0.5.6...v0.5.7
v0.5.6
Critical Fix
- Version sync: Desktop app and workspace version now correctly report v0.5.5+. Users stuck on v0.5.1 should be able to update. Tauri config was hardcoded at 0.1.0 since initial commit.
New Features
- SSRF allowlist: Self-hosted/K8s users can now configure `ssrf_allowed_hosts` in config.toml to allow agents to reach internal services. Metadata endpoints (169.254.169.254, etc.) remain unconditionally blocked.

  ```toml
  [tools.web_fetch]
  ssrf_allowed_hosts = ["*.olares.com", "10.0.0.0/8"]
  ```
- Expanded embedding auto-detection: Now probes 6 API-key providers (OpenAI, Groq, Mistral, Together, Fireworks, Cohere) before falling back to local providers (Ollama, vLLM, LM Studio). Clear warning when no embedding provider is available.
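For intuition on how entries like `*.olares.com` and `10.0.0.0/8` might be matched while metadata endpoints stay blocked, here is a hypothetical Python sketch; the actual matching logic, wildcard semantics, and blocked-host set are assumptions, not the shipped implementation:

```python
import fnmatch
import ipaddress

# Metadata endpoints stay blocked no matter what the allowlist says
# (illustrative subset).
ALWAYS_BLOCKED = {"169.254.169.254", "metadata.google.internal"}

def host_allowed(host, allowed_patterns):
    """Check a target host against configured ssrf_allowed_hosts entries."""
    if host in ALWAYS_BLOCKED:
        return False
    for pattern in allowed_patterns:
        try:
            # CIDR entries like "10.0.0.0/8" match IP targets.
            if ipaddress.ip_address(host) in ipaddress.ip_network(pattern):
                return True
        except ValueError:
            # Otherwise treat the pattern as a glob like "*.olares.com".
            if fnmatch.fnmatch(host, pattern):
                return True
    return False
```

The key property is that the unconditional block list is consulted before the allowlist, so no config can open up `169.254.169.254`.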
Bug Fixes
- Ollama context window: Discovered models now default to 128K context / 16K output (was 32K/4K). Better reflects modern models like Qwen 3.5.
Full Changelog: v0.5.5...v0.5.6
v0.5.5
Bug Fixes
- #771 Qwen/OpenAI-compat tool_calls orphaning after context overflow. Smart drain boundaries + streaming repair.
- #811 LINE webhook signature validation. Raw bytes for HMAC, secret trimming, debug logging.
- #752 Local skill install: TUI parsing fix, hot-reload via /api/skills/reload, ClawHub reload.
- #772 exec_policy mode=full now bypasses approval gate for shell_exec.
- #661 Chat streaming interrupts (closed as resolved by v0.5.3 reactivity fixes).
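Some context on the #811 item: LINE signs webhook bodies with HMAC-SHA256 over the raw request bytes, so validating against re-serialized JSON fails. A hedged sketch of the corrected approach in illustrative Python (not the project's code; the function name is invented):

```python
import base64
import hashlib
import hmac

def line_signature_valid(channel_secret, raw_body, signature_header):
    """Validate an X-Line-Signature header against the untouched body bytes."""
    # Trim stray whitespace from the configured secret (the #811 fix
    # also covered secret trimming).
    secret = channel_secret.strip().encode()
    digest = hmac.new(secret, raw_body, hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature_header)
```

Note that `raw_body` must be the exact bytes received on the wire; any re-encoding or pretty-printing changes the digest.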
Full Changelog: v0.5.4...v0.5.5
v0.5.4
Bug Fixes
- #875 Install script now correctly fetches latest release version
- #872 Session endpoint returns full tool results (removed 2000-char truncation)
- #867 agent_send/agent_spawn timeout increased to 600s (was 120s)
- #824 Doctor correctly counts workspace skills that override bundled skills
- #833 Model switching respects explicit provider via find_model_for_provider()
- #766 Closed as resolved by heartbeat fixes
Stats
- All tests passing
- Live tested with daemon
Full Changelog: v0.5.3...v0.5.4
v0.5.3 — 19 Bug Fixes (3 rounds)
What's Changed
This release resolves 19 bugs across runtime, kernel, CLI, Web UI, and hands — all verified with live daemon testing.
Runtime & Drivers
- #834 Remove 3 decommissioned Groq models (`gemma2-9b-it`, `llama-3.2-1b/3b-preview`)
- #805 Ollama streaming parser handles both `reasoning_content` and `reasoning` fields
- #845 Model fallback chain retries with `fallback_models` on ModelNotFound (404)
- #785 Gemini streaming SSE parser handles `\r\n` line endings — fixes infinite empty retry loop
- #774 `tool_use.input` always normalized to a JSON object — fixes Anthropic API "invalid dictionary" errors
- #856 Custom model names preserved — user-defined models take priority over builtins (vLLM, etc.)
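The #805 change boils down to accepting either field name for thinking tokens. A minimal sketch under that assumption, in illustrative Python; the chunk shape shown is a guess at the streamed JSON, not openfang's actual parser:

```python
import json

def extract_thinking(chunk_line):
    """Pull thinking text from a streamed JSON chunk, whichever
    field name the model emits (#805)."""
    msg = json.loads(chunk_line).get("message", {})
    # Some thinking models emit "reasoning_content", others "reasoning".
    return msg.get("reasoning_content") or msg.get("reasoning") or ""
```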
Kernel & Heartbeat
- #844 Heartbeat skips idle agents that never received a message — no more crash-recover loops
- #848 Hand continuous interval changed from 60s to 3600s — prevents credit waste
- #851/#808 Global `~/.openfang/skills/` loaded for all agents; workspace skills properly override globals
CLI
- #826 `openfang doctor` reports `all_ok=false` when a provider key is rejected (401/403)
- #823 `doctor --json` outputs clean JSON to stdout, tracing to stderr, BrokenPipe handled
- #825 Doctor surfaces blocked workspace skills count in injection scan (no more false "all clean")
- #828 `skill install` detects Git URLs (`https://`, `git@`) and clones before installing
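The pattern behind the #826/#823 doctor fixes, an overall `all_ok` flag plus strict stdout/stderr separation so the JSON stays machine-parseable, can be sketched like this (illustrative Python; the real CLI is Rust, and the payload shape is an assumption):

```python
import json
import sys

def render_report(checks):
    """Build the machine-readable doctor payload; all_ok is false
    if any single check (e.g. a rejected provider key) failed."""
    return json.dumps({"all_ok": all(checks.values()), "checks": checks})

def main(checks):
    # Human-readable tracing goes to stderr only.
    print("running checks...", file=sys.stderr)
    try:
        # Clean JSON on stdout, safe to pipe into jq.
        print(render_report(checks))
    except BrokenPipeError:
        # Downstream (e.g. `| head`) closed the pipe; exit quietly.
        sys.exit(0)
```

Keeping tracing off stdout is what makes `openfang doctor --json | jq .` style pipelines reliable.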
Web Dashboard
- #767 Workflows page scrollable (flex layout fix)
- #802 Model dropdown handles object options — no more `[object Object]` for Ollama
- #816 Spawn wizard provider dropdown loads dynamically from `/api/providers` (43 providers)
- #770 Chat streaming renders in real-time (Alpine.js splice reactivity + stale WS guard)
WebSocket & API
- #836 Tool events include `id` field for concurrent call correlation
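Why #836 matters: without a per-call id, concurrent start/end events cannot be paired reliably. A toy correlator in illustrative Python; the event type and field names are assumptions, not openfang's actual wire format:

```python
def correlate(events):
    """Pair tool start/end events by their shared call id, even when
    calls overlap and finish out of order."""
    pending, completed = {}, []
    for ev in events:
        if ev["type"] == "tool_start":
            pending[ev["id"]] = ev
        elif ev["type"] == "tool_end":
            start = pending.pop(ev["id"])  # match by id, not by order
            completed.append((start["tool"], ev["result"]))
    return completed
```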
Hands
- #820 Browser Hand checks `python3` before `python` — works on modern Linux distros
Stats
- 2,186+ tests passing, zero clippy warnings
- All fixes verified with live daemon testing
Full Changelog: v0.5.1...v0.5.3
v0.5.2 — 12 Bug Fixes
What's Changed
Bug Fixes (12 issues resolved)
Runtime & Drivers
- #834 Remove 3 decommissioned Groq models (`gemma2-9b-it`, `llama-3.2-1b-preview`, `llama-3.2-3b-preview`)
- #805 Ollama streaming parser now handles both `reasoning_content` and `reasoning` fields for thinking models (Qwen 3.5, etc.)
- #845 Model fallback chain now retries with configured `fallback_models` on ModelNotFound (404) instead of panicking
Kernel & Heartbeat
- #844 Heartbeat monitor skips idle agents that never received a message — no more infinite crash-recover loops
- #848 Hand continuous mode interval changed from 60s to 3600s to prevent credit waste on idle polling
CLI (Doctor)
- #826 `openfang doctor` now reports `all_ok=false` when a provider key is rejected (401/403)
- #823 `openfang doctor --json` outputs clean JSON to stdout (tracing goes to stderr), BrokenPipe handled gracefully
Web Dashboard
- #767 Workflows list page is now scrollable (flex layout fix)
- #802 Model dropdown no longer renders `[object Object]` for Ollama models
- #816 Agent spawn wizard provider dropdown loads dynamically from `/api/providers` (43 providers, was hardcoded 18)
- #836 WebSocket tool events now include the tool call ID for correct concurrent call correlation
Hands
- #820 Browser Hand requirements check now tries `python3` before `python`, fixing detection on modern Linux distros
Stats
- All 829+ tests passing
- Zero clippy warnings
- Live tested with daemon
Full Changelog: v0.5.1...v0.5.2
v0.5.1 — Community Contributions
9 community PRs merged after strict review (24 PRs reviewed, 11 rejected, 4 closed).
Fixes
- Dashboard settings page loading state fix (#750)
- KaTeX loaded on demand to prevent first-paint blocking (#748)
- Provider model normalization — display names resolve through catalog (#714)
- Invisible approval requests now visible with history, badge, and polling (#713)
- Matrix `auto_accept_invites` now configurable, defaults to false (security) (#711)
Dependencies
- docker/build-push-action 6 → 7 (#741)
- docker/setup-buildx-action 3 → 4 (#740)
- roxmltree 0.20 → 0.21 (#744)
- zip 2.4 → 4.6 (#742)
Full diff: v0.5.0...v0.5.1
v0.5.0 — Milestone Release
29 bugs fixed, 6 features shipped, 100+ PRs reviewed, 65+ issues resolved.
Features
- Image generation pipeline (DALL-E/GPT-Image)
- WeCom channel adapter
- Docker sandbox runtimes
- Shell skill runtime
- Slack unfurl links support
- Release-fast build profile
Improvements
- Channel agent re-resolution
- Stable hand agent IDs
- Async session save
- Vault wiring for credentials
- Telegram formatting improvements
- Mastodon polling fix
- Chromium no-sandbox root support
- Tool error guidance in agent loop
- Agent rename fix
- Codex id_token support
Community
- Community docs and fixes (multiple rounds)
- WhatsApp setup documentation
- CI action bumps
- Docker build args
- Lockfile sync
- Docs link fixes
Full diff: v0.4.3...v0.5.0
v0.4.9
Bug Fixed
- Image pipeline (#686): REST API and WebSocket now pass image attachments as `content_blocks` directly to the LLM via `send_message_with_handle_and_blocks()` / `send_message_streaming()`. Previously images were injected as a separate session message and never reached vision models in the current turn. All 3 API entry points (REST, WebSocket, channels) now use the same flow.
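Conceptually, the #686 fix means the image travels in the same turn as the text, as one content-block message rather than a detached session entry. A hedged sketch of that shape in illustrative Python; the block field names follow the common Anthropic-style convention and are assumptions, not necessarily openfang's internal types:

```python
import base64

def build_user_message(text, image_bytes, media_type="image/png"):
    """Combine text and an image attachment into a single
    content-block message for a vision model."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": media_type,
                    # Image bytes travel inline with the same turn.
                    "data": base64.b64encode(image_bytes).decode(),
                },
            },
        ],
    }
```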
Docs
- Added community troubleshooting FAQ: Docker setup, Caddy basicauth, embedding model config, email allowed_senders, Z.AI/GLM-5 config, Kimi 2.5, OpenRouter free models, Claude Code integration, trader hand permissions, multiple Telegram bots workaround.
Full changelog since v0.4.4
26 bugs fixed, 6 features shipped, 100+ PRs reviewed, 65+ issues resolved across v0.4.4–v0.4.9.
v0.4.8
Bugs Fixed
- Fix HandCategory TOML parse error — added Finance + catch-all Other variant (#717)
- Fix LINE token detection heuristic — long tokens (>80 chars) recognized as direct values (#729)
- Fix General Assistant max_iterations too low — bumped from 50 to 100 (#719)
- Fix knowledge_query SQL parameter binding mismatch (#638)
- Fix WhatsApp Cloud API silently swallowing send errors (#707)
- Fix dashboard provider dropdown missing local providers (#683)
Previous (v0.4.5–v0.4.7)
- Fix Gemini infinite loop on Thinking-only responses (#704)
- Fix tool_blocklist not detected on daemon restart (#666)
- Fix MCP credentials from .env/vault (#660)
- Fix image base64 compaction storms (#648)
- Fix phantom action hallucination (#688)
- Fix desktop app .env loading (#687)
- Fix duplicate sessions (#651)
- Fix Anthropic null tool_use input (#636)
- Fix temperature for reasoning models (#640)
- Fix OpenRouter prefix on fallbacks (#630)
- Fix streaming metering persistence (#627)
- Fix MCP dash names (#616)
- Fix deepseek-reasoner multi-turn (#618)
- Fix NO_REPLY leak to channels (#614)
- Fix skill install button (#625)
- Fix cron delivery (#601)
Features
- Azure OpenAI provider (#631)
- LaTeX rendering in chat (#622)
- PWA support (#621)
- WeCom channel adapter (#629)
- Shell/Bash skill runtime (#624)
- DingTalk Stream adapter (#353)
- Feishu/Lark unified adapter (#329)
- Parakeet MLX speech-to-text (#607)
- Codex GPT-5.4 (#608)
- 100+ community PRs reviewed and merged