The following APIs are intended to be the long-term integration surface for the 0.2.x line:
- `guard_model_request()`
- `review_model_response()`
- `protect_model_call()`
- `protect_with_adapter()`
- `ToolPermissionFirewall`
- `RetrievalSanitizer`
These contracts are also exposed in CORE_INTERFACES so applications can log or assert the expected interface version.
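As a minimal sketch of what "log or assert the expected interface version" could look like at application startup: the shape of `CORE_INTERFACES` below (a name-to-version mapping) is an assumption for illustration, stubbed locally rather than imported, and the real structure exposed by `blackwall_llm_shield` may differ.

```python
# ASSUMPTION: CORE_INTERFACES is stubbed here as a simple name -> version
# mapping; the real object exported by blackwall_llm_shield may be shaped
# differently. This only illustrates the fail-fast startup check.
CORE_INTERFACES = {
    "guard_model_request": "0.2",
    "review_model_response": "0.2",
    "protect_model_call": "0.2",
    "protect_with_adapter": "0.2",
    "ToolPermissionFirewall": "0.2",
    "RetrievalSanitizer": "0.2",
}

def assert_interface_version(name: str, expected: str) -> None:
    """Raise at startup if a pinned contract version has drifted."""
    actual = CORE_INTERFACES.get(name)
    if actual != expected:
        raise RuntimeError(
            f"{name}: expected interface version {expected}, got {actual}"
        )

# Typical use: run once during application boot, before wiring routes.
assert_interface_version("guard_model_request", "0.2")
```

Failing fast on a version mismatch keeps silent contract drift out of production routes.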
- Added richer multimodal/message-part handling for mixed text, image, and file content
- Added provider adapters for OpenAI, Anthropic, Gemini, and OpenRouter
- Added presets and route-level policy overrides
- Added custom prompt detector hooks for domain tuning
- Expanded rollout guidance, benchmarks, and regression notes
- Added identity-aware telemetry enrichment for SSO-backed applications
- Added Power BI-friendly export helpers and telemetry exporter hooks
- Expanded summaries to support user-level and identity-provider reporting
- Added explicit guidance for controlled-pilot rollout and internal wrapper adoption
- Added documented workflow presets for planner, document-review, RAG-search, and tool-calling routes
- Added stronger docs for route-level telemetry review, false-positive tuning, and release-noise checks
- If you previously passed message content as arrays of parts, 0.1.4 now preserves those parts in `content_parts` while still producing the text view in `content`.
- If you were wrapping providers manually, prefer `protect_with_adapter()` plus the adapter factories in `blackwall_llm_shield.providers`.
- If you want conservative rollout, switch to `preset="shadow_first"` before enabling hard blocking on every route.
- If you already have an internal model-security abstraction, prefer wrapping Blackwall behind that layer and migrating route by route.
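The last point above, wrapping Blackwall behind an internal abstraction and migrating route by route, can be sketched as follows. Note that `protect_model_call` is stubbed here as a stand-in: its real signature lives in `blackwall_llm_shield` and is an assumption in this sketch.

```python
# Sketch of the "internal wrapper" rollout pattern: one internal entry
# point, with routes opting in to the shield one at a time.
GUARDED_ROUTES = {"rag-search"}  # start with a single route, expand gradually

def protect_model_call(call, *args, **kwargs):
    # ASSUMPTION: stub standing in for the library's protected call path;
    # the real function and its return shape may differ.
    return {"guarded": True, "result": call(*args, **kwargs)}

def model_call(prompt: str) -> str:
    # Placeholder for the application's actual provider call.
    return f"echo: {prompt}"

def guarded_call(route: str, prompt: str) -> dict:
    """All application code calls this wrapper, never the provider directly."""
    if route in GUARDED_ROUTES:
        return protect_model_call(model_call, prompt)
    return {"guarded": False, "result": model_call(prompt)}
```

Because callers only ever see `guarded_call`, flipping a route from unguarded to guarded is a one-line change to `GUARDED_ROUTES` rather than a refactor of every call site.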
- Existing string-based `messages[].content` flows remain supported.
- Existing `guard_model_request()` and `OutputFirewall` usage remains backward-compatible.
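To illustrate the back-compat behavior described above, here is a self-contained sketch of how a string `content` and an array of parts might both normalize to the same `content` / `content_parts` pair. The field names come from the migration notes; the normalization logic itself is an assumption for illustration, not the library's implementation.

```python
# ASSUMPTION: illustrative normalizer only; the real logic inside
# blackwall_llm_shield may differ. Shows both supported input shapes
# producing a text view in `content` and preserved parts in `content_parts`.
def normalize_message(message: dict) -> dict:
    raw = message.get("content")
    if isinstance(raw, str):
        # Legacy string-based flow: wrap as a single text part.
        return {**message, "content": raw,
                "content_parts": [{"type": "text", "text": raw}]}
    parts = list(raw)  # multimodal flow: list of part dicts
    text_view = " ".join(
        p.get("text", "") for p in parts if p.get("type") == "text"
    )
    return {**message, "content": text_view, "content_parts": parts}

msg = {"role": "user", "content": [
    {"type": "text", "text": "Describe this image."},
    {"type": "image", "url": "https://example.com/cat.png"},
]}
normalized = normalize_message(msg)
```

Either way, downstream code that only reads `content` keeps working, while part-aware code can inspect `content_parts`.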