FINRA's 2026 Annual Regulatory Oversight Report has an entire section on agentic AI. It's not vague hand-waving about "responsible AI" either. The language is operational.
Some direct quotes:
- "AI agents acting autonomously without human validation and approval" is flagged as an autonomy risk
- Firms should consider "where to have 'human in the loop' agent oversight protocols or practices"
- They want firms to know "how to track agent actions and decisions"
- And "how to establish guardrails or control mechanisms to limit or restrict agent behaviors, actions or decisions"
- "Agents may act beyond the user's actual or intended scope and authority"
This maps almost 1:1 to what we've built at SidClaw:
| FINRA 2026 Language | SidClaw Primitive |
| --- | --- |
| "human validation and approval" | Approval workflow — action holds until a human approves or denies |
| "track agent actions and decisions" | Hash-chain trace — every action, approval, and denial is cryptographically chained |
| "guardrails or control mechanisms" | Policy engine — priority-ordered rules that allow, deny, flag, or log each action |
| "limit or restrict agent behaviors" | Scoped agent identities with per-agent policy sets |
| "human in the loop oversight protocols" | Context-rich approval cards showing payload, reasoning, and risk classification |
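To make the "track agent actions and decisions" row concrete, here's a minimal sketch of a hash-chain trace: each record's hash covers both the action and the previous record's hash, so editing any earlier entry invalidates everything after it. The field names and record shape are my assumptions for illustration, not SidClaw's actual schema.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(chain, action):
    """Append an action record whose hash chains to the previous entry.

    Hypothetical sketch — 'action', 'prev_hash', 'hash' are assumed field
    names, not a real SidClaw schema.
    """
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = {"action": action, "prev_hash": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash; any tampered entry breaks the chain onward."""
    prev_hash = GENESIS
    for record in chain:
        body = {"action": record["action"], "prev_hash": prev_hash}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected or record["prev_hash"] != prev_hash:
            return False
        prev_hash = record["hash"]
    return True

chain = []
append_entry(chain, {"agent": "trader-bot", "type": "order", "decision": "approved"})
append_entry(chain, {"agent": "trader-bot", "type": "transfer", "decision": "denied"})
assert verify(chain)

# Retroactively flipping an earlier decision invalidates the whole suffix:
chain[0]["action"]["decision"] = "approved-after-the-fact"
assert not verify(chain)
```

This is the same property that makes the trace useful for supervision: an auditor only needs the final hash to detect that any earlier approval or denial was rewritten.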
What's notable here is the specificity. FINRA isn't saying "think about AI governance." They're saying: validate before execution, track what happened, restrict what agents can do. That's an approval workflow, an audit trail, and a policy engine. Three concrete primitives.
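A priority-ordered policy engine of the kind described above can be sketched in a few lines: rules are evaluated in priority order, the first match wins, and an unmatched action is denied by default (fail closed). The rule shape and verdict names here are assumptions for illustration, not SidClaw's actual API.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of a priority-ordered policy engine. Verdicts:
# "allow" executes, "deny" blocks, "flag" holds for human approval,
# "log" executes but records the action.

@dataclass
class Rule:
    priority: int                      # lower number = evaluated first
    matches: Callable[[dict], bool]    # predicate over the proposed action
    verdict: str                       # "allow" | "deny" | "flag" | "log"

def evaluate(rules, action):
    """Return the verdict of the highest-priority matching rule.

    An action no rule matches is denied, so unknown agent behaviors
    fail closed rather than open.
    """
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.matches(action):
            return rule.verdict
    return "deny"

rules = [
    Rule(10, lambda a: a["type"] == "wire" and a["amount"] > 10_000, "flag"),
    Rule(20, lambda a: a["type"] == "wire", "allow"),
    Rule(30, lambda a: a["type"] == "read", "log"),
]

assert evaluate(rules, {"type": "wire", "amount": 50_000}) == "flag"
assert evaluate(rules, {"type": "wire", "amount": 100}) == "allow"
assert evaluate(rules, {"type": "delete_account"}) == "deny"
```

The "flag" verdict is where the approval workflow attaches: the action holds until a human approves or denies it, and that decision lands in the audit trail.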
Rule 3110 (Supervision) already requires "a reasonably designed supervisory system tailored to its business." FINRA is now applying that standard to AI agents. If your agents can act or transact without documented human checkpoints, that's a supervision gap under existing rules — not a new hypothetical.
For anyone building agents in financial services: what does your current approval workflow look like? Are you building this in-house or evaluating third-party tooling?
Source: https://www.finra.org/rules-guidance/guidance/reports/2026-finra-annual-regulatory-oversight-report/gen-ai