AI systems are rapidly entering permissioned domains: settings where they hold credentials and can take actions with real-world consequences.
Today, most tooling focuses on observability. But observability only records what the system claims happened. It does not guarantee that the record is authentic, complete, or tamper-evident.
We built AAR-MCP-2.0 as a minimal prototype of an AI Evidence Layer.
You can reproduce the attack demo locally:
```shell
cd demo/high_risk_authority
./run.sh
```

Expected result:

```
Original log  -> VERIFY_OK
Tampered log  -> Merkle mismatch
```
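To make the tamper-detection step concrete, here is a minimal sketch of the general idea (not the AAR-MCP-2.0 implementation): log entries are hashed into a Merkle root, so changing any entry after the fact changes the root and fails verification.

```python
# Illustrative sketch of Merkle-root log verification; the entry format and
# duplicate-last-node padding rule here are assumptions, not AAR-MCP-2.0's.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash each log entry, then fold pairs upward until one root remains."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # pad odd levels by duplicating last node
        level = [sha256(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

log = [b"action:read_file", b"action:send_email", b"action:delete_row"]
root = merkle_root(log)              # commit to this root at write time

assert merkle_root(log) == root      # original log verifies

tampered = list(log)
tampered[1] = b"action:send_email(to=attacker)"
assert merkle_root(tampered) != root  # any edit breaks the root
```

The point of the structure is that the verifier only needs to store (or sign) the single root, yet any retroactive edit to any entry is detectable.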
The core questions:

- Should high-risk AI actions require cryptographic receipts?
- Is an Evidence Layer necessary once AI crosses into permissioned domains?
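To ground the receipt question, here is one possible shape such a receipt could take, sketched with stdlib HMAC (function names and the key-handling model are hypothetical; a real Evidence Layer would more likely use asymmetric signatures such as Ed25519 so verification needs no shared secret):

```python
# Hypothetical sketch: a receipt binds a canonical action record to a MAC
# computed under a key the agent itself does not hold, so the agent cannot
# forge or retroactively alter receipts. Not the AAR-MCP-2.0 API.
import hashlib, hmac, json

VERIFIER_KEY = b"held-by-the-evidence-layer-not-the-agent"  # assumed key model

def issue_receipt(action: dict) -> dict:
    record = json.dumps(action, sort_keys=True).encode()  # canonical encoding
    return {
        "digest": hashlib.sha256(record).hexdigest(),
        "tag": hmac.new(VERIFIER_KEY, record, hashlib.sha256).hexdigest(),
    }

def verify_receipt(action: dict, receipt: dict) -> bool:
    record = json.dumps(action, sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, record, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["tag"])

action = {"tool": "db.delete", "target": "users/42"}
receipt = issue_receipt(action)
assert verify_receipt(action, receipt)                           # authentic
assert not verify_receipt({**action, "target": "users/1"}, receipt)  # forged
```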
We welcome technical critique and alternative approaches.