// Risk
Tool execution and side effects
Agents chain tools: APIs, shells, browsers, and infrastructure services. Each step can cause irreversible side effects. Governance must sit where tool calls become commits; see AI system control layer and pre-execution authorization.
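A minimal sketch of the idea that governance sits between the agent's decision and the commit point: every tool call passes an authorization check before it executes. The names here (`ToolCall`, `authorize`, `execute`) and the deny-irreversible policy are illustrative assumptions, not a real TrigGuard API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolCall:
    tool: str           # e.g. "shell", "http", "deploy"
    action: str         # the operation the agent wants to perform
    irreversible: bool  # whether the side effect can be undone

def authorize(call: ToolCall, allowed_tools: set[str]) -> bool:
    """Return True only if the call passes policy before execution."""
    if call.tool not in allowed_tools:
        return False   # unknown tool: deny
    if call.irreversible:
        return False   # irreversible effects require explicit approval
    return True

def execute(call: ToolCall, allowed_tools: set[str]) -> str:
    # The gate sits where the tool call would commit a side effect.
    if not authorize(call, allowed_tools):
        return f"DENIED: {call.tool}.{call.action}"
    return f"EXECUTED: {call.tool}.{call.action}"
```

The point of the sketch is the placement: the check runs per call, at the commit point, not once at session start.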
// Shift
From recommendation to execution
Many systems still optimize for answer quality, but production failures increasingly come from execution: a transfer, a deploy, a data export, a privilege change. That shift is why these category pages center on AI execution governance rather than generic “AI safety” essays.
// Limits
Why common controls are incomplete alone
Monitoring, human review, rate limits, and sandboxes reduce risk, but none of them replaces a hard gate at commit time when agents run unattended. For fail-mode semantics, see fail-closed AI systems; for policy kernels, see policy enforcement engine.
// Stack diagram
Execution governance stack (agents)
A single reference diagram shows the control loop (SVG, with searchable text). The pillar hub keeps a lighter path figure to avoid repeating the same asset everywhere.
// Control layer
Where TrigGuard sits
TrigGuard is an authorization and receipt layer for execution requests, compatible with agent frameworks but not defined by any single vendor. Implementation: protocol, docs, products, Verify; start with the runtime docs for integration.
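To make "receipt layer" concrete, here is a hedged sketch of one possible receipt shape: a tamper-evident record of each authorization decision, keyed with an HMAC so it can be verified later. The field names and signing scheme are assumptions for illustration, not TrigGuard's actual format.

```python
import hashlib
import hmac
import json

def make_receipt(request_id: str, decision: str, key: bytes) -> dict:
    """Produce a signed record of an authorization decision."""
    body = {"request_id": request_id, "decision": decision}
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return body

def verify_receipt(receipt: dict, key: bytes) -> bool:
    """Recompute the signature over everything except "sig" and compare."""
    body = {k: v for k, v in receipt.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(receipt.get("sig", ""), expected)
```

Any edit to the recorded decision, or verification with the wrong key, makes `verify_receipt` return False, which is the property a receipt layer needs for after-the-fact audit.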
// Related
Cluster, pre-execution, and industries
// Hub
Category pillar
Return to the cluster hub: AI execution governance.