magesh.ai agent v1.0 (views are my own) · kill-chain resources about
viewing: governance_risk · 00:00:00
← agent.navigate: resources / governance & risk
25 min read · 8 frameworks · 6 governance gaps · 15 references

Agentic AI Risk for Security Leaders

Every major governance framework was designed for the chatbot era. Agent-specific coverage is either absent, bolt-on, or less than three months old. Your security team is evaluating agent deployments with the wrong risk model. Here's what to use instead.

category:
Governance & Risk · security-leaders
TECHNICAL FOUNDATION This article is the governance summary of the Agentic AI Kill Chain → read the full threat model

The Numbers

Data points for your next board presentation.

79%
of enterprises have blind spots where agents invoke tools or touch data
Akto, State of Agentic AI Security 2025
71%
say AI tools access core systems — only 16% govern that access
Cybersecurity Insiders & Saviynt, 2026
49%
of leaders say boards don't fully understand AI risks
Fortinet, 2025 Skills Gap Report

Eight Frameworks

What each covers for agents — and where each stops.

FrameworkAgent CoverageStatus
NIST AI 600-1None — designed for chatbot/GenAI eraPublished Jul 2024
NIST CAISI Agent StandardsFirst agent-specific initiative. SP 800-53 overlay approach.Info-gathering, no standard yet
Google SAIF 2.0 / CoSAIAgent risk map addedAvailable
EU AI Act Article 15Technology-neutral — covers agents implicitlyObligations Aug 2, 2026
ISO/IEC 42001None — management system, not technicalCertifiable now
CSA AI Controls Matrix243 controls. "Agentic Control Plane" concept launched Mar 2026.Available + recent agent work
OpenAI Governance Practices7 practices purpose-built for agentsPrinciples only, no controls
Singapore MGF Agentic AIWorld's first agent-specific governance framework (4 dimensions)Published Jan 2026, voluntary
The gap: Gartner projects 40% enterprise application penetration for AI agents by end of 2026. NIST agent-specific standards will not be finalized until 2027 at earliest. Most organizations are deploying agents faster than governance can keep up.

Six Gaps No Framework Fully Addresses

01
Multi-agent delegation accountability

When Agent A delegates to Agent B which calls Tool C — who is accountable for the outcome? No framework defines accountability chains for multi-agent delegation. The confused deputy problem (Kill Chain Stage 4) has no governance equivalent.

02
Tool access governance at scale

71% of organizations say AI tools access core systems (Salesforce, SAP), but only 16% govern that access effectively (Cybersecurity Insiders & Saviynt, 2026). Agents need IAM-like governance — the CSA calls this the "agentic control plane" — but no standard defines how to implement it.

03
Cross-session memory poisoning

Persistent compromise across conversations — an attacker poisons the agent's memory once, and every future session follows the attacker's instructions. No governance framework addresses memory integrity verification. See Kill Chain Stage 6.

04
Behavioral drift detection

Gradual shift in agent behavior over time that looks like legitimate evolution but is actually compromise. Requires behavioral baselines — but no framework mandates them. Microsoft elevated AI observability to a security requirement in March 2026, but observability alone doesn't solve detection.

05
MCP protocol-level security

Tool schema poisoning, rug pulls, cross-server exfiltration — attack vectors at the protocol layer between agents and tools. See MCP Security for 6 documented attack vectors. No governance framework addresses tool protocol security.

06
Agent identity and access management

Agents are digital participants that need identities, permissions, oversight, and accountability — like human users. But IAM systems were designed for humans and service accounts, not autonomous decision-makers that reason about which tools to use.

What to Do Now

Five actions for security leaders
1
Inventory your agent deployments

Which agents exist, what tools they access, what permissions they have, whether they delegate to other agents. If you don't know this, you can't govern it.

2
Apply least privilege to tool access

Remove auto-approve from sensitive operations. Scope tool permissions to the minimum required. This is Kill Chain Stage 4 — the highest-impact, lowest-effort control.

3
Adopt the Singapore four-dimension structure

Assess and bound risks upfront, make humans meaningfully accountable, implement technical controls, enable end-user responsibility. It's the cleanest agentic governance model available.

4
Mandate agent observability

Complete audit trails of all agent interactions — prompts, responses, tool calls, parameters. If Microsoft elevated this to a security requirement, your organization should too. See Behavioral Baselines.

5
Track NIST CAISI

The SP 800-53 AI overlay approach means agent security controls integrate into your existing security program. If your org is NIST-aligned, this will be the standard. Comments close April 2, 2026.

This is the governance layer of the Agentic AI Kill Chain. For technical controls, see Hook Guardrails, MCP Security, and Red Teaming. For detection, see Behavioral Baselines.

References
[1]NIST AI 600-1, "AI Risk Management Framework: Generative AI Profile." (July 2024). nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
[2]NIST CAISI, "AI Agent Standards Initiative." (February 2026). nist.gov/caisi/ai-agent-standards-initiative
[3]Google, "Secure AI Framework (SAIF)." saif.google
[4]EU AI Act, Article 15 — Accuracy, Robustness, Cybersecurity. artificialintelligenceact.eu/article/15/
[5]ISO/IEC 42001:2023, "AI Management System Standard." iso.org/standard/42001
[6]Cloud Security Alliance, "AI Controls Matrix." cloudsecurityalliance.org/artifacts/ai-controls-matrix
[7]CSA, "Securing the Agentic Control Plane." (March 2026). cloudsecurityalliance.org/blog/2026/03/20/
[8]OpenAI, "Practices for Governing Agentic AI Systems." (December 2023). cdn.openai.com/papers/practices-for-governing-agentic-ai-systems.pdf
[9]IMDA Singapore, "Model AI Governance Framework for Agentic AI." (January 2026). imda.gov.sg
[10]Akto, "State of Agentic AI Security 2025." akto.io/blog/state-of-agentic-ai-security-2025
[11]Fortinet, "2025 Cybersecurity Skills Gap Report." 49% stat on board AI risk awareness.
[12]Cybersecurity Insiders & Saviynt, "2026 CISO AI Risk Report." 71%/16% stat on AI tool access governance.
[13]Splunk/Cisco, "Agentic AI and CISOs 2026." newsroom.cisco.com (February 2026)
[14]Microsoft Security Blog, "Observability for AI Systems: Strengthening Visibility and Proactive Risk Detection." microsoft.com/en-us/security/blog/2026/03/18/ (March 2026)
[15]Dhanasekaran, M. "The Agentic AI Kill Chain." magesh.ai/kill-chain (2026)

This work represents the author's independent research and personal views. It is not related to or endorsed by the author's employer.