Guard0 Research · Framework

The Three Questions

An accountability framework for AI agents.

We ask every CISO three questions. Nobody has answered the first one yet. This framework provides the methodology, maturity model, and self-assessment to change that.

The Premise

AI agents are the fastest-growing workforce in the enterprise. Industry projections hold that by mid-2026, 50% of enterprise working hours will be reshaped by AI agents, and that the agentic AI governance market will reach $38.9B by 2030.

Yet most organizations cannot answer a simple question: how many agents do we have?

The Three Questions framework reduces AI agent accountability to its irreducible core. If you can answer all three — with evidence, not assumptions — you have accountability. If you can't, you have risk.

“Intelligence may be scalable, but accountability is not.”

— Accenture & Wharton, “The Age of Co-Intelligence” (2026)
01

What agents do we have?

Discovery & Inventory

Why It Matters

You can't be accountable for what you can't see. Most organizations have 3-10x more agents than they think — shadow agents deployed by individual teams, embedded in SaaS platforms, or spun up in experiments that never got decommissioned.

43% of enterprise AI agents are unknown to the security team.

Guard0 research, 2.4M+ agents scanned

What Good Looks Like

  • Complete, continuously updated inventory of all AI agents
  • Agents classified by framework, owner, deployment environment, and risk level
  • Shadow agent detection across code repos, cloud deployments, and SaaS platforms
  • Automated discovery — not relying on self-reporting or manual audits
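Automated discovery can start as simply as scanning source for the fingerprints of known agent frameworks. The sketch below is a minimal illustration, not Guard0's implementation: the framework names and regex signatures are assumptions, and a real scanner would also cover cloud deployments and SaaS configurations, not just code.

```python
import re

# Illustrative signatures for a few popular agent frameworks.
# This pattern list is an assumption and is far from exhaustive.
AGENT_SIGNATURES = {
    "langchain": re.compile(r"\bfrom\s+langchain|\bimport\s+langchain"),
    "crewai": re.compile(r"\bfrom\s+crewai|\bimport\s+crewai"),
    "autogen": re.compile(r"\bfrom\s+autogen|\bimport\s+autogen"),
}

def discover_agents(files):
    """Return {path: [frameworks]} for files that look like agent code.

    `files` maps a repo-relative path to its source text, so the
    scan itself stays agentless and read-only.
    """
    findings = {}
    for path, source in files.items():
        hits = [name for name, pat in AGENT_SIGNATURES.items()
                if pat.search(source)]
        if hits:
            findings[path] = hits
    return findings
```

Run continuously across every repo, even a crude scan like this surfaces shadow agents that self-reporting never would.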

Maturity Model

Level 0 — Unknown

No centralized view of AI agents. Teams deploy independently. No one knows how many agents exist.

Level 1 — Manual

Spreadsheet-based inventory. Relies on team self-reporting. Updated quarterly at best. Always incomplete.

Level 2 — Scanned

Automated scanning across known environments. Catches most agents but may miss shadow deployments in new platforms.

Level 3 — Continuous

Real-time, agentless discovery across all environments. Shadow agents detected within hours. Full AI Asset Graph maintained automatically.

02

What can they access?

Access Mapping & Blast Radius

Why It Matters

A single agent can chain through tools, databases, APIs, and external services — creating a blast radius that no one mapped. When that agent is compromised, the blast radius determines the damage. Most agents have far more access than they need.

The average enterprise AI agent has access to 8.3 data sources. Most need 2.

Guard0 platform data

What Good Looks Like

  • Complete access graph for every agent: tools, data sources, APIs, MCP servers, external services
  • Blast radius analysis — what's the worst-case impact if this agent is compromised?
  • Least-privilege enforcement — agents only access what they demonstrably need
  • Permission gap analysis — the delta between what agents have and what they should have
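At its core, blast radius is a graph-reachability question: everything an agent can touch transitively, not just what it was granted directly. A minimal sketch under assumed data (the node names and edges are hypothetical; real edges would come from IAM policies, tool manifests, and MCP server configs):

```python
from collections import deque

def blast_radius(graph, agent):
    """All resources transitively reachable from `agent`: the
    worst-case impact if that agent is compromised."""
    seen, queue = set(), deque([agent])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def permission_gap(reachable, needed):
    """The delta between what an agent can reach and what it has
    demonstrably needed -- the target for least-privilege cuts."""
    return reachable - needed
```

Note how one hop into a broad API (a CRM, a data warehouse) can pull an entire downstream data estate into the radius; that chaining is why documented-at-deployment permissions understate real exposure.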

Maturity Model

Level 0 — Blind

No visibility into what agents can access. Permissions granted ad-hoc by developers. No blast radius concept.

Level 1 — Documented

Access documented at deployment time but never updated. Drift accumulates. Documentation ages quickly.

Level 2 — Mapped

Automated access mapping shows current permissions. Blast radius calculated but not enforced. Alerts on over-permissioned agents.

Level 3 — Enforced

Real-time access graph with automatic least-privilege recommendations. Blast radius monitoring. Permission anomalies trigger alerts.

03

Is behavior aligned?

Runtime Governance & Enforcement

Why It Matters

An agent can have correct permissions and still behave dangerously — exfiltrating data within its access scope, drifting from its intended behavior pattern, or being hijacked via prompt injection. The question isn't just what agents can do. It's what they are doing.

88% of organizations have already experienced an AI agent security incident.

Industry survey data, 2026

What Good Looks Like

  • Behavioral baselines established for every agent — what "normal" looks like
  • Real-time anomaly detection when agents deviate from baselines
  • Policy enforcement — automated responses to drift, not just alerts
  • Kill switch — ability to immediately isolate a compromised or rogue agent
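The loop from baseline to enforcement can be sketched in a few lines. This is a toy statistical baseline, not a production detector: the metric (tool calls per minute), the z-score threshold, and the `kill()` behavior are all assumptions standing in for richer behavioral models and real containment actions.

```python
import statistics

class AgentMonitor:
    """Minimal behavioral baseline: quarantine an agent when a metric
    deviates more than `threshold` standard deviations from history."""

    def __init__(self, threshold=3.0, window=50):
        self.threshold = threshold
        self.window = window
        self.history = []
        self.quarantined = False

    def observe(self, value):
        if len(self.history) >= 10:  # need a baseline before judging
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            if abs(value - mean) / stdev > self.threshold:
                self.kill()  # automated enforcement, not just an alert
                return False
        self.history = (self.history + [value])[-self.window:]
        return True

    def kill(self):
        # Stand-in for a real kill switch: revoke credentials,
        # cancel in-flight tool calls, isolate the agent.
        self.quarantined = True
```

The design point is the last branch: at Level 2 that anomaly raises an alert for a human; at Level 3 it triggers containment first and escalates second.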

Maturity Model

Level 0 — Trust

Agents run without monitoring. Behavior is assumed correct because the prompt says so. No anomaly detection.

Level 1 — Logged

Agent actions are logged but not analyzed in real time. Post-incident forensics only. No behavioral baselines.

Level 2 — Monitored

Behavioral baselines established. Anomalies generate alerts. Human-in-the-loop for response. No automated enforcement.

Level 3 — Governed

Real-time behavioral monitoring with automated policy enforcement. Kill switch available. Drift triggers immediate containment. Humans in lead, not in loop.

Self-Assessment

Answer honestly. If you hesitate on any of these, you have a gap.

Can you produce a complete list of all AI agents in your organization within 24 hours?

Maps to: Q1

Do you know every tool, database, and API each agent can access?

Maps to: Q1 + Q2

Can you calculate the blast radius if your most privileged agent is compromised?

Maps to: Q2

Do your agents have least-privilege access, or just "whatever the developer gave them"?

Maps to: Q2

Do you have behavioral baselines for your agents? Would you know if one drifted?

Maps to: Q3

Can you kill a rogue agent within 60 seconds? Do you have a kill switch?

Maps to: Q3

Could you satisfy an EU AI Act audit of your agent infrastructure today?

Maps to: Q1 + Q2 + Q3
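Because each checklist item maps to specific questions, the assessment can be tracked as a simple structure. The item keys below are hypothetical shorthand for the seven questions above; a "no" on any item flags every question it evidences.

```python
# Hypothetical encoding of the self-assessment checklist:
# each item -> the question(s) it maps to.
CHECKLIST = {
    "inventory_within_24h": {"Q1"},
    "full_access_map": {"Q1", "Q2"},
    "blast_radius_known": {"Q2"},
    "least_privilege": {"Q2"},
    "behavioral_baselines": {"Q3"},
    "kill_switch_60s": {"Q3"},
    "eu_ai_act_ready": {"Q1", "Q2", "Q3"},
}

def gaps(answers):
    """Questions flagged by at least one failed (or missing) item."""
    flagged = set()
    for item, questions in CHECKLIST.items():
        if not answers.get(item, False):
            flagged |= questions
    return flagged
```

Treat an unanswered item as a "no": if you hesitate, it counts against you, which matches the spirit of the checklist.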

Adopting the Framework

The Three Questions framework is designed to be adopted whether or not you use Guard0. It maps to existing standards — OWASP Agentic Top 10, NIST AI RMF, EU AI Act Article 14, ISO 42001 — and provides a simple, repeatable structure for evaluating your AI agent accountability posture.

For organizations using Guard0, each question maps directly to platform capabilities:

Answer all three. In fifteen minutes.

Agentless. Read-only by default. SaaS, Private SaaS, or On-Prem.