11 min read · Guard0 Team

Shadow Agents: Finding the AI You Don't Know About

Shadow AI agents are proliferating across enterprises. Learn how to discover them, assess their risks, and bring them under governance.

Tags: Shadow AI · Discovery · Governance · Asset Management · Compliance

Here's a question I ask every CISO I meet: "How many AI agents are running in your organization?"

The answers follow a pattern:

  • "We have about 15 official agents"
  • "Maybe 20 if you count the pilot projects"
  • "I think around 30"

Then I ask: "How many do you think are really running?"

Silence. Uncomfortable looks. Then:

  • "I... actually don't know"
  • "Probably a lot more than I'm aware of"
  • "That's what keeps me up at night"

Welcome to the shadow agent problem.

According to recent surveys, 88% of organizations have experienced security incidents related to unauthorized AI deployments.

Every team that has access to an LLM API or an agent-building platform can deploy an agent. And many are—without security review, without governance, without IT even knowing. These shadow agents access your data, make decisions, and take actions outside your security model.

In this article, I'll walk through how shadow agents appear, why they're dangerous, how to find them, and what to do once you find them.

* * *

How Shadow Agents Appear

Where Shadow Agents Come From

[Diagram: dev teams, business units, third-party vendors, and ChatGPT wrappers all feed shadow agents into your network]

Shadow agents emerge through several paths:

Developer Experimentation

Developers are natural tinkerers. When AI coding assistants started improving productivity, developers started building:

  • Personal productivity agents: "I built an agent that reads my emails and summarizes action items"
  • Development helpers: "We have an agent that reviews PRs before humans see them"
  • Automation: "The team built an agent that creates Jira tickets from Slack messages"

These often start as experiments, prove useful, and quietly become production dependencies.

Departmental Initiatives

Business teams are adopting AI faster than IT can support:

  • Sales: "We built an agent that drafts follow-up emails from CRM data"
  • Marketing: "Our agent generates content from product specs"
  • HR: "We have an agent that screens resumes"
  • Legal: "The team built a contract review agent"

These teams have budget, business need, and platforms (Copilot Studio, AgentForce) that don't require IT involvement.

Third-Party Integrations

Vendors are embedding AI agents in their products:

  • Your CRM added an AI assistant
  • Your support platform has chatbot capabilities
  • Your analytics tool includes an AI analyst
  • Your productivity suite has embedded copilots

Did security review each one? Probably not.

ChatGPT Wrappers

The simplest shadow agents are unofficial ChatGPT integrations:

  • Browser extensions that send page content to ChatGPT
  • Slack bots that forward messages to GPT
  • Email plugins that draft responses using LLMs
  • Spreadsheet add-ons with AI analysis

These might use corporate credentials or handle corporate data—with no visibility.

* * *

Why Shadow Agents Are Dangerous

Shadow agents aren't inherently malicious—but they create significant risk:

Data Exposure

Shadow agents access data without controls:

| Scenario | Risk |
| --- | --- |
| Agent reads customer emails | PII sent to third-party LLM |
| Agent accesses financial data | Confidential data in training logs |
| Agent queries production database | Sensitive data in prompts/responses |
| Agent uploads documents for analysis | Intellectual property exposure |

Without governance, data flows to wherever the agent sends it.

Compliance Violations

Regulatory requirements assume you control data processing:

  • GDPR: Data processing without legal basis
  • HIPAA: PHI flowing through unauthorized systems
  • SOC 2: Processing outside documented controls
  • Industry regulations: Sector-specific requirements unmet

Shadow agents create undocumented data processing that auditors will question.

Security Blind Spots

Your security stack doesn't see shadow agents:

  • No authentication monitoring
  • No authorization controls
  • No behavioral baseline
  • No incident detection
  • No audit trail

If a shadow agent is compromised, you won't know until the damage is done.

Attack Surface Expansion

Every shadow agent is a potential attack vector:

  • Prompt injection vulnerabilities
  • Credential exposure
  • Misconfigured permissions
  • No security testing
  • No vulnerability management

Attackers look for the weakest entry point. Shadow agents often qualify.

The OpenClaw security crisis illustrates what happens when unmanaged agent instances proliferate — 135,000+ exposed instances were discovered, many deployed without organizational awareness.

Accountability Gap

When something goes wrong, who's responsible?

  • The employee who deployed it?
  • The team that used it?
  • The vendor that provided it?
  • IT for not preventing it?

Shadow agents exist in a governance vacuum.

* * *

Finding Shadow Agents

Shadow Agent Discovery Methods

[Diagram: discovery methods — network analysis (DNS monitoring, API gateway logs), cloud inventory (service scanning, IAM audit), and endpoint/survey (agent processes, team interviews)]

The following sequence shows how a typical shadow agent discovery flow works, with multiple signals converging to identify and confirm an unauthorized agent.

SHADOW AGENT DISCOVERY FLOW (actors: Security Team, Network Monitor, Cloud Scanner, g0 CLI)

  1. Security team enables AI API monitoring
  2. Network monitor detects anomalous traffic to api.openai.com
  3. Cloud scanner scans cloud accounts for AI services
  4. Unauthorized Bedrock endpoint found
  5. g0 CLI scans the discovered agent's repo
  6. 12 findings: 3 critical, 4 high
  7. Security team classifies risk and determines disposition

Discovering shadow agents requires multiple approaches:

Network Traffic Analysis

AI agents call external APIs. Look for:

Direct LLM API calls:

  • api.openai.com
  • api.anthropic.com
  • api.cohere.com
  • api.mistral.ai

Agent platform traffic:

  • Agent builder endpoints
  • MCP server connections
  • Tool/function calling patterns

Common AI agent API domains and MCP server endpoints:

  • Scan for traffic to known MCP server ports and endpoints
  • Monitor DNS queries for common AI agent API domains (e.g., api.openai.com, api.anthropic.com, api.cohere.com)
  • Detect MCP server discovery protocols and tool listing requests on your network

Analysis approach:

  • DNS logs: Queries to known LLM domains
  • Network flow: Connections to AI service IPs
  • Proxy logs: HTTPS traffic to AI endpoints

Caution: Some legitimate services use these APIs. You're looking for unexpected calls.
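As a minimal sketch of the DNS-log pass above, the snippet below flags queries to known LLM API domains. The log format (one `timestamp client_ip domain` entry per line) and the domain list are assumptions you'd adapt to your resolver's actual export:

```python
# Flag DNS queries to known LLM API domains.
# Assumed log format: "timestamp client_ip queried_domain", one query per line.
LLM_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "api.cohere.com",
    "api.mistral.ai",
}

def flag_llm_queries(log_lines):
    """Return (client_ip, domain) pairs for queries hitting LLM APIs."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue
        client, domain = parts[1], parts[2]
        # Match the domain itself or any subdomain of it
        if any(domain == d or domain.endswith("." + d) for d in LLM_DOMAINS):
            hits.append((client, domain))
    return hits

logs = [
    "2026-02-01T10:00:00 10.0.4.17 api.openai.com",
    "2026-02-01T10:00:01 10.0.4.17 example.com",
    "2026-02-01T10:00:02 10.0.9.50 api.anthropic.com",
]
print(flag_llm_queries(logs))  # [('10.0.4.17', 'api.openai.com'), ('10.0.9.50', 'api.anthropic.com')]
```

Each flagged client is a lead, not a verdict — the caution above still applies.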

API Gateway Monitoring

If traffic goes through API gateways:

  • Look for AI API destinations in routing logs
  • Check for unusual authentication patterns
  • Monitor for high-volume requests
  • Track new API integrations

Cloud Service Inventory

Check your cloud environments:

AWS:

  • Bedrock usage
  • SageMaker endpoints
  • Lambda functions calling AI APIs
  • Secrets containing AI API keys

Azure:

  • Azure OpenAI deployments
  • Copilot Studio environments
  • Function apps with AI dependencies

GCP:

  • Vertex AI usage
  • Cloud Functions calling AI
  • API keys for AI services
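One way to work through a cloud inventory export is to flag functions whose environment variables reference AI API keys. The input shape below is an assumption modeled loosely on what a function-listing export might contain; no cloud SDK is called, so the same filter works across providers:

```python
# Scan function configurations for AI-related secrets.
# The input shape is an illustrative assumption, not a specific cloud API's output.
AI_KEY_NAMES = {"OPENAI_API_KEY", "ANTHROPIC_API_KEY", "COHERE_API_KEY"}

def find_ai_functions(functions):
    """Return names of functions whose env vars reference AI API keys."""
    flagged = []
    for fn in functions:
        env = fn.get("Environment", {}).get("Variables", {})
        if AI_KEY_NAMES & set(env):
            flagged.append(fn["FunctionName"])
    return flagged

inventory = [
    {"FunctionName": "billing-report",
     "Environment": {"Variables": {"DB_HOST": "db.internal"}}},
    {"FunctionName": "pr-review-bot",
     "Environment": {"Variables": {"OPENAI_API_KEY": "redacted"}}},
]
print(find_ai_functions(inventory))  # ['pr-review-bot']
```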

SaaS Inventory

Review your SaaS applications:

  • Which have added AI features?
  • Which are sending data to external AI?
  • Which have embedded agents?
  • What permissions have users granted?

CASB tools can help identify AI-related SaaS traffic.

Code Repository Scanning

Search codebases for:

Import patterns:

```python
import openai
import anthropic
from langchain import ...
from crewai import ...
from autogen import ...
```

Configuration patterns:

```text
OPENAI_API_KEY
ANTHROPIC_API_KEY
agent_config
llm_model
```

Common file names:

```text
agent.py
copilot.py
assistant.py
*_agent.*
```
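A simple grep-style pass over source files covers the import and configuration patterns listed above. The regexes below are an illustrative subset, not an exhaustive ruleset:

```python
import re

# Illustrative subset of the patterns above: AI framework imports and API key names.
AI_PATTERNS = [
    r"\bimport\s+(openai|anthropic)\b",
    r"\bfrom\s+(langchain|crewai|autogen)\b",
    r"\b(OPENAI|ANTHROPIC)_API_KEY\b",
]

def scan_source(text):
    """Return 1-based line numbers that match any AI pattern."""
    hits = []
    for i, line in enumerate(text.splitlines(), start=1):
        if any(re.search(p, line) for p in AI_PATTERNS):
            hits.append(i)
    return hits

sample = "import os\nimport openai\nkey = os.environ['OPENAI_API_KEY']\n"
print(scan_source(sample))  # [2, 3]
```

In practice you'd run this across every repository and feed the hits into your assessment queue.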

Employee Surveys

Sometimes the simplest approach works:

  • Survey teams about AI tool usage
  • Ask about productivity tools
  • Inquire about automation
  • Create safe harbor for disclosure

People are often willing to disclose what they're using when asked without blame.

Expense Analysis

Follow the money:

  • AI API charges on expense reports
  • SaaS subscriptions with AI features
  • Credit card payments to AI vendors
  • Department budgets for AI tools

Financial traces reveal shadow deployments.
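The expense trail lends itself to the same kind of filter: flag line items whose vendor matches a known AI provider. The vendor list and record shape are illustrative assumptions:

```python
# Known AI vendors to match against expense records (illustrative list).
AI_VENDORS = ["openai", "anthropic", "cohere", "mistral"]

def flag_ai_expenses(expenses):
    """Return expense records whose vendor field mentions an AI provider."""
    return [e for e in expenses
            if any(v in e["vendor"].lower() for v in AI_VENDORS)]

report = [
    {"vendor": "OpenAI, LLC", "amount": 240.00, "dept": "Sales"},
    {"vendor": "Office Depot", "amount": 55.10, "dept": "Sales"},
]
print(flag_ai_expenses(report))  # [{'vendor': 'OpenAI, LLC', 'amount': 240.0, 'dept': 'Sales'}]
```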

* * *

Assessing Shadow Agents

Once discovered, assess each shadow agent:

Data Access Assessment

Questions to answer:

  • What data does it access?
  • How sensitive is that data?
  • Where does data flow?
  • Is data stored or logged externally?

Capability Assessment

Questions to answer:

  • What actions can the agent take?
  • Are any actions high-risk (email, database write, etc.)?
  • How autonomous is it?
  • What tools/integrations does it have?

Security Assessment

Questions to answer:

  • Has it been tested for prompt injection?
  • How are credentials managed?
  • Is there any monitoring?
  • Who has access to configure it?

Compliance Assessment

Questions to answer:

  • Does it process regulated data?
  • Is processing documented?
  • Can it meet audit requirements?
  • Are required controls in place?

Risk Classification

Based on assessment, classify risk:

| Risk Level | Characteristics |
| --- | --- |
| Critical | Accesses restricted data, takes irreversible actions, no controls |
| High | Accesses confidential data, has significant capabilities |
| Medium | Accesses internal data, limited actions, some controls |
| Low | Minimal data access, no significant actions, basic controls |
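The classification rules above can be sketched as a function. The attribute names and sensitivity tiers are assumptions mirroring the table, not a formal scoring model:

```python
def classify_risk(data_sensitivity, irreversible_actions, has_controls):
    """Map assessment answers to the risk levels above.

    data_sensitivity: 'restricted' | 'confidential' | 'internal' | 'minimal'
    """
    # Critical: restricted data + irreversible actions + no controls
    if data_sensitivity == "restricted" and irreversible_actions and not has_controls:
        return "Critical"
    # High: restricted or confidential data with some mitigation
    if data_sensitivity in ("restricted", "confidential"):
        return "High"
    # Medium: internal data, limited actions
    if data_sensitivity == "internal":
        return "Medium"
    return "Low"

print(classify_risk("restricted", True, False))  # Critical
print(classify_risk("internal", False, True))    # Medium
```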
* * *

From Shadow to Sanctioned

Discovered shadow agents need disposition:

Option 1: Sanction

Bring the agent under governance:

  1. Register: Add to agent inventory
  2. Assess: Full security assessment
  3. Remediate: Address security gaps
  4. Document: Owner, purpose, data flows
  5. Monitor: Add to security monitoring
  6. Govern: Include in review cycles

This is appropriate when the agent serves a legitimate need and can be secured.
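The "Register" step amounts to creating a structured inventory record for each agent. A minimal sketch — the field names here are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One entry in the agent inventory (field names are illustrative)."""
    name: str
    owner: str
    purpose: str
    data_flows: list = field(default_factory=list)
    risk_level: str = "Unassessed"
    monitored: bool = False

record = AgentRecord(
    name="pr-review-bot",
    owner="platform-team",
    purpose="Reviews PRs before human review",
    data_flows=["GitHub -> OpenAI API"],
)
print(record.risk_level)  # Unassessed
```

The later steps — assess, monitor, govern — then update `risk_level` and `monitored` over time.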

Option 2: Replace

Replace with a sanctioned alternative:

  1. Understand needs: What is the agent doing?
  2. Identify alternatives: Approved tools that meet needs
  3. Migrate: Move users to sanctioned solution
  4. Retire: Decommission shadow agent
  5. Prevent: Block the shadow path

This is appropriate when a better alternative exists.

Option 3: Retire

Shut down without replacement:

  1. Assess impact: Who depends on it?
  2. Communicate: Notify users with lead time
  3. Decommission: Disable agent
  4. Revoke: Remove all access
  5. Monitor: Ensure it stays off

This is appropriate for agents that shouldn't exist.

Option 4: Remediate Urgently

For critical-risk shadow agents:

  1. Immediate disable: Stop the agent now
  2. Investigate: Has any harm occurred?
  3. Contain: Limit any damage
  4. Remediate: Address vulnerabilities
  5. Decide: Sanction, replace, or retire
* * *

Preventing Future Shadow Agents

Discovery is reactive. Prevention is better:

Create Sanctioned Paths

Make it easy to do the right thing:

  • Provide approved agent-building platforms
  • Offer API access through governed channels
  • Create templates for common agent patterns
  • Staff an AI enablement team

If building sanctioned agents is easier than shadow agents, people will use sanctioned paths.

Technical Controls

Make shadow agents harder:

  • Block unauthorized AI API endpoints
  • Require approval for AI SaaS
  • Monitor for AI-related traffic
  • Implement DLP for AI data flows

Technical controls reduce casual shadow deployment.
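The "block unauthorized AI API endpoints" control boils down to an egress decision a proxy or firewall policy applies. A sketch under assumed names — the blocklist and approved-service list are illustrative:

```python
# Sketch of a proxy-style egress decision: block AI API hosts unless the
# calling service is on an approved list (names are illustrative).
BLOCKED_AI_HOSTS = {"api.openai.com", "api.anthropic.com"}
APPROVED_SERVICES = {"sanctioned-agent-gateway"}

def allow_egress(service, host):
    """Allow traffic unless it targets an AI API from an unapproved service."""
    if host in BLOCKED_AI_HOSTS and service not in APPROVED_SERVICES:
        return False
    return True

print(allow_egress("random-dev-script", "api.openai.com"))        # False
print(allow_egress("sanctioned-agent-gateway", "api.openai.com"))  # True
```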

Policy and Training

Set expectations:

  • AI acceptable use policy
  • Agent deployment requirements
  • Training on risks and proper channels
  • Consequences for violations (measured)

People follow rules when they understand why.

Continuous Discovery

Keep looking:

  • Regular shadow agent scans
  • Ongoing network monitoring
  • Periodic surveys
  • Financial review

Shadow agents will keep appearing. Discovery must be continuous.

* * *

Key Takeaways

  1. Shadow agents are everywhere: Most organizations have more than they know

  2. Discovery requires multiple approaches: Network, cloud, code, surveys, expenses

  3. Assessment determines disposition: Sanction, replace, or retire

  4. Prevention beats discovery: Make the right path easy

  5. This is ongoing: Shadow agents will keep appearing

* * *


Discover Your Shadow Agents

Guard0's Scout agent continuously discovers AI agents across your enterprise—even the ones you don't know about.

Join the Beta → Get Early Access

Or book a demo to discuss your security requirements.

* * *



Shadow agent discovery methods evolve as platforms change. Last updated: February 2026.
