Shadow Agents: Finding the AI You Don't Know About
Shadow AI agents are proliferating across enterprises. Learn how to discover them, assess their risks, and bring them under governance.

Here's a question I ask every CISO I meet: "How many AI agents are running in your organization?"
The answers follow a pattern:
- "We have about 15 official agents"
- "Maybe 20 if you count the pilot projects"
- "I think around 30"
Then I ask: "How many do you think are really running?"
Silence. Uncomfortable looks. Then:
- "I... actually don't know"
- "Probably a lot more than I'm aware of"
- "That's what keeps me up at night"
Welcome to the shadow agent problem.
According to recent surveys, 88% of organizations have experienced security incidents related to unauthorized AI deployments.
Every team that has access to an LLM API or an agent-building platform can deploy an agent. And many are—without security review, without governance, without IT even knowing. These shadow agents access your data, make decisions, and take actions outside your security model.
In this article, I'll walk through how shadow agents appear, why they're dangerous, how to find them, and what to do when you do.
How Shadow Agents Appear
Shadow agents emerge through several paths:
Developer Experimentation
Developers are natural tinkerers. When AI coding assistants started improving productivity, developers started building:
- Personal productivity agents: "I built an agent that reads my emails and summarizes action items"
- Development helpers: "We have an agent that reviews PRs before humans see them"
- Automation: "The team built an agent that creates Jira tickets from Slack messages"
These often start as experiments, prove useful, and quietly become production dependencies.
Departmental Initiatives
Business teams are adopting AI faster than IT can support:
- Sales: "We built an agent that drafts follow-up emails from CRM data"
- Marketing: "Our agent generates content from product specs"
- HR: "We have an agent that screens resumes"
- Legal: "The team built a contract review agent"
These teams have budget, business need, and platforms (Copilot Studio, AgentForce) that don't require IT involvement.
Third-Party Integrations
Vendors are embedding AI agents in their products:
- Your CRM added an AI assistant
- Your support platform has chatbot capabilities
- Your analytics tool includes an AI analyst
- Your productivity suite has embedded copilots
Did security review each one? Probably not.
ChatGPT Wrappers
The simplest shadow agents are unofficial ChatGPT integrations:
- Browser extensions that send page content to ChatGPT
- Slack bots that forward messages to GPT
- Email plugins that draft responses using LLMs
- Spreadsheet add-ons with AI analysis
These might use corporate credentials or handle corporate data—with no visibility.
Why Shadow Agents Are Dangerous
Shadow agents aren't inherently malicious—but they create significant risk:
Data Exposure
Shadow agents access data without controls:
| Scenario | Risk |
|---|---|
| Agent reads customer emails | PII sent to third-party LLM |
| Agent accesses financial data | Confidential data in training logs |
| Agent queries production database | Sensitive data in prompts/responses |
| Agent uploads documents for analysis | Intellectual property exposure |
Without governance, data flows to wherever the agent sends it.
Compliance Violations
Regulatory requirements assume you control data processing:
- GDPR: Data processing without legal basis
- HIPAA: PHI flowing through unauthorized systems
- SOC 2: Processing outside documented controls
- Industry regulations: Sector-specific requirements unmet
Shadow agents create undocumented data processing that auditors will question.
Security Blind Spots
Your security stack doesn't see shadow agents:
- No authentication monitoring
- No authorization controls
- No behavioral baseline
- No incident detection
- No audit trail
If a shadow agent is compromised, you won't know until the damage is done.
Attack Surface Expansion
Every shadow agent is a potential attack vector:
- Prompt injection vulnerabilities
- Credential exposure
- Misconfigured permissions
- No security testing
- No vulnerability management
Attackers look for the weakest entry point. Shadow agents often qualify.
The OpenClaw security crisis illustrates what happens when unmanaged agent instances proliferate — 135,000+ exposed instances were discovered, many deployed without organizational awareness.
Accountability Gap
When something goes wrong, who's responsible?
- The employee who deployed it?
- The team that used it?
- The vendor that provided it?
- IT for not preventing it?
Shadow agents exist in a governance vacuum.
Finding Shadow Agents
Discovering shadow agents requires multiple approaches. No single signal is conclusive; in practice, several signals converge to identify and confirm an unauthorized agent:
Network Traffic Analysis
AI agents call external APIs. Look for:
Direct LLM API calls:
- api.openai.com
- api.anthropic.com
- api.cohere.com
- api.mistral.ai
Agent platform traffic:
- Agent builder endpoints
- MCP server connections
- Tool/function calling patterns
Detection techniques for these endpoints:
- Scan for traffic to known MCP server ports and endpoints
- Monitor DNS queries for common AI agent API domains (e.g., api.openai.com, api.anthropic.com, api.cohere.com)
- Detect MCP server discovery protocols and tool listing requests on your network
Analysis approach:
DNS logs: Queries to known LLM domains
Network flow: Connections to AI service IPs
Proxy logs: HTTPS traffic to AI endpoints
Caution: Some legitimate services use these APIs. You're looking for unexpected calls.
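The DNS-log check above can be sketched as a simple filter. This is a minimal sketch, not a production detector: the domain list and the "timestamp client query" log format are assumptions, so adapt both to your resolver's export format.

```python
# Sketch: flag DNS queries to known LLM API domains in a resolver log.
# Assumptions: whitespace-separated "timestamp client query" lines and the
# short domain list below. Extend both for your environment.

LLM_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "api.cohere.com",
    "api.mistral.ai",
}

def flag_llm_queries(log_lines):
    """Return (client, domain) pairs for queries that hit known LLM domains."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        client, query = parts[1], parts[2]
        # Match the domain itself or any subdomain of it.
        if any(query == d or query.endswith("." + d) for d in LLM_DOMAINS):
            hits.append((client, query))
    return hits

if __name__ == "__main__":
    sample = [
        "2026-02-01T10:00:01 10.1.4.22 api.openai.com",
        "2026-02-01T10:00:02 10.1.4.23 example.com",
        "2026-02-01T10:00:03 10.1.7.9 api.anthropic.com",
    ]
    for client, domain in flag_llm_queries(sample):
        print(f"{client} -> {domain}")
```

The subdomain check matters: traffic often goes to hosts like `gateway.api.openai.com`-style names rather than the apex domain, and an exact-match filter would miss them.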
API Gateway Monitoring
If traffic goes through API gateways:
- Look for AI API destinations in routing logs
- Check for unusual authentication patterns
- Monitor for high-volume requests
- Track new API integrations
Cloud Service Inventory
Check your cloud environments:
AWS:
- Bedrock usage
- SageMaker endpoints
- Lambda functions calling AI APIs
- Secrets containing AI API keys
Azure:
- Azure OpenAI deployments
- Copilot Studio environments
- Function apps with AI dependencies
GCP:
- Vertex AI usage
- Cloud Functions calling AI
- API keys for AI services
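Across all three clouds, one common signal is AI API credentials sitting in function environment variables or secret stores. A provider-agnostic sketch of that check, assuming you have already exported the environment variables to inspect (the name and value patterns below are assumptions; extend them for the providers you use):

```python
import re

# Sketch: flag environment variables that look like AI API credentials.
# Assumptions: the name pattern and the OpenAI-style "sk-" value shape below.
KEY_NAME_PATTERN = re.compile(r"(OPENAI|ANTHROPIC|COHERE|MISTRAL).*(KEY|TOKEN)", re.I)
KEY_VALUE_PATTERN = re.compile(r"^sk-[A-Za-z0-9_-]{20,}$")

def flag_ai_credentials(env):
    """Return variable names from an env-var dict that look like AI API credentials."""
    flagged = []
    for name, value in env.items():
        if KEY_NAME_PATTERN.search(name) or KEY_VALUE_PATTERN.match(value or ""):
            flagged.append(name)
    return flagged
```

Checking values as well as names catches the case where a key is hidden under an innocuous variable name.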
SaaS Inventory
Review your SaaS applications:
- Which have added AI features?
- Which are sending data to external AI?
- Which have embedded agents?
- What permissions have users granted?
CASB tools can help identify AI-related SaaS traffic.
Code Repository Scanning
Search codebases for:
Import patterns:
import openai
import anthropic
from langchain import ...
from crewai import ...
from autogen import ...
Configuration patterns:
OPENAI_API_KEY
ANTHROPIC_API_KEY
agent_config
llm_model
Common file names:
agent.py
copilot.py
assistant.py
*_agent.*
Employee Surveys
Sometimes the simplest approach works:
- Survey teams about AI tool usage
- Ask about productivity tools
- Inquire about automation
- Create safe harbor for disclosure
People often share if asked without blame.
Expense Analysis
Follow the money:
- AI API charges on expense reports
- SaaS subscriptions with AI features
- Credit card payments to AI vendors
- Department budgets for AI tools
Financial traces reveal shadow deployments.
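The expense-report pass can be partially automated with a keyword match over transaction descriptions. A minimal sketch, assuming expense rows exported as dicts with a `description` field and a hand-maintained vendor list (both are assumptions):

```python
# Sketch: surface expense lines that mention AI vendors.
# Assumptions: rows are dicts with a "description" field, and the vendor
# list below is illustrative; extend it with providers your teams might use.
AI_VENDORS = ["openai", "anthropic", "cohere", "mistral", "hugging face"]

def flag_ai_expenses(expenses):
    """Return expense rows whose description mentions a known AI vendor."""
    return [
        row for row in expenses
        if any(v in row["description"].lower() for v in AI_VENDORS)
    ]
```

A keyword match will miss resellers and generic card descriptors, so treat hits as leads to investigate, not a complete inventory.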
Assessing Shadow Agents
Once discovered, assess each shadow agent:
Data Access Assessment
Questions to answer:
- What data does it access?
- How sensitive is that data?
- Where does data flow?
- Is data stored or logged externally?
Capability Assessment
Questions to answer:
- What actions can the agent take?
- Are any actions high-risk (email, database write, etc.)?
- How autonomous is it?
- What tools/integrations does it have?
Security Assessment
Questions to answer:
- Has it been tested for prompt injection?
- How are credentials managed?
- Is there any monitoring?
- Who has access to configure it?
Compliance Assessment
Questions to answer:
- Does it process regulated data?
- Is processing documented?
- Can it meet audit requirements?
- Are required controls in place?
Risk Classification
Based on assessment, classify risk:
| Risk Level | Characteristics |
|---|---|
| Critical | Accesses restricted data, takes irreversible actions, no controls |
| High | Accesses confidential data, has significant capabilities |
| Medium | Accesses internal data, limited actions, some controls |
| Low | Minimal data access, no significant actions, basic controls |
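One way to operationalize the table is a small classification function. The three inputs below are a deliberate simplification of the assessment questions above, not a complete risk model:

```python
def classify_risk(data_sensitivity, irreversible_actions, has_controls):
    """Map simplified assessment answers to the risk levels in the table above.

    data_sensitivity: "restricted", "confidential", "internal", or "minimal"
    irreversible_actions: can the agent take actions that cannot be undone?
    has_controls: are meaningful security controls in place?
    """
    if data_sensitivity == "restricted" and irreversible_actions and not has_controls:
        return "Critical"
    if data_sensitivity in ("restricted", "confidential"):
        return "High"
    if data_sensitivity == "internal":
        return "Medium"
    return "Low"
```

The point of encoding the rubric is consistency: two reviewers assessing the same shadow agent should land on the same disposition.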
From Shadow to Sanctioned
Discovered shadow agents need disposition:
Option 1: Sanction
Bring the agent under governance:
- Register: Add to agent inventory
- Assess: Full security assessment
- Remediate: Address security gaps
- Document: Owner, purpose, data flows
- Monitor: Add to security monitoring
- Govern: Include in review cycles
This is appropriate when the agent serves a legitimate need and can be secured.
Option 2: Replace
Replace with a sanctioned alternative:
- Understand needs: What is the agent doing?
- Identify alternatives: Approved tools that meet needs
- Migrate: Move users to sanctioned solution
- Retire: Decommission shadow agent
- Prevent: Block the shadow path
This is appropriate when a better alternative exists.
Option 3: Retire
Shut down without replacement:
- Assess impact: Who depends on it?
- Communicate: Notify users with lead time
- Decommission: Disable agent
- Revoke: Remove all access
- Monitor: Ensure it stays off
This is appropriate for agents that shouldn't exist.
Option 4: Remediate Urgently
For critical-risk shadow agents:
- Immediate disable: Stop the agent now
- Investigate: Has any harm occurred?
- Contain: Limit any damage
- Remediate: Address vulnerabilities
- Decide: Sanction, replace, or retire
Preventing Future Shadow Agents
Discovery is reactive. Prevention is better:
Create Sanctioned Paths
Make it easy to do the right thing:
- Provide approved agent-building platforms
- Offer API access through governed channels
- Create templates for common agent patterns
- Staff an AI enablement team
If building sanctioned agents is easier than shadow agents, people will use sanctioned paths.
Technical Controls
Make shadow agents harder:
- Block unauthorized AI API endpoints
- Require approval for AI SaaS
- Monitor for AI-related traffic
- Implement DLP for AI data flows
Technical controls reduce casual shadow deployment.
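The blocking control above amounts to an egress policy: allow the governed gateway, block direct calls to known AI APIs, pass everything else through. A sketch of that decision logic, where the internal gateway hostname and the domain list are placeholders for your own configuration:

```python
# Sketch: an egress decision for AI API destinations.
# Assumptions: the gateway hostname and domain list are illustrative
# placeholders for your governed endpoints and blocklist.
APPROVED_AI_ENDPOINTS = {"ai-gateway.internal.example.com"}
KNOWN_AI_DOMAINS = {"api.openai.com", "api.anthropic.com"}

def egress_decision(destination):
    """Allow approved endpoints, block known AI APIs, allow everything else."""
    if destination in APPROVED_AI_ENDPOINTS:
        return "allow"
    if destination in KNOWN_AI_DOMAINS or any(
        destination.endswith("." + d) for d in KNOWN_AI_DOMAINS
    ):
        return "block"
    return "allow"
```

A blocklist like this only catches the providers you know about, which is why it complements, rather than replaces, the continuous discovery below.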
Policy and Training
Set expectations:
- AI acceptable use policy
- Agent deployment requirements
- Training on risks and proper channels
- Consequences for violations (measured)
People follow rules when they understand why.
Continuous Discovery
Keep looking:
- Regular shadow agent scans
- Ongoing network monitoring
- Periodic surveys
- Financial review
Shadow agents will keep appearing. Discovery must be continuous.
Key Takeaways
- Shadow agents are everywhere: Most organizations have more than they know
- Discovery requires multiple approaches: Network, cloud, code, surveys, expenses
- Assessment determines disposition: Sanction, replace, or retire
- Prevention beats discovery: Make the right path easy
- This is ongoing: Shadow agents will keep appearing
Learn More
- The Complete Guide to Agentic AI Security: Comprehensive agent security program
- Guard0: Scout agent for continuous shadow AI discovery
- TrustVector.dev: Evaluate AI systems before deployment
Discover Your Shadow Agents
Guard0's Scout agent continuously discovers AI agents across your enterprise—even the ones you don't know about.
Join the Beta → Get Early Access
Or book a demo to discuss your security requirements.
Join the AI Security Community
Connect with practitioners tackling shadow AI challenges:
- Slack Community - Share shadow agent discovery techniques
- WhatsApp Group - Quick discussions and updates
Shadow agent discovery methods evolve as platforms change. Last updated: February 2026.