Training · Intermediate
Framework Security: LangChain, CrewAI & LangGraph
Security assessment of popular agent frameworks. Learn vulnerabilities specific to LangChain, CrewAI, and LangGraph deployments.
Date & Time
Thursday, June 18, 2026
10:00 AM - 1:00 PM PDT
Location
Virtual
Price
Free
Capacity
35 seats
LangChain Security · CrewAI Vulnerabilities · LangGraph Attacks · Framework Hardening
Speakers
Guard0 Security Team
Security Researchers
Training Overview
Most AI agents are built on a handful of popular frameworks, and each framework has its own security characteristics. This training covers security assessment and hardening for three of the most widely used: LangChain, CrewAI, and LangGraph.
LangChain Security
- Chain vulnerability patterns
- Memory security issues
- Tool/agent security
- LCEL-specific attacks
- Secure patterns (see the sketch after this list)
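
To illustrate the kind of tool hardening covered in this module, the sketch below wraps a tool callable with input validation before model-supplied arguments reach it. It is plain, framework-agnostic Python: the file_lookup tool, the blocked patterns, and the rejection message are hypothetical examples for illustration, not material from the training.

import re
from typing import Callable

# Hypothetical patterns a hardened tool wrapper might reject before
# forwarding model-generated arguments to a sensitive tool.
BLOCKED_PATTERNS = [
    re.compile(r"[;&|`$]"),                                    # shell metacharacters
    re.compile(r"ignore (all )?previous instructions", re.I),  # common injection phrasing
]

def guarded_tool(tool_fn: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap a tool so model-supplied input is validated before execution."""
    def wrapper(tool_input: str) -> str:
        for pattern in BLOCKED_PATTERNS:
            if pattern.search(tool_input):
                # Refuse rather than silently sanitize, so the failure stays visible in logs.
                return "Tool call rejected: input failed validation."
        return tool_fn(tool_input)
    return wrapper

@guarded_tool
def file_lookup(tool_input: str) -> str:
    # Hypothetical read-only tool body; a real tool would also enforce a path allowlist.
    return f"Looked up: {tool_input}"

print(file_lookup("quarterly_report.txt"))
print(file_lookup("report.txt; rm -rf /"))

Rejecting the call outright, instead of stripping suspicious characters, keeps failed attempts visible for auditing.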
CrewAI Security
- Multi-agent trust issues
- Task injection attacks
- Role manipulation
- Crew orchestration vulnerabilities
- Secure crew design (see the sketch after this list)
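
As a rough illustration of defending against task injection between agents, the sketch below treats one agent's output as untrusted data when building the next agent's task. It does not use the CrewAI API; TaskSpec, the marker list, and build_downstream_task are hypothetical names chosen for this sketch.

from dataclasses import dataclass

# Hypothetical markers that one agent's output should not be able to smuggle
# into another agent's task description (task-injection style attacks).
SUSPICIOUS_MARKERS = ("system:", "you are now", "disregard your role")

@dataclass
class TaskSpec:
    description: str
    expected_output: str

def build_downstream_task(upstream_output: str) -> TaskSpec:
    """Build the next agent's task while treating upstream output as untrusted data."""
    lowered = upstream_output.lower()
    if any(marker in lowered for marker in SUSPICIOUS_MARKERS):
        raise ValueError("Upstream output contains role/instruction override markers.")
    # Quote the upstream result as data instead of splicing it into instructions.
    return TaskSpec(
        description=f"Summarize the following research notes:\n---\n{upstream_output}\n---",
        expected_output="A three-bullet summary.",
    )

task = build_downstream_task("Market grew 12% year over year.")
print(task.description)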
LangGraph Security
- Graph-based attack paths
- State manipulation
- Node injection
- Edge condition exploits
- Secure graph patterns (see the sketch after this list)
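
The sketch below shows a guard-node pattern for validating shared state before a tool-executing node runs. It deliberately avoids importing LangGraph so it does not depend on a specific library version; AgentState, guard_node, and the limits shown are assumptions made for illustration.

from typing import TypedDict

# Minimal stand-in for graph-shared state; the training uses LangGraph itself,
# but this sketch avoids depending on any particular library version.
class AgentState(TypedDict):
    step_count: int
    approved_for_tools: bool
    pending_action: str

MAX_STEPS = 8                                      # assumed step budget for the example
ALLOWED_ACTIONS = {"search", "summarize", "none"}  # assumed action allowlist

def guard_node(state: AgentState) -> AgentState:
    """Validate shared state before routing execution to a tool-running node."""
    if state["step_count"] > MAX_STEPS:
        raise RuntimeError("Step budget exceeded; possible loop or state manipulation.")
    if state["pending_action"] not in ALLOWED_ACTIONS:
        # Reject actions injected into state rather than chosen by trusted routing logic.
        raise ValueError(f"Unexpected action in state: {state['pending_action']!r}")
    if state["pending_action"] != "none" and not state["approved_for_tools"]:
        raise PermissionError("Tool use requested without the approval flag set.")
    return state

state: AgentState = {"step_count": 2, "approved_for_tools": True, "pending_action": "search"}
print(guard_node(state))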
Hands-on Labs
- Lab 1: LangChain penetration testing
- Lab 2: CrewAI crew exploitation
- Lab 3: LangGraph state attacks
Prerequisites
- Familiarity with at least one framework
- Python proficiency
- Basic agent security knowledge
Materials
- Framework security checklists
- Vulnerability databases
- Secure implementation guides