Every prompt, plugin, and tool is an attack surface. This training gives you the skills to control them before they become a liability.
You’re deploying AI agents to ship faster, automate more, and cut manual work. But these agents, they take action. They call tools, trigger changes, and make decisions on your behalf.
And that creates new paths to get breached.
In this hands-on course, you’ll attack real agents, exploiting prompt injection, tool abuse, RAG poisoning, and plugin takeovers. Then you’ll fix them with solid and tested defenses: sandboxing, least privilege, input controls, and the one thing no one’s doing right: MCP security.
Stop prompt injection, tool abuse, and RAG poisoning before they hit production.
Control what agents can see and do with sandboxing, least privilege, and strong boundaries.
Secure every plugin and tool your agents call using the Model Context Protocol (MCP).
Threat model real AI systems and bake those controls into your SDLC.
Train your AppSec and DevSecOps teams hands-on, with live labs mapped to real risks.
Build secure AI agents that don’t crumble under prompt injection or tool abuse.
Threat model LLM workflows with precision using STRIDE + MAESTRO tailored for agents and plugins.
Stop RAG poisoning and hallucinations before they hit production environments.
Control what agents can access and execute with real-world enforcement using MCP and sandboxing.
Train in live, cloud-based labs where you build, break, and secure real LLM agents using CrewAI, LangChain, and Python.
Attack and defend AI workflows modeled after real risks like prompt injection, RAG poisoning, and plugin abuse.
Skip the setup. Everything’s ready to go with secure sandboxes, open-source tools, and zero infrastructure overhead.
Make mistakes safely, and learn faster by testing defenses in a controlled, risk-free environment.
Foundational understanding of application security principles and DevSecOps processes.
Familiarity with threat modeling concepts, common vulnerability types (e.g., OWASP Top 10 for Web), and security testing (SAST/DAST/SCA) is beneficial.
Basic knowledge of Python programming or scripting is recommended as labs involve reading/writing simple Python code for AI API/framework interaction.
No prior machine learning or deep AI expertise is required. Core AI/LLM concepts relevant to agents will be introduced.
An eagerness to experiment, a builder's mindset, and an interest in both offensive and defensive security are key.
A laptop with a modern web browser and reliable internet connectivity.
All participants will receive access to a cloud-based lab environment with all required tools, LLMs, and agent frameworks. No special hardware or local software installations are needed.