Instructor Led Training

AI Agent Security Masterclass

Every prompt, plugin, and tool is an attack surface. This training gives you the skills to control them before they become a liability.

Course Overview

2 days
100% Hands-On
Includes Labs, MCP, and RAG Security
Ideal for: AppSec / DevSecOps Engineers

You’re deploying AI agents to ship faster, automate more, and cut manual work. But these agents, they take action. They call tools, trigger changes, and make decisions on your behalf.

And that creates new paths to get breached.

In this hands-on course, you’ll attack real agents, exploiting prompt injection, tool abuse, RAG poisoning, and plugin takeovers. Then you’ll fix them with solid and tested defenses: sandboxing, least privilege, input controls, and the one thing no one’s doing right: MCP security.

Know your trainer

Abhay Bhargav

CHief RESEARCH OFFICER, AppSecEngineer
Abhay started his career as a breaker of apps, in pentesting and red-teaming, but today is more involved in scaling AppSec with Cloud-Native Security and DevSecOps.

He has created some pioneering works in the area of DevSecOps and AppSec Automation, including the world’s first hands-on training program on DevSecOps, focused on Application Security Automation. In addition to this, Abhay is active in his research of new technologies and their impact on Application Security, specifically Cloud-Native Security. In addition, Abhay has contributed to pioneering work in the Vulnerability Management space, being the architect of a leading Vulnerability Management and Correlation Product, Orchestron.

Abhay is also committed to Open-Source and has developed the first-ever Threat Modeling solution at the crossroads of Agile and DevSecOps, called ThreatPlaybook.Abhay is a speaker and trainer at major industry events including DEF CON, BlackHat, OWASP AppSecUSA, EU and AppSecCali. His training programs have been sold-out events at conferences like AppSecUSA, EU, AppSecDay Melbourne, CodeBlue (Japan), BlackHat USA, SHACK and so on. He's authored two international publications on Java Security and PCI Compliance as well.
Read More
Abhay started his career as a breaker of apps, in pentesting and red-teaming, but today is more involved in scaling AppSec with Cloud-Native Security and DevSecOps.

He has created some pioneering works in the area of DevSecOps and AppSec Automation, including the world’s first hands-on training program on DevSecOps, focused on Application Security Automation. In addition to this, Abhay is active in his research of new technologies and their impact on Application Security, specifically Cloud-Native Security. In addition, Abhay has contributed to pioneering work in the Vulnerability Management space, being the architect of a leading Vulnerability Management and Correlation Product, Orchestron.

Abhay is also committed to Open-Source and has developed the first-ever Threat Modeling solution at the crossroads of Agile and DevSecOps, called ThreatPlaybook.Abhay is a speaker and trainer at major industry events including DEF CON, BlackHat, OWASP AppSecUSA, EU and AppSecCali. His training programs have been sold-out events at conferences like AppSecUSA, EU, AppSecDay Melbourne, CodeBlue (Japan), BlackHat USA, SHACK and so on. He's authored two international publications on Java Security and PCI Compliance as well.
Read less

Big Wins For Your Enterprise

Stop prompt injection, tool abuse, and RAG poisoning before they hit production.

Control what agents can see and do with sandboxing, least privilege, and strong boundaries.

Secure every plugin and tool your agents call using the Model Context Protocol (MCP).

Threat model real AI systems and bake those controls into your SDLC.

Train your AppSec and DevSecOps teams hands-on, with live labs mapped to real risks.

What Your Team Will Learn

Build secure AI agents that don’t crumble under prompt injection or tool abuse.

Threat model LLM workflows with precision using STRIDE + MAESTRO tailored for agents and plugins.

Stop RAG poisoning and hallucinations before they hit production environments.

Control what agents can access and execute with real-world enforcement using MCP and sandboxing.

Hands-On Defense for Hands-Off AI Systems

Train in live, cloud-based labs where you build, break, and secure real LLM agents using CrewAI, LangChain, and Python.

Attack and defend AI workflows modeled after real risks like prompt injection, RAG poisoning, and plugin abuse.

Skip the setup. Everything’s ready to go with secure sandboxes, open-source tools, and zero infrastructure overhead.

Make mistakes safely, and learn faster by testing defenses in a controlled, risk-free environment.

Explore Hands on Labs

Prerequisites

Knowledge base

  • Foundational understanding of application security principles and DevSecOps processes.

  • Familiarity with threat modeling concepts, common vulnerability types (e.g., OWASP Top 10 for Web), and security testing (SAST/DAST/SCA) is beneficial.

  • Basic knowledge of Python programming or scripting is recommended as labs involve reading/writing simple Python code for AI API/framework interaction.

  • No prior machine learning or deep AI expertise is required. Core AI/LLM concepts relevant to agents will be introduced.

  • An eagerness to experiment, a builder's mindset, and an interest in both offensive and defensive security are key.

What Students Should Bring

  • A laptop with a modern web browser and reliable internet connectivity.

  • All participants will receive access to a cloud-based lab environment with all required tools, LLMs, and agent frameworks. No special hardware or local software installations are needed.

Talk to us

Testimonials

I found these courses to be pretty comprehensive and practically oriented. From dissecting common threat vectors to writing abuser stories, it had a lot of useful takeaways by the end.

DevOps Engineer at Streaming Services Provider

WORLD'S LARGEST SPORTS EQUIPMENT MANUFACTURER
Threat modeling has always been a bit elusive for my team, but these courses made it all click. The step-by-step breakdown of threat modeling concepts and integrating them into a DevSecOps pipeline gave us some solid, actionable learnings.

Developer at SaaS Company

DEFENSE INDUSTRY
“Threat modeling is seriously underrated compared to other security activities that have more visible impact. Fact of the matter is, if you can anticipate and build around potential threats to your software, that’s going to make a much bigger difference than if you set up a million defenses after the fact. These courses taught me how to do that!”

Head of Product at International Logistics Corporation

CYBERSECURITY OPERATIONS CENTER (CSOC)

Ready to Elevate Your Security Training?

Empower your teams with the skills they need to secure your applications and stay ahead of the curve.
Get Started Now
X
X
Copyright AppSecEngineer © 2025
Upcoming Bootcamp: Rapid Threat Modeling with GenAI and LLM  | 24 - 25 July | Book your seat now