Hacker Summer Camp Special: Get 40% OFF with code HACKERCAMP40. Limited time only!

Live Online

4
Live sessions
3
Hours per Session
52
Weeks access
6
Lab Exercises

AppSecEngineer™ Certified AI Security Engineer

2 Certification Exam Attempts
All Recordings of Live sessions
Unlimited access to all 2000+ labs and 500+ courses on AppSecEngineer platform for 1 year
Ideal for
AI Engineer
Developer
Security Engineer
DevSecOps Engineers
Cloud Engineer
Coming Soon
Sign up now

Before this bootcamp

  • AI and LLM apps ship without clear security checks.
  • Security is treated as a separate, later stage of development, not as an integrated part of the engineering process.
  • An individual's understanding of threats is general, lacking the nuanced knowledge of attacks like prompt injection and data poisoning.
  • Without a foundation in AI, it's difficult to understand the architecture and functionality of LLMs and how they are leveraged in applications like Retrieval Augmented Generation (RAG).

After this bootcamp

  • Participants can understand and analyze attacks delivered via user or system prompts, including direct and indirect prompt injection attacks.
  • They can apply defense strategies to neutralize injected commands and filter suspicious phrases in prompts.
  • Individuals can identify and simulate various poisoning attacks, such as Knowledge Base Poisoning, Reasoning/Chain-of-Thought (COT) Poisoning, and Memory Poisoning.
  • Participants will know how to secure advanced AI components like Model Context Protocol (MCP) servers and Reinforcement Learning (RL) agents against attacks like policy poisoning.

Know your Trainer

Haricharana S

I’m Haricharana S—focused on AI, machine learning, and how they can be applied to solve real problems. I’ve worked on applied research projects and assistantships at places like IIT Kharagpur and Georgia Tech, where I explored everything from deep learning systems to practical implementations. Lately, I’ve been diving into application security and how AI can push that space forward. When I’m not buried in research papers or experimenting with models, you’ll find me reading up on contemporary history or writing the occasional poem.

Trained at

Day 1

September 19, 2025
10am to 1pm ET

Foundations of Artificial Intelligence

3 hour live online session

Main Takeaways
  •  provides foundational knowledge in AI, covering its definition and core concepts.
  • gaining an understanding of Large Language Models (LLMs), including their architecture, functionality, capabilities, and limitations.
  • introducing Retrieval Augmented Generation (RAG) and explains how it enhances AI applications by improving context and accuracy.
Skills Gained
  • Defining AI and understanding its key concepts.
  • Explaining the architecture and functionality of LLMs.
  • Applying RAG for enhanced AI applications.

Day 2

September 26, 2025
10am to 1pm ET

: OWASP TOP 10 for LLMS Part 1: Attacking AI

3 hour live online session

Main Takeaways
  • focus on the OWASP Top 10 for LLMs, specifically on prompt injection and tool manipulation attacks.
  • distinguishing between direct prompt injection attacks, which are delivered via user or system prompts, and indirect prompt injection attacks, which come from untrusted external content.
  • covering various direct attack techniques like naive command injections, context ignoring, and fake completion attacks.
  • introducing defense strategies against prompt injection, such as prompt sanitization, segregated context channels, and prompt validation.
Skills Gained
  • Analyzing and identifying direct prompt injection attacks.
  • Demonstrating indirect prompt injection attacks through external tool outputs or content embedding.
  • Simulating multi-agent relay attacks where prompt injections propagate between LLM agents.
  • Evaluating and implementing defense mechanisms like prompt sanitizers and validation tools.

Day 3

October 3, 2025
10am to 1pm ET

OWASP TOP 10 FOR LLMS Part 2: Poisoning Attacks

3 hour live online session

Main Takeaways
  • exploring various poisoning attacks, including Knowledge Base Poisoning, where malicious documents are injected into the knowledge base of RAG systems.
  • It covers Reasoning/Chain-of-Thought (CoT) Poisoning, which can cause an LLM to "overthink," leading to bloated output or resource exhaustion.
  • learning about Memory Poisoning attacks that exploit the persistent memory of LLM agents to hijack their behavior over time.
  • providing an overview of defense strategies, such as data validation, robust retrieval, chain tracing, and memory integrity monitoring.
Skills Gained
  • Corrupting the knowledge base of RAG and agentic LLM systems.
  • Poisoning multistep reasoning to cause misdirection or resource exhaustion.
  • Exploiting persistent memory in LLM agents to alter their behavior.
  • Applying defenses against data, reasoning, and memory poisoning attacks.

Day 4

October 10, 2025
10am to 1pm ET

Advanced AI Security: MCP Servers and RL Agents

3 hour live online session

Main Takeaways
  • delving into advanced AI security topics, beginning with securing Model Context Protocol (MCP) servers.
  • covering common attack vectors targeting MCP servers and outlines defense strategies to secure their implementations.
  • providing an introduction to Reinforcement Learning (RL) and its components.
  • explaining Policy Poisoning attacks in RL, providing concepts and examples of how they work.
Skills Gained
  • Identifying common attack vectors against MCP servers.
  • Implementing defense strategies to secure MCP server implementations.
  • Understanding the basic concepts of Reinforcement Learning (RL) and RL agents.
  • Conducting a simple lab focusing on RL policy poisoning.

Yes, you get certified… And it’s not just for show

  • 2 exam attempts included with every bootcamp
  • Certificate + CPE credits (1 per hour of training)
  • Hands-on, project-based exam
  • Evaluator-reviewed within 24-48 hours
  • Certificate issued within 24 hours if you pass
You’ll submit a real project that shows what you’ve learned and proves you can apply it in the real world.

Technical Prerequisites

Required:

  • Ability to read and write Python code confidently
  • Laptop/Desktop/Any device capable of running Python environments, APIs, and LLM tools

Recommended:

  • Prior exposure to Large Language Models (LLMs) and their applications
  • Familiarity with tools such as Ollama and LangChain
  • Basic understanding of working with APIs, especially those that enable LLM workflows

Helpful:

  • Experience building small projects or workflows that integrate AI/LLMs
  • Awareness of common challenges in LLM usage (hallucinations, prompt injection, bias)
  • Knowledge of software development workflows and using IDEs or Jupyter notebooks

Certification Exam Time Commitment

Estimated effort:
5
hours
Time limit:
48
hours from the time you begin

Everything that comes with your bootcamp seat

AppSecEngineer Pro Plus Plan
Free access to the full Pro Plus AppSecEngineer subscription: for a whole year: courses, learning paths, challenges, and all cloud sandboxes included.
GenAI sandbox access
Get hands-on with LLMs in our secure AI playground. No ChatGPT+ account needed.
Certificate & CPE credits
Finish the bootcamp and earn a certificate you can use for career bragging rights and ISC2 CPE credits (1 credit per hour of training). You’ll also get two attempts at the certification exam if you want a second shot or just like acing things twice.
Live bootcamp access
Join live virtual sessions led by trainers who’ve seen real-world incidents and built secure systems. Ask questions, solve problems, and stay sharp.
One-year replay access
Can’t make it live? No stress. You’ll get full access to the session recordings and labs for one year.
Private support channel
Join your own Discord channel with the trainer and bootcamp peers. Ask questions and get answers for 60 days after your bootcamp begins.

Sign up. Show up. Skill up.

AppSecEngineer™ Certified AI Security Engineer
1999
Sign up now
Coming Soon
Sign up now

I found these courses to be pretty comprehensive and practically oriented. From dissecting common threat vectors to writing abuser stories, it had a lot of useful takeaways by the end.

Developer at SaaS Company

API security used to be a headache for me, but these courses changed that. There’s always a lab waiting for you at the end of a lesson, so it really helps reinforce the concepts in a practical way. This is great stuff.

Senior Developer at Leading Software Consulting Giant

The hands-on labs made all the difference. I went from experimenting with LLM APIs to actually building secure, working workflows in just two days.

Priya Nair, Machine Learning Engineer

Exactly what I needed to upskill my team. The sessions cut through the noise and showed us how to work with LLMs safely and effectively in real-world projects.

Sarah Martinez, Engineering Manager

Clear, practical, and instantly applicable. I left with working code samples and the confidence to start building with LLM APIs right away.

Ahmed Al-Farsi, Senior Developer

I’ve attended plenty of AI workshops, but this one stood out. You don’t just learn concepts — you actually implement LangChain and Ollama integrations step by step.

David Kim, Software Engineer

FAQs

Can't attend this bootcamp?

Get informed about future bootcamps!

Ready to Elevate Your Security Training?

Empower your teams with the skills they need to secure your applications and stay ahead of the curve.
Get Our Newsletter
Get Started
X
X
Copyright AppSecEngineer © 2025