Go from months to minutes: hardcore, hands-on training that immerses engineering and security teams in collaborative threat modeling practices, powered by LLMs.
→ Teams usually find it hard to maintain a uniform approach to threat modeling.
→ Many teams lack specialized security knowledge.
→ Traditional threat modeling is time-intensive.
→ Adapting threat modeling to different projects is not easy.
→ This course introduces AI-driven frameworks that guide you in establishing repeatable, consistent threat modeling processes, ensuring uniform security measures across all projects.
→ We will break down complex cybersecurity concepts into digestible modules and simulate real-world scenarios to equip you with the necessary skills regardless of your expertise level.
→ With Generative AI and Large Language Models, you will learn methods to significantly reduce the time required for thorough threat analysis without compromising depth or quality.
→ The course content is enriched with agile and story-driven threat modeling practices, giving you the skills to tailor your threat modeling efforts to the rapid changes in the threat landscape.
Our promise is to equip you with cutting-edge skills and knowledge honed from our extensive background in threat modeling and security automation.
You’ll leave with more than newly acquired skills.
How about a fresh perspective on how to approach and solve complex security challenges?
Get ready to be on the frontline of Threat Modeling.
Our training is oriented toward implementing Threat Modeling within your Engineering Team by leveraging the power of GenAI and LLMs. It is a hardcore, hands-on training that immerses engineering and security teams in collaborative threat modeling practices.
The training takes the team through the full set of activities in the threat modeling process, from identifying the scope of the threat model, to performing threat analysis, to defining countermeasures and security test cases against that analysis.
The training additionally delves into Agile and Story-Driven Threat Modeling that teams can perform for Epics and Features as part of their Agile software development lifecycle. Finally, the training dives into how Threat Modeling integrates with different parts of the SDLC and with Secure SDLC activities, including DevSecOps.
→ Overview of Threat Modeling
→ Importance and benefits of Threat Modeling
→ Common challenges in traditional Threat Modeling approaches
→ Important Terms for Threat Modeling Success
→ Introduction to Generative AI and LLMs
→ How GenAI and LLMs can revolutionize Threat Modeling
→ Overview of current GenAI and LLM tools applicable to Threat Modeling
→ Components of a Good Threat Model
→ Inputs to Threat Modeling
→ Outputs of Threat Modeling and applicability across the SDLC
→ Success Factors: Conducting a Threat Model
→ Threat Modeling Process Breakdown:
- Scoping
- Threat Analysis - with/without methodology
- Countermeasures
- Verification Activities
→ Formulating Security Objectives and Requirements
→ Decomposing a System for Threat Modeling
→ Trust Zones and Trust Boundaries for Threat Modeling Scope
→ Collaborative Hands-on Labs, Define Threat Modeling Scope:
- Define Security Objectives and Requirements manually + with LLM and verify
- Perform Trust Zone Mapping against architecture manually + with LLM and verify
- Capturing Data Dictionary for subsequent Threat Modeling manually + with LLM
- Challenges - for students to work on - Optimizing Prompts, etc.
→ Writing a good Threat Scenario - Elements of Attack Vector, Attack Outcome and Attack Motivation
→ STRIDE Deep-Dive
→ Collaborative Hands-on Labs on STRIDE without and with LLMs
→ Additional Parameters for Threat Analysis Success
- Risk Centric vs Data Centric Threat Modeling Methodologies
- NIST SP-800-154 Data Centric Threat Modeling Methodology - Breakdown and Deep-Dive
- PASTA Threat Modeling Methodology Breakdown and Deep-Dive
- Choosing a Threat Modeling Methodology
- Hands-on Labs: Leveraging LLMs and NIST SP-800-154 for System Threat Modeling
- System-Driven vs Story-Driven Threat Modeling
- Approach and Process Deep-dive
- Threat Modeling User Stories/Feature Essays - Deep-Dive
- Abuser Stories and User Stories
- Hands-on Labs:
- Abuser Stories for User Stories and Epics with LLMs
- Identifying Threat Scenarios for Abuser Stories with LLMs
- Prompt Engineering Practices for Story-Driven Threat Modeling
- Leveraging Threat Modeling Outputs to define Countermeasures and Controls
- Hands-on Labs:
- Leverage LLMs to define Threat Analysis Countermeasures
- Define Security Acceptance Tests for Features based on Threat Analysis
- Leverage LLMs to generate Developer Security Checklists
- Threat Models as Inputs to Security Verification Activities
- Leverage Threat Modeling outputs for Security Verification Activities:
- Pentesting with Security Test Cases
- Red-Teaming
- Secure Code Review
- Architecture Reviews
- Vulnerability Analysis
- Hands-on Labs:
- Define Security Test Cases based on Threat Modeling Outputs with LLMs
- Secure Code Review Params using Threat Modeling outputs aided by LLMs
- Come up with Architecture Review Questions based on Threat Modeling Outputs - aided by LLMs
- Chat with your Vulnerability Data - RAG example with LLM and Threat Modeling Outputs
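The final lab above ("Chat with your Vulnerability Data") pairs retrieval with an LLM. As a rough sketch of that retrieval-augmented pattern, the snippet below uses a toy word-overlap retriever over a handful of made-up findings and assembles a grounded prompt; the findings, scoring function, and prompt shape are illustrative assumptions, not the course's actual lab code.

```python
# Minimal RAG-style sketch: retrieve the most relevant vulnerability
# findings for a question, then build a grounded prompt for an LLM.
from collections import Counter

# Illustrative findings; in the lab these would come from real scan data.
FINDINGS = [
    "SQL injection in /login endpoint via the username parameter",
    "Reflected XSS in the search results page",
    "Hardcoded AWS credentials found in deploy scripts",
]

def score(query: str, doc: str) -> int:
    """Count overlapping words between query and finding (toy retriever)."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the top-k findings most similar to the query."""
    return sorted(FINDINGS, key=lambda f: score(query, f), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt; the LLM call itself is out of scope here."""
    context = "\n".join(f"- {f}" for f in retrieve(query, k=2))
    return (
        "Answer using only the findings below.\n"
        f"Findings:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("Do we have any injection issues in login?"))
```

In the actual lab, an embedding model and a live LLM call (via the provided OpenAI sandbox) would replace the word-overlap scorer and the final `print`.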
Our labs are on-demand and self-serve.
Users just need a browser with an internet connection to be able to work on our lab environments.
Each Lab is served on a cloud VM that has the code and dependencies preloaded for ease of use.
For the AI and LLM labs, users are provided with an OpenAI sandbox that lets them safely access OpenAI and other LLMs without the organization needing to provision access.
Live Online Bootcamp for Rapid Threat Modeling with GenAI and LLM - June 6 and 7
Access to recorded class for up to one year after the bootcamp
Digitally Verifiable Certificate of Completion for the Training - Showcase online and for CPEs
OpenAI LLM Sandbox to be able to work with live LLMs during and after the class. No need for OpenAI ChatGPT+
FREE Access to AppSecEngineer's Individual Pro Plus Plan with all Cloud Sandboxes. Gives you access to all learning paths, courses and challenges.
Exclusive Discord Channel for support and questions during and after the class
Access to AppSecEngineer Live Events for a whole year
You need basic knowledge of application security vulnerabilities, such as the OWASP Top 10. Prior experience with Threat Modeling helps, but is not mandatory.
You will need a laptop/tablet with an updated browser and internet access. All labs and course content for this course run in the browser. Ensure that your laptop/tablet doesn't have content-filtering restrictions so you can access the labs with ease through the browser.
Yes, please contact us through the support widget in the bottom right corner of this website and we can figure out discounts for you and your group.
However, these lab environments are primarily meant for learning, and they run for a limited time before shutting down automatically. That window gives you plenty of time to complete the lab, with room left over to play around with the environment afterwards.
Yes - you get a certificate that clearly specifies the number of hours and the nature of the training. This is a digitally verifiable credential that you can share on social media and on your profile. You can use this certificate to claim CPE credits for specific certifications.
Yes - you do get access to recordings for up to a whole year after the bootcamp has been completed.
You also get all of this:
- Live Online Bootcamp for Rapid Threat Modeling with GenAI and LLM - June 6 and 7
- Access to recorded class for up to one year after the bootcamp
- Certificate of Completion for the Training - to showcase online and for CPEs
- OpenAI LLM Sandbox to be able to work with live LLMs during and after the class. No need for OpenAI ChatGPT+
- FREE Access to AppSecEngineer's Individual Pro Plus Plan with all Cloud Sandboxes - Gives you access to all learning paths, courses and challenges
- Exclusive Discord Channel for support and questions during and after the class
- Access to AppSecEngineer Live Events for a whole year
Yes - you get access to our highest tier of individual membership (Pro Plus). This gives you access to all our learning paths, cloud sandboxes, challenges, playgrounds and live-events on the platform for a whole year. You essentially get $950 worth of content free just by registering for this bootcamp!
For the course, you’ll need to use LLMs. Usually you’d need to have access to APIs or ChatGPT+, but we’ve made things effortless for you. When you register for this bootcamp, you get access to our LLM sandbox, which means you don’t need any additional API Keys or access to ChatGPT+ which is $20 per month. You can run the latest models of OpenAI required for our labs without having to bother about any of that. Oh, and you get access to the LLM Sandbox for the whole year 🙂
Our Chief Research Officer, Abhay Bhargav, will be the lead trainer for this class. He's got a ton of experience both performing and teaching threat modeling, and he's built tools specifically for this class to help with GenAI and LLM Threat Modeling. Check out his profile on LinkedIn and Twitter.
The bootcamp will be live online, which means you don't need to travel anywhere. Yay! Details related to the meeting link, etc. will be sent to you one week before the training starts. You will need access to the AppSecEngineer platform for the labs, which you will have from the day you register for the bootcamp. The labs related to the bootcamp on June 6 and 7 will be available on those days and afterwards.