“Learning wisdom the hard way.”
That line has stuck with me ever since I heard it from Jolee Bindo, a character in Star Wars: Knights of the Old Republic (2003). And today, it carries more weight than ever in the world of Application Security.
Let me explain why through a story from a project I worked on in 2017. It perfectly captures why most organizations get AppSec training wrong, and what it takes to fix it.
In 2017, I was working with a company that developed banking software. They were preparing for a major evaluation by one of the largest shipping insurance companies in the world. This was a high-stakes, multi-year deal. The client would perform deep security assessments, so the software vendor wanted to get ahead of it.
They brought us in to run rigorous penetration tests and help them build out a DevSecOps pipeline (or SecDevOps, as it was called at the time). We also helped them implement an early version of what's now known as Application Security Posture Management (ASPM): a system that aggregated scan results from multiple tools into a unified vulnerability view.
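The specific tooling isn't the point here, but the core idea behind that early ASPM setup is simple enough to sketch: normalize findings from different scanners into one shape, deduplicate them, and report a single rolled-up view. Everything below (file names, JSON fields, severity labels) is an illustrative assumption, not the client's actual formats.

```python
import json
from collections import defaultdict

def load_findings(path, tool):
    """Load a hypothetical scanner export and normalize each finding."""
    with open(path) as f:
        raw = json.load(f)
    return [
        {
            "tool": tool,
            "component": item.get("component", "unknown"),
            "issue": item.get("id", "unknown"),
            "severity": item.get("severity", "unknown").lower(),
        }
        for item in raw
    ]

def unified_view(finding_lists):
    """Merge findings from all tools, deduplicating on (component, issue)."""
    merged = {}
    for findings in finding_lists:
        for f in findings:
            key = (f["component"], f["issue"])
            entry = merged.setdefault(key, {**f, "reported_by": set()})
            entry["reported_by"].add(f["tool"])
    return merged

if __name__ == "__main__":
    sast = load_findings("sast_report.json", "sast")  # assumed export paths
    sca = load_findings("sca_report.json", "sca")
    view = unified_view([sast, sca])

    by_severity = defaultdict(int)
    for entry in view.values():
        by_severity[entry["severity"]] += 1
    print(dict(by_severity))  # e.g. {'critical': 12, 'high': 45, ...}
```

In practice the merge is the easy part; the hard part is agreeing on the normalization (which fields identify "the same" finding across tools) and keeping that mapping current as scanners change their output.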
The pentest went well. We found and remediated several critical issues, and everything seemed on track. Within a few weeks, the automation pipeline was fully integrated into their CI/CD setup.
That’s when the panic started.
By the end of the first week of automation, I started getting late-night calls. The client was alarmed: their pipeline was suddenly flagging 700+ high and critical vulnerabilities, despite the application having just passed a comprehensive pentest.
We convened an emergency meeting with all engineering leads and began reviewing the scan data.
It quickly became clear: the flood of alerts was coming from Software Composition Analysis (SCA) tools. These tools were picking up on vulnerable open-source libraries bundled into the application, many of which weren’t even part of active functionality.
The response from the engineers?
“These are false positives.”
“This code can’t be triggered by any user.”
Technically, they weren’t wrong. But the truth was worse.
What we discovered was that the application had accumulated a large amount of unused, unmaintained, and insecure code, mostly in the form of third-party libraries. These libraries weren’t actively used by any production functionality but were still packaged and deployed.
Even if code isn't reachable by users (the property modern SCA tools call reachability), it's still present in the runtime environment. Attackers who gain some level of access, whether through remote code execution (RCE), server-side request forgery (SSRF), or a supply chain compromise, can still exploit these libraries.
This wasn’t just technical debt. It was RISK.
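Commercial SCA reachability analysis is far more sophisticated than this, but a crude first pass at the problem we were facing is easy to sketch: compare the dependencies a project declares against the modules its code actually imports, and treat anything declared-but-never-imported as a candidate for removal. The paths and parsing below are simplifying assumptions for a Python codebase; the client's stack and tooling were different.

```python
import ast
from pathlib import Path

def imported_modules(src_dir="src"):
    """Collect top-level module names imported anywhere in the source tree."""
    names = set()
    for path in Path(src_dir).rglob("*.py"):
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                names.add(node.module.split(".")[0])
    return names

def declared_packages(requirements="requirements.txt"):
    """Read declared dependencies; strips simple version pins, ignores extras."""
    pkgs = set()
    for line in Path(requirements).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            pkgs.add(line.split("==")[0].split(">=")[0].strip().lower())
    return pkgs

if __name__ == "__main__":
    used = {name.lower() for name in imported_modules()}
    declared = declared_packages()
    # Caveat: distribution names and import names can differ
    # (the package "PyYAML" is imported as "yaml"), so treat this
    # as a shortlist to review, not an automatic delete list.
    print("Declared but never imported:", sorted(declared - used))
```

A shortlist like this is where a cleanup effort starts, not where it ends: dynamic imports, plugins, and transitive dependencies all need human review before anything gets removed.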
So we gave ourselves a name: The Cleanup Crew.
We, along with the client's engineers, spent the next 12 months removing unused libraries, sanitizing dependencies, and rewriting insecure integration points. It was a painful and expensive lesson, but it worked.
What started as a cleanup project turned into an organization-wide transformation. The client realized that the root of the problem wasn’t just bad code, but the lack of security ownership within development teams.
Here's what changed: the client invested in security training for its development teams.
And this wasn't off-the-shelf training. We used real data from the client's environment to build contextual, relevant, scenario-based sessions.
Each session targeted the most frequent or high-risk issues seen in their last few sprints. Developers could immediately connect the training content to the real-world flaws in their codebase.
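We can't share the client's data, but the prioritization logic behind those sessions is simple to sketch: bucket recent findings by weakness category, weight them by severity, and let the top buckets decide what the next session covers. The sample findings, categories, and weights below are illustrative, not the client's real numbers.

```python
from collections import Counter

# Illustrative findings from the last few sprints; in practice these
# would come from the pipeline's unified vulnerability view.
findings = [
    {"cwe": "CWE-798 Hard-coded Credentials", "severity": "critical"},
    {"cwe": "CWE-79 Cross-site Scripting", "severity": "high"},
    {"cwe": "CWE-79 Cross-site Scripting", "severity": "high"},
    {"cwe": "CWE-89 SQL Injection", "severity": "critical"},
    {"cwe": "CWE-798 Hard-coded Credentials", "severity": "high"},
]

# Assumed severity weights; tune these to your own risk model.
WEIGHTS = {"critical": 5, "high": 3, "medium": 1, "low": 0.5}

def training_priorities(findings, top_n=3):
    """Rank weakness categories by severity-weighted frequency."""
    scores = Counter()
    for f in findings:
        scores[f["cwe"]] += WEIGHTS.get(f["severity"], 1)
    return scores.most_common(top_n)

for cwe, score in training_priorities(findings):
    print(f"{score:>5.1f}  {cwe}")
```

The output is a short, ranked list of topics, which is exactly what a training calendar needs.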
Too many organizations treat training as a checkbox. They buy an annual license, roll out compliance modules, and call it a day. Usage spikes in Q1 and then plummets.
The result?
Low adoption. No behavior change. And poor return on investment.
And when it’s time for budget renewals, security leaders and L&D teams are left scrambling to justify the spend.
You can’t solve that with more LMS logins or longer courses. You solve it by making training useful.
Here's what worked for this client, and what we recommend for any enterprise: tie training to the issues actually showing up in your codebase, then measure the change over time.
If, six months after training, you're seeing fewer hard-coded credentials or less XSS in your sprints, that's your ROI.
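Measuring that is mostly bookkeeping once your pipeline already records findings per sprint. A minimal sketch, with made-up numbers, of the before/after comparison we mean:

```python
# Findings per sprint for one category (e.g. hard-coded credentials).
# Values are illustrative; pull real counts from your scan history.
before_training = [9, 7, 8, 10, 6, 9]   # six sprints before rollout
after_training = [6, 5, 4, 3, 3, 2]     # six sprints after rollout

def average(counts):
    return sum(counts) / len(counts)

reduction = 1 - average(after_training) / average(before_training)
print(f"Average findings per sprint: {average(before_training):.1f} -> "
      f"{average(after_training):.1f} ({reduction:.0%} reduction)")
```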
We were lucky in 2017. The issue was caught internally before the client’s customer could flag it. And they were proactive enough to act fast. But not everyone gets that second chance.
Today, tools are better. SCA platforms now include reachability analysis. DevSecOps pipelines are standard. AI helps prioritize exploitable flaws over noise.
At the same time, there’s a cultural problem that slows teams down in ways they don’t always realize: vibe coding.
Just diving in and building without thinking about long-term design or secure defaults feels fast and productive. But over time, that lack of discipline produces fragile software, risky dependencies, and downstream security debt that's hard to unwind, and it ends up costing more in rework, patching cycles, and missed customer deadlines.
That’s why the best Product Security teams code for the paved road, making secure choices the default and building development environments that guide teams away from dangerous patterns before they ship.
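What the paved road looks like depends on your stack, but the pattern is always the same: wrap the risky primitive once, bake the safe defaults in, and have teams call the wrapper instead of the raw API. Here's a minimal sketch for outbound HTTP in Python using the requests library; the scheme allowlist and timeout values are assumptions you'd tune per organization.

```python
from urllib.parse import urlparse

import requests

# Paved-road defaults: every outbound call gets TLS verification,
# a bounded timeout, and an allowlisted scheme, without the caller
# having to remember any of it.
ALLOWED_SCHEMES = {"https"}
DEFAULT_TIMEOUT = 5  # seconds; tune per service

def safe_request(method, url, **kwargs):
    """HTTP request with secure defaults the caller cannot silently drop."""
    if urlparse(url).scheme.lower() not in ALLOWED_SCHEMES:
        raise ValueError(f"Blocked non-HTTPS URL: {url}")
    kwargs.setdefault("timeout", DEFAULT_TIMEOUT)
    kwargs["verify"] = True  # never allow verify=False through this helper
    return requests.request(method, url, **kwargs)

if __name__ == "__main__":
    resp = safe_request("GET", "https://example.com")
    print(resp.status_code)
```

The point isn't this particular wrapper; it's that the secure option becomes the shortest path, so developers don't have to remember to opt in.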
But none of it matters if your developers aren't equipped to fix the problems they create, or better yet, avoid introducing them in the first place. Equipping them is how you scale secure engineering practices across large teams and make sure security improvements actually stick, sprint over sprint.
And if you're looking for the right platform to do that, we put together a guide on what to look for in a security training platform, including how to spot red flags and what to prioritize.
Need to scale AppSec training practices across your teams? AppSecEngineer for Enterprises delivers contextual, hands-on security training aligned with your development workflows.