Anthropic’s release of Claude Code Security triggered a now-familiar news cycle.
Within hours, social feeds filled with hot takes and security stocks started tanking. Commentators suggested that if frontier models can deeply reason over entire repositories, perhaps traditional AppSec is no longer necessary.
It’s a compelling narrative. It’s also the wrong one.
What we’re witnessing is not the deletion of application security; it’s the disruption of the way we’ve historically implemented it.
Huge difference.
For years, AppSec has operated in fragmentation. We built categories around specific detection techniques: SAST for static flaws, DAST for runtime behavior, SCA for dependencies, secrets scanners for credential leakage, and so on. Each tool generates findings, and those findings get normalized and aggregated into dashboards, either by homegrown tools or by yet another category: ASPM. AppSec teams have been forced to walk the line between noisy output and developer happiness.
It’s an approach that grew out of twenty years of problems, solved with tools that were mostly just advanced pattern matching.
The approach was rational but never elegant.
Large language models that reason about data flow, authorization, and architectural intent challenge that entire paradigm. When a model can synthesize context instead of simply matching rules, the “tool salad” approach begins to look like a workaround for technical constraints that no longer exist.
This is the disruption.
It’s not a disruption of the need for AppSec. It’s a disruption of the scaffolding we built to compensate for the limitations of tooling.
The dramatic headline that AppSec departments will simply disappear ignores how real organizations operate. Enterprises do not eliminate risk ownership because a new capability appears. Here are three reasons why.
Why AppSec Isn't Going Away
First, AppSec requires separation of duties. No serious organization will accept a world in which the same system that generates code is also the final authority on whether that code is safe. Governance, compliance, and fiduciary responsibility demand independent evaluation. Even if a frontier model could perfectly write code or perfectly identify vulnerabilities, enterprises would still require a distinct security function.
Second, AppSec is not a one-time event. Codebases are living systems, and AI-assisted development accelerates change velocity. A single code review of a static repository, even a sophisticated one, does not solve the longitudinal problem of risk accumulation and drift. In the AI era, we’ll still need a security function that reasons about deltas (code diffs), intent, and impact 24/7.
Third, AppSec is organizational before it’s technical. A vulnerability does not exist in isolation; it exists in the context of business criticality and the compensating controls of the org. Determining whether something matters requires an understanding of the environment.
These realities do not disappear because a model can reason more effectively.
However, AI changes how we should build AppSec programs.
The event-aggregation model that dominated the last decade is showing its age. Pulling findings from disparate scanners into a centralized platform and prioritizing them via API is an after-the-fact exercise. It treats security as the normalization of noise rather than the generation of insight. (I have gone on long rants in conference talks in the past about how we allowed ourselves to largely leave engineering and move towards actuarial science. Sorry for the rants, but I still believe them.)
Enter Code Security Intelligence
The next era requires something different: Code Security Intelligence.
Code Security Intelligence is not a bundle of scanners. It is a system that understands software the way a senior security engineer would: by reconstructing intent, modeling behavior, evaluating exposure, and aligning technical findings with organizational policy. It operates at the risk layer rather than the event-detection layer. Said another way, it reasons about how code behaves in context, not merely whether a pattern appears in isolation.
Over the past year at DryRun Security, we have seen firsthand how deep, context-aware analysis surfaces issues that legacy tooling simply misses. In one engagement, our repo scanning product identified twenty real zero-days that had eluded existing scanners for years. Seriously. The surprise was not just that leveraging AI could find them; it was that organizations realized their existing tooling was failing them. You can’t ASPM your way out of that.
Anthropic’s announcement is not a signal that AppSec is obsolete. It’s confirmation that deep reasoning over code is now table stakes. Tools that cannot reason will struggle, and most of the tool categories in the space will collapse.
But the discipline of application security is not getting deleted. If anything, it becomes more critical as AI accelerates code velocity and increases architectural complexity. The attack surface expands in tandem with productivity gains. The need for intelligent, context-aware risk evaluation grows accordingly.
The future of AppSec will not be defined by more scanners. It will be defined by systems capable of true Code Security Intelligence.