| Tool | Accuracy of Findings | Detects Non-Pattern-Based Issues? | Coverage of SAST Findings | Speed of Scanning | Usability & Dev Experience |
|---|---|---|---|---|---|
| DryRun Security | Very high – caught multiple critical issues missed by others | Yes – context-based analysis, logic flaws & SSRF | Broad coverage of standard vulns, logic flaws, and extendable | Near real-time PR feedback | |
| Snyk Code | High on well-known patterns (SQLi, XSS), but misses other categories | Limited – AI-based, focuses on recognized vulnerabilities | Good coverage of standard vulns; may miss SSRF or advanced auth logic issues | Fast, often near PR speed | Decent GitHub integration, but rules are a black box |
| GitHub Advanced Security (CodeQL) | Very high precision for known queries, low false positives | Partial – strong dataflow for known issues, needs custom queries | Good for SQLi and XSS, but logic flaws require advanced CodeQL experience | Moderate to slow (GitHub Action based) | Requires CodeQL expertise for custom logic |
| Semgrep | Medium, but there is a good community for adding rules | Primarily pattern-based with limited dataflow | Decent coverage with the right rules; can still miss advanced logic or SSRF | Fast scans | Has custom rules, but dev teams must maintain them |
| SonarQube | Low – misses serious issues in our testing | Limited – mostly pattern-based, code quality oriented | Basic coverage for standard vulns; many hotspots require manual review | Moderate, usually in CI | Dashboard-based approach; can pass “quality gate” despite real vulns |
| Vulnerability Class | Snyk (partial) | GitHub (CodeQL) (partial) | Semgrep | SonarQube | DryRun Security |
|---|---|---|---|---|---|
| SQL Injection * | | | | | |
| Cross-Site Scripting (XSS) | | | | | |
| SSRF | | | | | |
| Auth Flaw / IDOR | | | | | |
| User Enumeration | | | | | |
| Hardcoded Token | | | | | |
| Tool | Accuracy of Findings | Detects Non-Pattern-Based Issues? | Coverage of C# Vulnerabilities | Scan Speed | Developer Experience |
|---|---|---|---|---|---|
| DryRun Security | Very high – caught all critical flaws missed by others | Yes – context-based analysis finds logic errors, auth flaws, etc. | Broad coverage of OWASP Top 10 vulns plus business logic issues | Near real-time (PR comment within seconds) | Clear single PR comment with detailed insights; no config or custom scripts needed |
| Snyk Code | High on known patterns (SQLi, XSS), but misses logic/flow bugs | Limited – focuses on recognizable vulnerability patterns | Good for standard vulns; may miss SSRF or auth logic issues | Fast (integrates into PR checks) | Decent GitHub integration, but rules are a black box (no easy customization) |
| GitHub Advanced Security (CodeQL) | Low – missed everything except SQL Injection | Mostly pattern-based | Low – only discovered SQL Injection | Slowest of all, but finished in 1 minute | Concise annotation with a suggested fix and optional auto-remediation |
| Semgrep | Medium – finds common issues with community rules, some misses | Primarily pattern-based, limited data flow analysis | Decent coverage with the right rules; misses advanced logic flaws | Very fast (runs as lightweight CI) | Custom rules possible, but require maintenance and security expertise |
| SonarQube | Low – missed serious issues in our testing | Mostly pattern-based (code quality focus) | Basic coverage for known vulns; many issues flagged as “hotspots” require manual review | Moderate (runs in CI/CD pipeline) | Results in dashboard; risk of false sense of security if quality gate passes despite vulnerabilities |
| Vulnerability Class | Snyk Code | GitHub Advanced Security (CodeQL) | Semgrep | SonarQube | DryRun Security |
|---|---|---|---|---|---|
| SQL Injection (SQLi) | | | | | |
| Cross-Site Scripting (XSS) | | | | | |
| Server-Side Request Forgery (SSRF) | | | | | |
| Auth Logic/IDOR | | | | | |
| User Enumeration | | | | | |
| Hardcoded Credentials | | | | | |
| Vulnerability | DryRun Security | Semgrep | GitHub CodeQL | SonarQube | Snyk Code |
|---|---|---|---|---|---|
| 1. Remote Code Execution via Unsafe Deserialization | | | | | |
| 2. Code Injection via eval() Usage | | | | | |
| 3. SQL Injection in a Raw Database Query | | | | | |
| 4. Weak Encryption (AES ECB Mode) | | | | | |
| 5. Broken Access Control / Logic Flaw in Authentication | | | | | |
| **Total Found** | 5/5 | 3/5 | 1/5 | 1/5 | 0/5 |
| Vulnerability | DryRun Security | Snyk | CodeQL | SonarQube | Semgrep |
|---|---|---|---|---|---|
| Server-Side Request Forgery (SSRF) | | | | (Hotspot) | |
| Cross-Site Scripting (XSS) | | | | | |
| SQL Injection (SQLi) | | | | | |
| IDOR / Broken Access Control | | | | | |
| Invalid Token Validation Logic | | | | | |
| Broken Email Verification Logic | | | | | |
| Dimension | Why It Matters |
|---|---|
| Surface | Entry points & data sources highlight tainted flows early. |
| Language | Code idioms reveal hidden sinks and framework quirks. |
| Intent | What is the purpose of the code being changed/added? |
| Design | Robustness and resilience of the changing code. |
| Environment | Libraries, build flags, and infrastructure (IaC) metadata all give clues about the risks in changing code. |
| KPI | Pattern-Based SAST | DryRun CSA |
|---|---|---|
| Mean Time to Regex | 3–8 hrs per noisy finding set | Not required |
| Mean Time to Context | N/A | < 1 min |
| False-Positive Rate | 50–85% | < 5% |
| Logic-Flaw Detection | < 5% | 90%+ |
| | JWT Algorithm Confusion Attack | Insecure OIDC Endpoint Communication |
|---|---|---|
| Severity | Critical | High |
| Location | utils/authorization.py:L118 | utils/authorization.py:L49, L82 & L164 |
| Issue | jwt.decode() selects the algorithm from unverified JWT headers. | urllib.request.urlopen called without explicit TLS/CA handling. |
| Impact | Complete auth bypass (switch RS256→HS256, forge tokens with the public key as HMAC secret). | Susceptible to MITM if default SSL behavior is weakened or the cert store is compromised. |
| Remediation | Replace the dynamic algorithm selection with a fixed, expected algorithm list: change line 118 from algorithms=[unverified_header.get('alg', 'RS256')] to algorithms=['RS256'] so only RS256 tokens are accepted, and validate the header algorithm before token verification. | Create a secure SSL context with ssl.create_default_context() and proper certificate verification, pass it to an HTTPSHandler for explicit SSL/TLS configuration, set explicit timeouts on all HTTP requests to prevent hanging connections, and handle SSL certificate validation failures explicitly. |
| Key Insight | This vulnerability arises from trusting an unverified portion of the JWT to determine the verification method itself. | This vulnerability stems from a lack of explicit secure communication practices, leaving the application reliant on potentially weak default behaviors. |
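The two remediations above can be sketched in a few lines of standard-library Python. This is a minimal illustration under stated assumptions, not the actual fix from the engagement: the pre-verification algorithm check stands in for a pinned PyJWT call such as `jwt.decode(token, public_key, algorithms=["RS256"])`, and the OIDC issuer URL in the usage comment is hypothetical.

```python
import base64
import json
import ssl
import urllib.request

ALLOWED_ALGS = {"RS256"}  # pin the expected algorithm; never trust the token header's choice


def check_token_alg(token: str) -> str:
    """Reject a JWT whose (unverified) header names an unexpected algorithm.

    This runs *before* signature verification; the verifier itself must also
    be pinned, e.g. jwt.decode(token, public_key, algorithms=["RS256"]).
    """
    header_b64 = token.split(".")[0]
    header_b64 += "=" * (-len(header_b64) % 4)  # restore stripped base64url padding
    header = json.loads(base64.urlsafe_b64decode(header_b64))
    alg = header.get("alg")
    if alg not in ALLOWED_ALGS:
        raise ValueError(f"unexpected JWT algorithm: {alg!r}")
    return alg


def make_opener() -> urllib.request.OpenerDirector:
    """Build a URL opener with an explicit, certificate-verifying TLS context."""
    ctx = ssl.create_default_context()  # loads the system CA store, verifies hostnames
    assert ctx.verify_mode == ssl.CERT_REQUIRED
    return urllib.request.build_opener(urllib.request.HTTPSHandler(context=ctx))


# Usage (issuer URL is a placeholder):
# opener = make_opener()
# with opener.open("https://issuer.example/.well-known/openid-configuration",
#                  timeout=5) as resp:  # explicit timeout prevents hanging connections
#     config = json.load(resp)
```

The point of the explicit `ssl.create_default_context()` is that secure behavior is stated in the code rather than inherited from whatever defaults the runtime happens to ship with.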
AI in AppSec
February 23, 2026

Disruption, Not Deletion: How AI Is Rewiring Application Security

Anthropic’s release of Claude Code Security triggered a (now) familiar news cycle.

Within hours, social feeds filled with hot takes and security stocks started tanking. Commentators suggested that if frontier models can deeply reason over entire repositories, perhaps traditional AppSec is no longer necessary.

It’s a compelling narrative. It’s also the wrong one.

What we’re witnessing is not the deletion of application security; it’s the disruption of the way we’ve historically implemented it.

Huge difference.

For years, AppSec has operated in fragmentation. We built categories around specific detection techniques: SAST for static flaws, DAST for runtime behavior, SCA for dependencies, secrets scanners for credential leakage, and so on. Each tool generated findings, and those findings were normalized and aggregated into dashboards, either by homegrown tools or by yet another category: ASPM. AppSec teams were forced to walk the line between noisy output and developer happiness.

This approach grew out of twenty years of problems, and of tools that were mostly just advanced pattern matching.

The approach was rational but never elegant.

Large language models that reason about data flow, authorization, and architectural intent challenge that entire paradigm. When a model can synthesize context instead of simply matching rules, the “tool salad” approach begins to look like a workaround for technical constraints that no longer exist.

This is the disruption.

It’s not a disruption of the need for AppSec. It’s a disruption of the scaffolding we built to compensate for the limitations of our tooling.

The dramatic headline that AppSec departments will simply disappear ignores how real organizations operate. Enterprises do not eliminate risk ownership because a new capability appears. Here are my three reasons why.

Why AppSec Isn't Going Away

First, AppSec requires separation of duties. No serious organization will accept a world in which the same system that generates code is also the final authority on whether that code is safe. Governance, compliance, and fiduciary responsibility demand independent evaluation; even if a frontier model could perfectly write code or perfectly identify vulnerabilities, enterprises would still require a distinct security function.

Second, AppSec is not a one-time event. Codebases are living systems, and AI-assisted development accelerates change velocity. A single code review of a static repository, even a sophisticated one, does not solve the longitudinal problem of risk accumulation and drift. In the AI era, we’ll still need a security function that reasons about deltas (code diffs), intent, and impact 24/7.

Third, AppSec is organizational before it’s technical. A vulnerability does not exist in isolation; it exists in the context of business criticality and the compensating controls of the org. Determining whether something matters requires an understanding of the environment.

These realities do not disappear because a model can reason more effectively.

However, AI changes how we should build AppSec programs.

The event-aggregation model that dominated the last decade is showing its age. Pulling findings from disparate scanners into a centralized platform and prioritizing them via API is an after-the-fact exercise. It treats security as the normalization of noise rather than the generation of insight. (I have gone on long rants in conference talks in the past about how we allowed ourselves to largely leave engineering and move towards actuarial science. Sorry for the rants, but I still believe them.)

Enter Code Security Intelligence

The next era requires something different: Code Security Intelligence.

Code Security Intelligence is not a bundle of scanners. It is a system that understands software the way a senior security engineer would: by reconstructing intent, modeling behavior, evaluating exposure, and aligning technical findings with organizational policy. It operates at the risk layer rather than the event detection layer. Or said another way, it reasons about how code behaves in context, not merely whether a pattern appears in isolation.

Over the past year at DryRun Security, we have seen firsthand how deep, context-aware analysis surfaces issues that legacy tooling simply misses. In one engagement, our repo scanning product identified twenty real zero-days that had eluded existing scanners for years. Seriously. The surprise was not just that leveraging AI could find them; it was that organizations realized their existing tooling was failing them. You can’t ASPM your way out of that.

Anthropic’s announcement is not a signal that AppSec is obsolete. It’s confirmation that deep reasoning over code is now table stakes; tools that cannot reason will struggle, and most of the tool categories in the space will collapse.

But the discipline of application security is not getting deleted. If anything, it becomes more critical as AI accelerates code velocity and increases architectural complexity. The attack surface expands in tandem with productivity gains. The need for intelligent, context-aware risk evaluation grows accordingly.

The future of AppSec will not be defined by more scanners. It will be defined by systems capable of true Code Security Intelligence.