| Tool | Accuracy of Findings | Detects Non-Pattern-Based Issues? | Coverage of SAST Findings | Speed of Scanning | Usability & Dev Experience |
| --- | --- | --- | --- | --- | --- |
| DryRun Security | Very high – caught multiple critical issues missed by others | Yes – context-based analysis, logic flaws & SSRF | Broad coverage of standard vulns, logic flaws, and extendable | Near real-time PR feedback | |
| Snyk Code | High on well-known patterns (SQLi, XSS), but misses other categories | Limited – AI-based, focuses on recognized vulnerabilities | Good coverage of standard vulns; may miss SSRF or advanced auth logic issues | Fast, often near PR speed | Decent GitHub integration, but rules are a black box |
| GitHub Advanced Security (CodeQL) | Very high precision for known queries, low false positives | Partial – strong dataflow for known issues, needs custom queries | Good for SQLi and XSS, but logic flaws require advanced CodeQL experience | Moderate to slow (GitHub Action based) | Requires CodeQL expertise for custom logic |
| Semgrep | Medium, but there is a good community for adding rules | Primarily pattern-based with limited dataflow | Decent coverage with the right rules; can still miss advanced logic or SSRF | Fast scans | Has custom rules, but dev teams must maintain them |
| SonarQube | Low – misses serious issues in our testing | Limited – mostly pattern-based, code quality oriented | Basic coverage for standard vulns; many hotspots require manual review | Moderate, usually in CI | Dashboard-based approach; can pass “quality gate” despite real vulns |
| Vulnerability Class | Snyk (partial) | GitHub (CodeQL) (partial) | Semgrep | SonarQube | DryRun Security |
| --- | --- | --- | --- | --- | --- |
| SQL Injection * | | | | | |
| Cross-Site Scripting (XSS) | | | | | |
| SSRF | | | | | |
| Auth Flaw / IDOR | | | | | |
| User Enumeration | | | | | |
| Hardcoded Token | | | | | |
| Tool | Accuracy of Findings | Detects Non-Pattern-Based Issues? | Coverage of C# Vulnerabilities | Scan Speed | Developer Experience |
| --- | --- | --- | --- | --- | --- |
| DryRun Security | Very high – caught all critical flaws missed by others | Yes – context-based analysis finds logic errors, auth flaws, etc. | Broad coverage of OWASP Top 10 vulns plus business logic issues | Near real-time (PR comment within seconds) | Clear single PR comment with detailed insights; no config or custom scripts needed |
| Snyk Code | High on known patterns (SQLi, XSS), but misses logic/flow bugs | Limited – focuses on recognizable vulnerability patterns | Good for standard vulns; may miss SSRF or auth logic issues | Fast (integrates into PR checks) | Decent GitHub integration, but rules are a black box (no easy customization) |
| GitHub Advanced Security (CodeQL) | Low – missed everything except SQL Injection | Mostly pattern-based | Low – only discovered SQL Injection | Slowest of all, but finished in 1 minute | Concise annotation with a suggested fix and optional auto-remediation |
| Semgrep | Medium – finds common issues with community rules, some misses | Primarily pattern-based, limited data flow analysis | Decent coverage with the right rules; misses advanced logic flaws | Very fast (runs as lightweight CI) | Custom rules possible, but require maintenance and security expertise |
| SonarQube | Low – missed serious issues in our testing | Mostly pattern-based (code quality focus) | Basic coverage for known vulns; many issues flagged as “hotspots” require manual review | Moderate (runs in CI/CD pipeline) | Results in dashboard; risk of false sense of security if quality gate passes despite vulnerabilities |
| Vulnerability Class | Snyk Code | GitHub Advanced Security (CodeQL) | Semgrep | SonarQube | DryRun Security |
| --- | --- | --- | --- | --- | --- |
| SQL Injection (SQLi) | | | | | |
| Cross-Site Scripting (XSS) | | | | | |
| Server-Side Request Forgery (SSRF) | | | | | |
| Auth Logic/IDOR | | | | | |
| User Enumeration | | | | | |
| Hardcoded Credentials | | | | | |
| Vulnerability | DryRun Security | Semgrep | GitHub CodeQL | SonarQube | Snyk Code |
| --- | --- | --- | --- | --- | --- |
| 1. Remote Code Execution via Unsafe Deserialization | | | | | |
| 2. Code Injection via eval() Usage | | | | | |
| 3. SQL Injection in a Raw Database Query | | | | | |
| 4. Weak Encryption (AES ECB Mode) | | | | | |
| 5. Broken Access Control / Logic Flaw in Authentication | | | | | |
| Total Found | 5/5 | 3/5 | 1/5 | 1/5 | 0/5 |
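To make test case 3 above concrete, here is a minimal stdlib-only Python sketch (the table name, column names, and inputs are invented for illustration) contrasting a raw interpolated query with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"

# Vulnerable: raw string interpolation builds SQL out of attacker input
rows = conn.execute(
    f"SELECT secret FROM users WHERE name = '{attacker_input}'"
).fetchall()
print(len(rows))  # 1 -- the injected OR predicate matched every row

# Safe: a parameterized query binds the input as data, not SQL
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(len(rows))  # 0 -- no user is literally named "' OR '1'='1"
```

Pattern-based scanners generally catch the interpolated form; the harder cases in the table (logic flaws, broken access control) have no such syntactic signature.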
| Vulnerability | DryRun Security | Snyk | CodeQL | SonarQube | Semgrep |
| --- | --- | --- | --- | --- | --- |
| Server-Side Request Forgery (SSRF) | | | | (Hotspot) | |
| Cross-Site Scripting (XSS) | | | | | |
| SQL Injection (SQLi) | | | | | |
| IDOR / Broken Access Control | | | | | |
| Invalid Token Validation Logic | | | | | |
| Broken Email Verification Logic | | | | | |
| Dimension | Why It Matters |
| --- | --- |
| Surface | Entry points & data sources highlight tainted flows early. |
| Language | Code idioms reveal hidden sinks and framework quirks. |
| Intent | What is the purpose of the code being changed or added? |
| Design | Robustness and resilience of the code being changed. |
| Environment | Libraries, build flags, and infrastructure (IaC) metadata all give clues about the risks in changing code. |
| KPI | Pattern-Based SAST | DryRun CSA |
| --- | --- | --- |
| Mean Time to Regex | 3–8 hrs per noisy finding set | Not required |
| Mean Time to Context | N/A | < 1 min |
| False-Positive Rate | 50–85% | < 5% |
| Logic-Flaw Detection | < 5% | 90%+ |
| | JWT Algorithm Confusion Attack | Insecure OIDC Endpoint Communication |
| --- | --- | --- |
| Severity | Critical | High |
| Location | utils/authorization.py:L118 | utils/authorization.py:L49, L82 & L164 |
| Issue | jwt.decode() selects the algorithm from unverified JWT headers. | urllib.request.urlopen called without explicit TLS/CA handling. |
| Impact | Complete auth bypass (switch RS256→HS256, forge tokens with the public key as HMAC secret). | Susceptible to MITM if default SSL behavior is weakened or the cert store is compromised. |
| Remediation | Replace the dynamic algorithm selection with a fixed, expected algorithm list: change line 118 from algorithms=[unverified_header.get('alg', 'RS256')] to algorithms=['RS256'] so only RS256 tokens are accepted, and validate the header algorithm against expected values before token verification. | Create a secure SSL context with ssl.create_default_context() and proper certificate verification, pass it via an HTTPSHandler, configure explicit timeouts on all HTTP requests to prevent hanging connections, and handle SSL certificate validation failures explicitly. |
| Key Insight | This vulnerability arises from trusting an unverified portion of the JWT to determine the verification method itself. | This vulnerability stems from a lack of explicit secure communication practices, leaving the application reliant on potentially weak default behaviors. |
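The two remediations above can be sketched in stdlib-only Python. The helper function and forged token below are illustrative, not the code from the finding; with PyJWT, the first fix amounts to passing a fixed algorithms=['RS256'] list to jwt.decode rather than echoing the token's own header:

```python
import base64
import json
import ssl
import urllib.request

ALLOWED_ALGS = {"RS256"}  # fixed allowlist; never trust the token's own header

def check_token_alg(token: str) -> str:
    """Reject tokens whose (unverified) header names an unexpected algorithm."""
    header_b64 = token.split(".")[0]
    # JWT segments are base64url without padding; restore padding before decoding
    header_b64 += "=" * (-len(header_b64) % 4)
    header = json.loads(base64.urlsafe_b64decode(header_b64))
    alg = header.get("alg")
    if alg not in ALLOWED_ALGS:
        raise ValueError(f"unexpected JWT alg: {alg!r}")
    return alg

# A forged header switching RS256 -> HS256, as in the attack described above
forged = base64.urlsafe_b64encode(
    json.dumps({"alg": "HS256", "typ": "JWT"}).encode()
).rstrip(b"=").decode() + ".payload.signature"

try:
    check_token_alg(forged)
except ValueError as exc:
    print(exc)  # unexpected JWT alg: 'HS256'

# For the OIDC endpoint: an explicit TLS context with verification enforced
ctx = ssl.create_default_context()  # verifies certs against the system CA store
assert ctx.verify_mode == ssl.CERT_REQUIRED
opener = urllib.request.build_opener(urllib.request.HTTPSHandler(context=ctx))
# opener.open("https://issuer.example/.well-known/openid-configuration", timeout=10)
```

The point in both cases is the same: make the secure behavior explicit in code rather than inheriting it from attacker-influenced input or library defaults.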
AI in AppSec
November 4, 2025

Why LLM Vendors Aren’t the Same as Code Security Intelligence

Frontier AI assistants are amazing at writing code. They should never be your code security intelligence solution.

If you build or secure software at scale, you already see where this is going. Right now, 90% of developers are using AI to write code, as reported by last week's DORA report. LLMs have enabled vibe coding, allowing developers and non-developers alike to write better code, faster. That is a real win.

But asking the code‑generating model to also be your security authority is like asking the pitcher to call balls and strikes. Both the pitcher and the umpire are skilled, but their roles are different, and both are essential to the game.

Frontier LLM vendors will keep getting better at “secure at write time” and perhaps a bit beyond. They will not become the independent solution for provable risk assurance over the whole codebase. Code security intelligence is about third‑party validation, full‑codebase context, policy‑driven governance, visibility, and auditability across the lifecycle.

Separation of duties is non‑negotiable

Modern security programs are built on separation of duties. The entity that creates code should not be the same authority that grants it a clean bill of health. Separation of duties is a foundational control in NIST SP 800‑53, and OWASP makes it core to its security principles.

“Secure at write time” is necessary, not sufficient

Leading models already prevent obvious syntax and security mistakes during code authoring, and will continue to improve code quality incrementally. True code security intelligence is comprehensive, reducing risk and building trust through:

  • Context and understanding beyond the immediate code update or pull request (PR)
  • Organization‑specific, enforceable policies
  • Feedback that helps developer and security teams solve complex business-logic issues
  • Insights and visibility built on learning and trends
  • Infusion of “tribal knowledge” into a substrate of intelligence about each customer’s security program
  • Evidence you can track, review on demand, take to audit, and prove to the board

The DryRun Security AI-native, agentic code security intelligence solution was built for this. At commit, our Code Review Agent and Custom Policy Agent use Contextual Security Analysis (CSA) to look beyond the current file or PR and reason about surrounding code, services, and history, so they can surface risk and enforce policy in context. Our Codebase Insight Agent then goes beyond individual code updates to provide security insights across your organization that inform reporting, visibility, trends, audits, and more.

Why “code security” will stay a checkbox for frontier vendors

Frontier LLM companies aren’t focused on your security program. Their focus is taking market share and making trillion-dollar bets while cranking out features that are just good enough to stop attrition. Follow the roadmaps: frontier vendors are racing to own consumer and enterprise interfaces, coding environments, app marketplaces, browsers, search, and media. OpenAI is shipping Sora, Codex, a browser, a GPT Store, driving construction of huge data centers, and more. Anthropic is expanding productivity features like Artifacts and enterprise connectors while running multi-million-dollar consumer ad campaigns. Both are investing hundreds of millions to build the next model that will be outdated in months. While these strategies will benefit many globally with ever-improving AI tools, code security will remain a minimum-viable-product checkbox rather than a core innovation.

DryRun Security’s focus is deeper. Contextual. We built the industry’s first AI-native, agentic code security intelligence solution. We understand the context, intent, and environment of your codebase and your org. We secure software built for the future by helping developer and security teams quiet noise, gain insights, and surface risks that pattern-based scanning tools and AI coding tools inherently miss.

What IDE and frontier LLM assistants do well, and where they stop

None of this takes away from where frontier LLM assistants shine: code suggestions, explanations, common practices, and more. For example, OpenAI’s Aardvark moves beyond the IDE to do some research in the repo, such as standing up a sandbox to model reachability. Anthropic’s Claude security review scans for dependency vulnerabilities at write time, which should help reduce classic security flaws and the noise historically introduced by AI.

Of course, we work with these leading AI coding tools to accelerate and future-proof your development program while greatly improving security intelligence. Here's how:

Frontier AI assistants + DryRun Security
| Capability | Frontier LLM assistants | DryRun code security intelligence |
| --- | --- | --- |
| Primary goal | Generate and explain code | Independently validate code risk at commit, plus intelligence across repos |
| Scope of view | Current file or PR; requires further prompting to go deeper | Agentically looks beyond the PR, with history and architecture |
| How security shows up | Reviews, suggestions, and autofixes | Evidence-backed findings, risk prioritization, AppSec insights, and audit artifacts |
| Policy model | Few guardrails, mostly general safety | Natural-language, highly orchestrated, agentic policies enforceable at scale, with an assistant to help build and manage them |
| Governance | Minimal | PR gating, quality bars, exception workflows with approvals and expiry |
| Evidence | Limited | Linked evidence, rationale, and context for each finding |
| Multi-repo and services | Partial | Unified code intelligence across repos, services, and teams |
| Compliance | Not designed for it | SOC 2, ISO 27001, NIST SSDF mappings and reporting-ready outputs |
| Who it’s for | Students, individual developers, or small teams | Developers and AppSec leaders who want risk reduction and insight |

Note on autofix: IDE assistants often propose autofixes for known scanner alerts, usually with limited context. DryRun Security does not auto‑modify your code. It explains risk and links evidence, so your team stays in control, and it integrates with your AI coding assistant.

Bottom line

Frontier LLMs will keep getting better at helping create high-velocity code at write time. They will need to be complemented by an independent, agentic code security intelligence solution that can see across your codebase, enforce your policies, build insights, and give you evidence you can defend.

Let the pitcher pitch. Let DryRun Security be your umpire.

Request a DryRun Security demo: dryrun.security/get-a-demo

Learn more about Contextual Security Analysis: https://www.dryrun.security/resources/csa-guide