| Tool | Accuracy of Findings | Detects Non-Pattern-Based Issues? | Coverage of SAST Findings | Speed of Scanning | Usability & Dev Experience |
|---|---|---|---|---|---|
| DryRun Security | Very high – caught multiple critical issues missed by others | Yes – context-based analysis, logic flaws & SSRF | Broad coverage of standard vulns, logic flaws, and extendable | Near real-time PR feedback | |
| Snyk Code | High on well-known patterns (SQLi, XSS), but misses other categories | Limited – AI-based, focuses on recognized vulnerabilities | Good coverage of standard vulns; may miss SSRF or advanced auth logic issues | Fast, often near PR speed | Decent GitHub integration, but rules are a black box |
| GitHub Advanced Security (CodeQL) | Very high precision for known queries, low false positives | Partial – strong dataflow for known issues, needs custom queries | Good for SQLi and XSS, but logic flaws require advanced CodeQL experience | Moderate to slow (GitHub Action based) | Requires CodeQL expertise for custom logic |
| Semgrep | Medium, but there is a good community for adding rules | Primarily pattern-based with limited dataflow | Decent coverage with the right rules; can still miss advanced logic or SSRF | Fast scans | Has custom rules, but dev teams must maintain them |
| SonarQube | Low – misses serious issues in our testing | Limited – mostly pattern-based, code quality oriented | Basic coverage for standard vulns; many hotspots require manual review | Moderate, usually in CI | Dashboard-based approach; can pass “quality gate” despite real vulns |
[Table: per-vulnerability detection results for Snyk (partial), GitHub Advanced Security (CodeQL) (partial), Semgrep, SonarQube, and DryRun Security, covering SQL Injection, Cross-Site Scripting (XSS), SSRF, Auth Flaw / IDOR, User Enumeration, and Hardcoded Token; individual cell results not recoverable.]
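The SQL injection class above is the most mechanical of these to demonstrate. A minimal Python sketch of a raw interpolated query versus its parameterized fix; the table name and input string are illustrative:

```python
import sqlite3

# In-memory database with a single user, for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "' OR 1=1 --"  # attacker-controlled input

# VULNERABLE: string interpolation lets the input rewrite the WHERE clause,
# so the query matches every row.
injected = conn.execute(
    f"SELECT * FROM users WHERE name = '{malicious}'"
).fetchall()

# SAFE: a parameterized query treats the input strictly as data,
# so no row matches the literal string "' OR 1=1 --".
parameterized = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()
```

The parameterized form is the pattern-detectable fix; the point of the tables above is that tools diverge far more on the logic-flaw classes than on this one.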
| Tool | Accuracy of Findings | Detects Non-Pattern-Based Issues? | Coverage of C# Vulnerabilities | Scan Speed | Developer Experience |
|---|---|---|---|---|---|
| DryRun Security | Very high – caught all critical flaws missed by others | Yes – context-based analysis finds logic errors, auth flaws, etc. | Broad coverage of OWASP Top 10 vulns plus business logic issues | Near real-time (PR comment within seconds) | Clear single PR comment with detailed insights; no config or custom scripts needed |
| Snyk Code | High on known patterns (SQLi, XSS), but misses logic/flow bugs | Limited – focuses on recognizable vulnerability patterns | Good for standard vulns; may miss SSRF or auth logic issues | Fast (integrates into PR checks) | Decent GitHub integration, but rules are a black box (no easy customization) |
| GitHub Advanced Security (CodeQL) | Low – missed everything except SQL Injection | Mostly pattern-based | Low – only discovered SQL Injection | Slowest of all, but finished in 1 minute | Concise annotation with a suggested fix and optional auto-remediation |
| Semgrep | Medium – finds common issues with community rules, some misses | Primarily pattern-based, limited data flow analysis | Decent coverage with the right rules; misses advanced logic flaws | Very fast (runs as lightweight CI) | Custom rules possible, but require maintenance and security expertise |
| SonarQube | Low – missed serious issues in our testing | Mostly pattern-based (code quality focus) | Basic coverage for known vulns; many issues flagged as “hotspots” require manual review | Moderate (runs in CI/CD pipeline) | Results in dashboard; risk of false sense of security if quality gate passes despite vulnerabilities |
[Table: per-vulnerability detection results for Snyk Code, GitHub Advanced Security (CodeQL), Semgrep, SonarQube, and DryRun Security, covering SQL Injection (SQLi), Cross-Site Scripting (XSS), Server-Side Request Forgery (SSRF), Auth Logic/IDOR, User Enumeration, and Hardcoded Credentials; individual cell results not recoverable.]
[Table: detection of five test vulnerabilities (1. Remote Code Execution via Unsafe Deserialization, 2. Code Injection via eval() Usage, 3. SQL Injection in a Raw Database Query, 4. Weak Encryption (AES ECB Mode), 5. Broken Access Control / Logic Flaw in Authentication) by tool. Total found: DryRun Security 5/5, Semgrep 3/5, GitHub CodeQL 1/5, SonarQube 1/5, Snyk Code 0/5. Per-vulnerability cells not recoverable.]
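Test case 2 above, code injection via eval(), fits in a few lines of Python. The attacker string and variable names are illustrative; the point is that eval() executes arbitrary expressions while ast.literal_eval() accepts only literals:

```python
import ast

user_input = "__import__('os').getcwd()"  # attacker-controlled string

# VULNERABLE: eval() executes the expression, importing os and calling
# getcwd() on the server. Any expression the attacker supplies will run.
result = eval(user_input)

# SAFER: ast.literal_eval only accepts Python literals (strings, numbers,
# tuples, lists, dicts, booleans, None) and rejects code outright.
try:
    ast.literal_eval(user_input)
    rejected = False
except (ValueError, SyntaxError):
    rejected = True
```

When the input genuinely needs to be an expression rather than a literal, the usual fix is a purpose-built parser or a whitelisted dispatch table, not a sandboxed eval().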
[Table: per-vulnerability detection results for DryRun Security, Snyk, CodeQL, SonarQube, and Semgrep, covering Server-Side Request Forgery (SSRF) ("Hotspot"), Cross-Site Scripting (XSS), SQL Injection (SQLi), IDOR / Broken Access Control, Invalid Token Validation Logic, and Broken Email Verification Logic; individual cell results not recoverable.]
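Of the classes above, SSRF is the one that usually needs an explicit allowlist rather than a pattern rule. A minimal Python sketch of URL screening, assuming a hypothetical ALLOWED_HOSTS set and an illustrative is_safe_url helper:

```python
import ipaddress
from urllib.parse import urlparse

# Hypothetical allowlist of hosts this service is permitted to call.
ALLOWED_HOSTS = {"api.example.com"}

def is_safe_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False  # reject file://, gopher://, etc.
    host = parsed.hostname or ""
    try:
        # Block IP literals in loopback, RFC 1918, and link-local ranges
        # (e.g. the 169.254.169.254 cloud metadata endpoint).
        if ipaddress.ip_address(host).is_private:
            return False
    except ValueError:
        pass  # not an IP literal; fall through to the hostname allowlist
    return host in ALLOWED_HOSTS
```

Note this only screens the URL string; it does not defend against DNS rebinding or post-request redirects, which need checks at connection time.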
| Dimension | Why It Matters |
|---|---|
| Surface | Entry points & data sources highlight tainted flows early. |
| Language | Code idioms reveal hidden sinks and framework quirks. |
| Intent | The purpose of the code being changed or added. |
| Design | The robustness and resilience of the code being changed. |
| Environment | Libraries, build flags, and infrastructure metadata (including IaC) all give clues about the risks of a change. |
| KPI | Pattern-Based SAST | DryRun CSA |
|---|---|---|
| Mean Time to Regex | 3–8 hrs per noisy finding set | Not required |
| Mean Time to Context | N/A | < 1 min |
| False-Positive Rate | 50–85% | < 5% |
| Logic-Flaw Detection | < 5% | 90%+ |
Finding 1: JWT Algorithm Confusion Attack
Severity: Critical
Location: utils/authorization.py:L118
Issue: jwt.decode() selects the algorithm from unverified JWT headers.
Impact: Complete auth bypass (switch RS256→HS256 and forge tokens using the public key as the HMAC secret).
Remediation: Replace the dynamic algorithm selection with a fixed, expected algorithm list. Change line 118 from algorithms=[unverified_header.get('alg', 'RS256')] to algorithms=['RS256'] so only RS256 tokens are accepted, and validate the header algorithm before token verification to ensure it matches expected values.
Key Insight: This vulnerability arises from trusting an unverified portion of the JWT to determine the verification method itself.

Finding 2: Insecure OIDC Endpoint Communication
Severity: High
Location: utils/authorization.py:L49, L82 & L164
Issue: urllib.request.urlopen is called without explicit TLS/CA handling.
Impact: Susceptible to MITM if default SSL behavior is weakened or the certificate store is compromised.
Remediation: Create a secure SSL context using ssl.create_default_context() with proper certificate verification, configure explicit timeout values for all HTTP requests to prevent hanging connections, add explicit SSL/TLS configuration by creating an HTTPSHandler with the secure context, and implement error handling specifically for SSL certificate validation failures.
Key Insight: This vulnerability stems from a lack of explicit secure communication practices, leaving the application reliant on potentially weak default behaviors.
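Both remediations can be sketched with the standard library alone. The names check_token_algorithm and fetch_oidc_config are illustrative, and a real implementation would hand the token to a JWT library (e.g. PyJWT called with algorithms=['RS256']) after this pre-check:

```python
import base64
import json
import ssl
import urllib.request

# Remediation 1: pin the accepted JWT algorithm instead of trusting the header.
ALLOWED_ALGORITHMS = {"RS256"}

def _b64url_decode(segment: str) -> bytes:
    # JWT segments are base64url without padding; restore it before decoding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def check_token_algorithm(token: str) -> dict:
    """Reject tokens whose (unverified) header claims an unexpected
    algorithm, before any signature check runs."""
    header = json.loads(_b64url_decode(token.split(".")[0]))
    if header.get("alg") not in ALLOWED_ALGORITHMS:
        raise ValueError(f"unexpected JWT algorithm: {header.get('alg')!r}")
    return header  # safe to pass to a verifier invoked with algorithms=['RS256']

# Remediation 2: explicit TLS context and timeout for OIDC endpoint requests.
ssl_ctx = ssl.create_default_context()  # CERT_REQUIRED + hostname checks by default
opener = urllib.request.build_opener(urllib.request.HTTPSHandler(context=ssl_ctx))

def fetch_oidc_config(url: str, timeout: float = 5.0) -> bytes:
    try:
        with opener.open(url, timeout=timeout) as resp:
            return resp.read()
    except ssl.SSLCertVerificationError as exc:
        raise RuntimeError(f"certificate validation failed for {url}") from exc
```

Pinning the algorithm list up front closes the RS256→HS256 downgrade, and the explicit context makes the TLS posture visible in code rather than dependent on interpreter defaults.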
DryRun Security News

DryRun Security Named a High Performer in SAST on G2 Spring 2026

4.9 out of 5 stars. Verified customers only.

G2 released its Spring 2026 reports this week, and DryRun Security was recognized as a High Performer in the Static Application Security Testing (SAST) category, earning a 4.9 out of 5 rating from verified users.

G2 rankings are based entirely on customer feedback. No analyst opinions. No vendor submissions. Just security teams sharing what is actually working in their environments.

Many of the reviews point to something security teams are dealing with right now. Engineering teams are shipping more code than ever, and more of it is written with help from AI tools like Cursor, Copilot, and Claude Code. Traditional scanners were built for a very different development pace, and teams are starting to feel that gap.

What the reviews actually say

The same theme shows up across many of the reviews. Traditional rule-based SAST tools do a good job catching syntax issues and known patterns, but they often struggle with vulnerabilities tied to authorization logic, business workflows, and how code behaves in context.

That’s where many real exploits tend to show up.

Customers called that out directly in their reviews.

“Catches Logic and Authorization Flaws Traditional SAST Often Misses”
“We use traditional SAST tools, but they mostly rely on rule-based analysis. DryRun focuses on understanding code intent and logical flow, which makes it effective at finding authorization flaws, broken object-level authorization, insecure direct object reference, and insecure business logic. As AI assistants such as Cursor or ChatGPT-based tools become more widely adopted, we face new risks from AI-authored code. DryRun helps us focus specifically on the logic flaws that show up in AI-generated code snippets, issues that traditional scanners often miss.” - Jabez A., Director of Product Security Architecture (5/5)

Another reviewer summarized it this way.

“DryRun’s Context-Aware Scanning Beats Legacy SAST”
“DryRun’s use of LLMs and inclusion of context about the application makes it perform far better than traditional SAST tools. It is able to find business logic vulnerabilities that the legacy SAST scanners are simply unable to find.” - Dan C., CTO (5/5)

Most traditional scanners focus on pattern matching. DryRun looks at how code behaves in context and surfaces the security impact directly in the pull request, giving security teams clearer visibility into what actually changed and what risk it might introduce.

High signal. Low noise.

Security tools only work if developers trust them. When scans generate too many false positives, the real issues get buried.

That signal quality shows up frequently in DryRun reviews.

“Next Gen of SAST Tool That Has Cutting Edge Tech”
“It provides value that other SAST tools have not provided but also is not noisy, and the high accuracy lets us find very critical bugs that have been missed in the past.” - Francis D., Lead AppSec Engineer (5/5)
“AppSec signal, not noise”
“DryRun Security gives me high-signal visibility into the changes that actually matter. It has become a practical way to scale AppSec review when PR volume is high.” - Todd B., CISO (4.5/5)

Built for the way security teams actually work

Another theme across the reviews is how naturally DryRun fits into development workflows. It installs once, automatically picks up new repositories, and surfaces findings directly in pull requests so developers can review and fix issues where they already work.

“Setup is a one-time process, and any new repos are scanned automatically. Findings appear as PR comments, which makes them easy for developers to notice, review, and act on.” - Chenkai G., Security Engineer (5/5)
“DryRun easily integrates into our existing build pipeline so that scans happen automatically and our developers get near real-time feedback on vulnerabilities in their code.” - Josh S., CEO / CISO (5/5)

Some reviewers also mentioned relying on DryRun as part of their daily code review process.

“We use several code review agents, but DryRun is the one we rely on to review the security of the code.” - Jonathan C., CTO (5/5)

Why this recognition matters in the SAST category

SAST has been around for a long time, and most security teams already run static analysis in their pipelines. It’s also a crowded category with tools that have existed for years.

What’s changing is how code is written and reviewed. Teams are shipping faster, codebases are larger, and AI-assisted development is becoming part of everyday workflows.

Traditional scanners were built to detect patterns. Today, security teams increasingly need tools that help them understand how code behaves and what risk new changes actually introduce.

DryRun approaches static analysis through that lens. By analyzing code changes in context and surfacing security impact directly in pull requests, security teams get a clearer picture of what new code actually does and where risk might appear. At DryRun, we think about that approach as Code Security Intelligence.

Recognition in the SAST category from G2 reflects how security teams are evaluating tools in real production environments today.

Thank you to our customers

We’re grateful to every team that took the time to share their experience on G2. That feedback helps other security teams evaluate tools and helps us continue improving DryRun.

If you’re curious what customers are saying, you can explore the full reviews on G2.

Read the G2 reviews | Book a demo