AI in AppSec
December 22, 2025

The Half Life (and Decay) of Static Rules in a Modern Codebase

How pattern-based controls weaken as your codebase evolves

A security leader recently shared a story that keeps replaying in my mind.

Their team had invested serious time writing around thirty custom Semgrep rules to catch IDOR and authorization issues. It was a major lift: weeks of a senior engineer’s time, or months for more junior security engineers, plus plenty of cross-team conversations to isolate patterns and confirm where risky behavior showed up in production.

It worked at the start. The rules caught real issues, dashboards looked cleaner, and the team finally felt like they had a handle on a stubborn class of bugs.

Then the system started to decay.

Developers gradually learned what made the rules fire. They noticed which call shapes were watched and which were not. They discovered which directories triggered scrutiny and which ones slipped by. Some of this was organic adaptation. Some of it was intentional. New code landed in places that were not monitored yet. Functions were renamed. Structures changed just enough so that the pattern matchers would miss them.
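
To make that concrete, here is a minimal Python sketch. The names and the rule are hypothetical, invented for illustration: assume a custom rule flags direct db.get_document(...) calls in handlers that have no nearby ownership check.

    class Db:
        def get_document(self, doc_id):
            return {"id": doc_id, "body": "..."}

    db = Db()

    def get_document_handler(user_id, doc_id):
        # Original shape. The hypothetical rule matches `db.get_document($ID)`
        # in a handler with no ownership check, so it fires here.
        return db.get_document(doc_id)

    def _load(model, object_id):
        # Indirection introduced during an ordinary refactor.
        return getattr(db, f"get_{model}")(object_id)

    def fetch_resource_handler(user_id, resource_id):
        # Identical IDOR risk: still no check that user_id owns the object.
        # But the call shape the rule keys on no longer appears, so it stays silent.
        return _load("document", resource_id)

The second handler carries exactly the same risk as the first. Nothing about the security of the code improved; only its shape changed.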

The rules were still “on,” but the control they exerted over real risk was quietly fading. That fading is the half life of static rules.

Static rules in a changing environment

Static rules feel sturdy because they are written down, versioned, and tied to past incidents. But a codebase is not static. It resembles a dynamic chemical system where the underlying reactions shift as ingredients change.

Two forces drive the decay of static rules:

Natural causes

  • New frameworks, languages, and service boundaries reshape core patterns.

  • Architectural refactors alter call paths and data flow.

  • AI-generated code and rapid iteration introduce novel shapes that do not resemble earlier examples.

Intentional avoidance

  • Developers under shipping pressure work around rules that create friction.

  • Teams move risky behavior into unmonitored directories or patterns.

  • Code is reorganized so that the rule no longer matches the intended target.

None of this is malicious. It is human behavior in a system where incentives and controls are misaligned. Studies on workplace security show that employees frequently bypass controls that disrupt workflow. Engineers, in particular, are known to adopt workarounds that persist long after the initial constraint has passed.

Rules become targets.

This is exactly what Goodhart’s law describes. Once a measure becomes a target, people optimize around the measure instead of the underlying goal. When the goal becomes “zero rule violations,” teams naturally focus on avoiding the alert rather than addressing the underlying risk. The meter reads green, but the risk is quietly migrating elsewhere.

Concept drift, without the updates

There is also a statistical side to this decay. In machine learning, concept drift happens when the relationship between inputs and outputs changes over time, leaving models trained on old data increasingly inaccurate. Security analytics has seen this for years, especially in malware detection, where fixed signatures lose value as attackers vary their behavior.
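
In the standard machine learning formulation, drift between two points in time t0 and t1 means the joint distribution of inputs and labels has shifted:

    ∃ x : P_t0(x, y) ≠ P_t1(x, y)

A model (or rule) fit against P_t0 keeps scoring the world as if it were still drawn from P_t0, while the code it scans now comes from P_t1.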

Static rules decay the same way. They remain fixed even as the codebase moves on. They treat yesterday’s patterns as if they still describe today’s risks. Over time the mismatch grows.

How this decay shows up

In most organizations, a static rule follows a predictable life cycle. It starts strong. It gradually loses touch with the system. Eventually it becomes background noise: too risky to delete, too inaccurate to trust, and no longer tied to the real shape of the application.

The story that opened this piece is simply an accelerated version. Instead of months of silent drift, developer adaptation sped the decay.

Designing controls with decay in mind

The answer is not to eliminate rules but to stop pretending they are permanent. Some useful principles emerge when you accept the half life of controls.

Treat rules as hypotheses, not laws. Write them with an expected expiration window. Decide upfront how they will be reviewed, replaced, or retired.
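
One lightweight way to put this into practice is to stamp every rule with an owner and a review-by date, then surface rules that have outlived their window. Here is a minimal Python sketch; the metadata fields (owner, review_by) and rule IDs are invented for illustration, not a feature of any particular scanner.

    from datetime import date

    # Hypothetical rule metadata, kept alongside the patterns themselves.
    RULES = [
        {"id": "idor-raw-lookup", "owner": "appsec", "review_by": date(2026, 3, 1)},
        {"id": "jwt-alg-from-header", "owner": "platform", "review_by": date(2025, 9, 1)},
    ]

    def stale_rules(rules, today=None):
        # A rule past its review_by date is a hypothesis nobody has re-tested.
        today = today or date.today()
        return [r for r in rules if r["review_by"] < today]

    for rule in stale_rules(RULES):
        print(f"stale rule {rule['id']} (owner: {rule['owner']}), review was due {rule['review_by']}")

Running a check like this in CI turns “review the rules” from a good intention into a visible, failing signal.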

Tie rules to business risk, not just code fragments. Rules that exist only because someone once saw a dangerous line of code are fragile. Rules tied to the purpose of a system, such as a money movement API, last longer and adapt more naturally.

Favor contextual analysis over pattern matching. Tools that understand behavior, data flow, and intent are better suited for environments that shift quickly. Static rules can support this but should not carry the entire burden.

Pay attention to developer experience. The more a control feels like unnecessary drag, the faster it decays. The more it aligns with how teams actually build and ship, the longer it remains meaningful.

Static rules are still useful. They encode lessons learned and help catch known issues. But they are not timeless. In a living codebase they behave like unstable elements: strong at first, then increasingly noisy as the environment shifts.

If we start designing our security programs with the half life of controls in mind, we will spend less time enforcing outdated patterns and more time keeping our defenses aligned with the real system we are trying to protect.