How pattern-based controls weaken as your codebase evolves
A security leader recently shared a story that keeps replaying in my mind.
Their team had invested serious time writing roughly thirty custom Semgrep rules to catch IDOR and authorization issues. It was a major lift: weeks of a senior engineer’s time, or months for more junior security engineers, plus plenty of cross-team conversations to isolate patterns and confirm where risky behavior showed up in production.
It worked at the start. The rules caught real issues, dashboards looked cleaner, and the team finally felt like they had a handle on a stubborn class of bugs.
Then the system started to decay.
Developers gradually learned what made the rules fire. They noticed which call shapes were watched and which were not. They discovered which directories triggered scrutiny and which ones slipped by. Some of this was organic adaptation. Some of it was intentional. New code landed in places that were not monitored yet. Functions were renamed. Structures changed just enough so that the pattern matchers would miss them.
The rules were still “on,” but the control they exerted over real risk was quietly fading. That fading is the half-life of static rules.
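To make the evasion mechanics concrete, here is a minimal sketch. It assumes a hypothetical regex-style rule (real Semgrep rules match on syntax trees, but the failure mode is the same): once risky behavior moves behind a wrapper function, the call shape changes and the pattern no longer fires.

```python
import re

# A naive pattern-based "rule": flag object lookups keyed directly on a
# request-supplied id, a classic IDOR smell. The pattern and code shapes
# below are invented for illustration.
IDOR_RULE = re.compile(r"Document\.objects\.get\(id=request\.")

original = "doc = Document.objects.get(id=request.GET['doc_id'])"
renamed = "doc = fetch_doc(request.GET['doc_id'])"  # same risk, new shape

print(bool(IDOR_RULE.search(original)))  # the rule fires on the known shape
print(bool(IDOR_RULE.search(renamed)))   # the wrapped call slips past it
```

The risk is identical in both lines; only the surface syntax changed. A pattern matcher keyed to yesterday's shape sees nothing.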
Static rules in a changing environment
Static rules feel sturdy because they are written down, versioned, and tied to past incidents. But a codebase is not static.
It resembles a dynamic chemical system where the underlying reactions shift as ingredients change.
Two forces drive the decay of static rules:
Natural causes
- New frameworks, languages, and service boundaries reshape core patterns.
- Architectural refactors alter call paths and data flow.
- AI-generated code and rapid iteration introduce novel shapes that do not resemble earlier examples.
Intentional avoidance
- Developers under shipping pressure work around rules that create friction.
- Teams move risky behavior into unmonitored directories or patterns.
- Code is reorganized so that the rule no longer matches the intended target.
None of this is malicious. It is human behavior in a system where incentives and controls are misaligned. Studies on workplace security show that employees frequently bypass controls that disrupt workflow. Engineers, in particular, are known to adopt workarounds that persist long after the initial constraint has passed.
Rules become targets.
This is exactly what Goodhart’s law describes. Once a measure becomes a target, people optimize around the measure instead of the underlying goal. When the goal becomes “zero rule violations,” teams naturally focus on avoiding the alert rather than addressing the underlying risk. The meter reads green, but the risk is quietly migrating elsewhere.
Concept drift, without the updates
There is also a statistical side to this decay. In machine learning, concept drift happens when the relationship between inputs and outputs changes over time, leaving models trained on old data increasingly inaccurate. Security analytics has seen this for years, especially in malware detection, where fixed signatures lose value as attackers vary their behavior.
Static rules decay the same way. They remain fixed even as the codebase moves on. They treat yesterday’s patterns as if they still describe today’s risks. Over time the mismatch grows.
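The same decay can be sketched numerically. This toy simulation (all distributions and numbers invented for illustration) fixes a classification threshold to an old risk-score distribution, then shifts the distribution and measures how much accuracy the unchanged rule loses:

```python
import random

random.seed(42)

THRESHOLD = 0.5  # the "static rule", fit to yesterday's distribution


def accuracy(risky_mean: float) -> float:
    """Fraction of 1,000 items the fixed rule classifies correctly when
    risky items score around `risky_mean` and benign items around 0.2."""
    correct = 0
    for _ in range(1000):
        risky = random.random() < 0.5
        mean = risky_mean if risky else 0.2
        score = min(1.0, max(0.0, random.gauss(mean, 0.1)))
        flagged = score > THRESHOLD
        correct += flagged == risky
    return correct / 1000


before = accuracy(0.8)  # risky code still looks like the old examples
after = accuracy(0.4)   # the codebase drifted; risky code scores lower
print(before, after)    # accuracy drops sharply after the drift
```

The rule never changed, yet its accuracy collapses, because the relationship between the signal it watches and the risk it was meant to catch has moved.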
How this decay shows up
In most organizations, a static rule follows a predictable life cycle. It starts strong. It gradually loses touch with the system. Eventually it becomes background noise: too risky to delete, too inaccurate to trust, and no longer tied to the real shape of the application.
The story that opened this piece is simply an accelerated version. Instead of months of silent drift, developer adaptation compressed the decay.
Designing controls with decay in mind
The answer is not to eliminate rules but to stop pretending they are permanent. Some useful principles emerge once you accept that controls have a half-life.
Treat rules as hypotheses, not laws. Write them with an expected expiration window. Decide upfront how they will be reviewed, replaced, or retired.
Tie rules to business risk, not just code fragments. Rules that exist only because someone once saw a dangerous line of code are fragile. Rules tied to the purpose of a system, such as a money movement API, last longer and adapt more naturally.
Favor contextual analysis over pattern matching. Tools that understand behavior, data flow, and intent are better suited for environments that shift quickly. Static rules can support this but should not carry the entire burden.
Pay attention to developer experience. The more a control feels like unnecessary drag, the faster it decays. The more it aligns with how teams actually build and ship, the longer it remains meaningful.
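The first principle above, rules as hypotheses with an expiration window, can be sketched as lightweight metadata attached to each rule. The class and field names here are hypothetical, not part of any real scanner's API:

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class RuleHypothesis:
    """Illustrative metadata for a rule treated as a hypothesis,
    with an explicit review window decided upfront."""
    rule_id: str
    business_risk: str       # tie the rule to a system purpose, not a code fragment
    created: date
    review_after: timedelta  # the expected expiration window

    def due_for_review(self, today: date) -> bool:
        return today >= self.created + self.review_after


rule = RuleHypothesis(
    rule_id="idor-money-movement",
    business_risk="unauthorized transfers via the money movement API",
    created=date(2024, 1, 15),
    review_after=timedelta(days=180),
)

print(rule.due_for_review(date(2024, 3, 1)))  # still within its window
print(rule.due_for_review(date(2024, 9, 1)))  # past it: review, replace, or retire
```

A scheduled job that surfaces every rule past its window turns "review, replace, or retire" from an aspiration into a routine.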
Static rules are still useful. They encode lessons learned and help catch known issues. But they are not timeless. In a living codebase they behave like unstable elements: strong at first, then increasingly noisy as the environment shifts.
If we start designing our security programs with the half-life of controls in mind, we will spend less time enforcing outdated patterns and more time keeping our defenses aligned with the real system we are trying to protect.