Monday, April 20, 2026

Overmatch, Not Obsolescence: How to Think About AI and the Security Operations Fight


TL;DR: AI gives attackers near-term advantage. The response is containment architecture, not capability matching. Long-term the economics favor defenders. Getting there is the problem.


There are two ways to be wrong about AI and security. The first is panic: AI makes defenders obsolete, the game is over, buy something. The second is cheerleading: AI is the great equalizer, defenders finally have the tools to win. Both framings share the same flaw. They treat this as a question about technology instead of a question about competitive advantage in a specific operational environment, at a specific moment in time.

The military has a cleaner vocabulary for this. Technological overmatch describes a condition where one capability dominates another in a given context. It is not the same as obsolescence. A weapon system isn't obsolete until it can no longer generate effects on the enemy. Until that point, it may be outmatched in certain conditions while retaining decisive advantage in others. GPS jamming doesn't make aircraft obsolete. It degrades a capability in a specific environment and forces adaptation. The aircraft still flies.

Defenders are not obsolete. In specific conditions, right now, they are outmatched. Those are different problems requiring different responses.


Security has always been a time equation

In 2002, D.S. Herrmann formalized what practitioners already knew: a system is secure only if the time required to breach it exceeds the time required to detect and respond. Written as P > D+R, the model strips the problem to its variables: protection time P, detection time D, response time R. You win by making P large, D+R small, or both.

SOCs have been measuring this for years: mean time to detect, mean time to respond and contain. The distribution isn't normal, and that matters. Median performance tells you about the routine. P95 tells you about the stress cases. Maximum tells you about the incidents that actually end careers. Executives fixate on the max, and they're right to: that's where you actually lose.

Boyd's OODA loop maps directly onto this. The side that cycles through Observe, Orient, Decide, Act faster than the other forces the slower side into a permanent reactive posture. Boyd developed the framework to explain air combat; it applies without modification to a SOC managing an active intrusion. The attacker observes your environment, orients to your defenses, decides on a path, and acts. You observe their activity, orient your analysts, decide on a response, and act. Whoever completes that cycle faster controls the tempo.

AI compresses attacker cycle time. That is the actual problem. Not obsolescence. The clock runs faster, and the current D+R side of the equation wasn't built for the new speed.


Two types of attacker, one collapsing cost curve

Threat actors don't form a single population, and the distinctions matter because the risk calculus is different.

Commodity attackers — ransomware crews, initial access brokers, smash-and-grab operations — optimize for volume and speed. They burn through targets quickly, accept high noise levels, and profit from scale. A technique that works on 3% of targets is viable if you can run it against a hundred thousand of them before defenders respond.

APT actors operate on different economics. Nation-state groups and sophisticated criminal organizations running long-term campaigns have historically imposed their own discipline. Burning a novel exploit or a carefully developed tradecraft chain has replacement cost: time, money, and exposure. A zero-day used is a zero-day burned. That cost created behavioral constraints. Low-and-slow isn't just preference; for a capable actor protecting a valuable toolkit, it was the economically rational choice.

AI collapses that cost curve. When generating novel attack variants, rewriting signatures, and chaining techniques becomes cheap, the economic signal that enforced APT restraint disappears. The actor who previously ran a six-month campaign with careful operational security now has less reason for that care, because the replacement pipeline is cheaper. At the same time, commodity actors get faster, louder, and more capable.

The behavioral gap between disciplined APT and noisy criminal narrows. Infection rate accelerates across both populations simultaneously. More actors, lower barrier, faster iteration. The exposure window between a technique's development and defenders building detection for it shrinks. The window between a vulnerability's disclosure and its weaponization shrinks faster. Both trends are moving in the wrong direction for defenders, and they are moving together.

Risk is threat times vulnerability times impact. AI is moving the threat variable for both populations at the same time.
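A toy version of that calculation, with made-up scores (every number below is a hypothetical chosen to show the direction of the shift, not an estimate of real likelihoods):

```python
# Toy risk model: risk = threat * vulnerability * impact, each scored 0-1.
def risk(threat: float, vulnerability: float, impact: float) -> float:
    return threat * vulnerability * impact

# Two hypothetical attacker populations against the same environment.
populations = {
    "commodity": {"threat": 0.4, "vulnerability": 0.6, "impact": 0.5},
    "apt":       {"threat": 0.2, "vulnerability": 0.6, "impact": 0.9},
}

AI_THREAT_UPLIFT = 2.0  # assumed multiplier; the point is it applies to both

for name, p in populations.items():
    before = risk(p["threat"], p["vulnerability"], p["impact"])
    after = risk(min(1.0, p["threat"] * AI_THREAT_UPLIFT),
                 p["vulnerability"], p["impact"])
    print(f"{name}: {before:.3f} -> {after:.3f}")
```

The vulnerability and impact terms are unchanged; only the threat term moves, and it moves for both rows at once. That is the simultaneous shift the section describes.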


Herd immunity is already failing

Patches are inoculation. Unpatched systems are exposed population. Herd immunity in public health requires sufficient vaccination coverage to prevent an outbreak from sustaining itself. The analogy holds in vulnerability management, with one important difference: the threshold for digital herd immunity is effectively 100%. A single exposed host in the right position can be the pivot point for a network-wide compromise.

AI-assisted exploitation lowers the cost of finding and weaponizing the stragglers: the unpatched hosts, the end-of-life systems, the forgotten internet-facing instances. It doesn't require novel tradecraft. It requires finding the exposed population and moving faster than the defender's detection cycle. The stragglers have always existed. What changes is how cheaply and quickly an attacker can enumerate and exploit them at scale.

Nothing is ever truly obsolete on an unpatched surface.

Cybersecurity has an unhealthy fixation on what's new. Detection engineering optimizes for known patterns. Threat intelligence leads with emerging techniques. Vendors sell novelty. That fixation creates a blind spot that unsophisticated actors — and AI-assisted chains — exploit without effort. A CVE from 2017, rewritten to look unfamiliar, still works on an unpatched host. The attacker doesn't need sophistication. They need to be faster than your patching cycle and find you before your detection logic has seen the variant.

The more capable version combines legacy techniques with novel ones deliberately. AI lowers the cost of that combination. Defenders aren't looking for old and new chained together. Detection logic tuned to known patterns misses the hybrid. The fixation on novelty is the exploit.

Herd immunity as a strategic posture will fail. The exposed population is too large. Patching cycles are too slow in aggregate. Accelerating attacker economics widen the exploitation window faster than ecosystem-wide patch adoption can close it. Outbreak containment is the more honest priority. You cannot reliably protect the herd. You can design enclaves that survive an outbreak.

The threat model most organizations are running is wrong.

The standard framing is crown jewels protection: identify the most valuable assets, build concentric defenses around them, prioritize access controls. It's a reasonable starting point and an incomplete one, because destructive attackers don't always want the jewels. They want leverage. They want the thing whose absence forces a public statement, halts revenue, or breaks trust with customers.

In a retail organization, that's rarely the intellectual property. It's the payment processing infrastructure. The inventory management system. The payroll platform. These aren't the assets that show up at the top of a data classification exercise. They're the ones that, if unavailable for 72 hours, bring operations to their knees.

Business continuity experts already understand this. A mature retail operation runs blackout drills: can stores process payments when the network goes down? The answer has to be yes, by design, before the incident — not improvised during one. Security organizations need the equivalent. Not just "where are the crown jewels" but "what does the operation look like when the thing we didn't think mattered stops working."

Enclave design is the architectural response: isolated, continuity-capable segments of critical operations built to limit blast radius and preserve the ability to function through a compromise. It doesn't prevent intrusion. It changes what an attacker can accomplish once inside, and it preserves options for defenders when the containment decision has to be made under pressure.


The threat intel trap

Threat intelligence is supposed to shorten detection time by giving defenders advance knowledge of attacker tools, techniques, and infrastructure. The theory is sound. The operational reality is more complicated.

Most threat intelligence originates from incident response engagements. IR teams develop detailed knowledge of attacker behavior, tooling, and indicators during active investigations. That knowledge eventually makes its way to threat intelligence products, sharing platforms, and public reporting. In my experience on active IR engagements, the lag between what the response team knows and what reaches public reporting is typically 14 to 60 days. The most actionable context — the full attack chain, the victim environment details that explain why specific techniques were used — is frequently locked behind attorney-client privilege. What reaches defenders publicly is often a stripped indicator list without the context that would make it operationally useful. Many defenders consuming that list don't realize it originated in an IR engagement at all, let alone one that concluded weeks ago.

Indicators of compromise have real value, but that value is narrow and time-bounded. An IP address associated with C2 infrastructure is useful during the window when the threat actor is using it. After that window closes, the indicator becomes noise: a hit on a decommissioned server that now hosts something benign, or an address reused by a different actor entirely. Matching IOCs without understanding the attack chain and the relevant time window generates false positives. False positives are not neutral. They consume analyst capacity — the same capacity needed to investigate real detections.
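One way to operationalize the time-bounded part is to refuse any IOC match outside the indicator's active window. A minimal sketch, assuming a simple in-memory store and an invented 30-day validity buffer (the field names and window length are not any vendor's schema):

```python
from datetime import datetime, timedelta, timezone

IOC_VALIDITY = timedelta(days=30)  # assumed grace period after last sighting

# Hypothetical C2 indicator, using a documentation-range IP as a stand-in.
iocs = {
    "203.0.113.7": {
        "first_seen": datetime(2026, 1, 5, tzinfo=timezone.utc),
        "last_seen": datetime(2026, 1, 20, tzinfo=timezone.utc),
    },
}

def match(indicator: str, observed_at: datetime) -> bool:
    """True only if the hit falls inside the indicator's active window."""
    entry = iocs.get(indicator)
    if entry is None:
        return False
    window_end = entry["last_seen"] + IOC_VALIDITY
    return entry["first_seen"] <= observed_at <= window_end

# A hit during the campaign is signal; a hit months later is probably noise.
print(match("203.0.113.7", datetime(2026, 1, 15, tzinfo=timezone.utc)))  # True
print(match("203.0.113.7", datetime(2026, 6, 1, tzinfo=timezone.utc)))   # False
```

The June hit is exactly the decommissioned-server case from the paragraph above: same indicator, wrong window, and matching it anyway would burn analyst capacity on noise.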

AI-generated threat intelligence accelerates this problem. Volume increases. Vetting quality decreases. The flood of low-confidence indicators creates the environment where high-confidence signals get buried. More intelligence, operationalized poorly, contributes negative value at scale. The defenders who navigate this best treat threat intel as a quality problem, not a coverage problem. Fewer indicators, richer context, matched to the threat window — that is a detection program. Everything else is noise with a feed subscription attached to it.


Detection first, containment by design, response last

The sequence matters because the failure modes are different at each stage.

Detection is where AI assistance generates the most near-term value. Alert creation, detection rule development, automated triage of high-volume low-fidelity alerts — these are tasks where the cost of an AI error is bounded and recoverable. A false positive costs analyst time. A missed detection costs more, but detection is a probabilistic game; no program catches everything, and the goal is improving signal-to-noise over time. Removing humans from the parts of detection work they perform worst — repetitive triage, pattern matching across high-volume logs — is achievable now and worth pursuing.

Containment is a different problem. Containment decisions are business decisions. Isolating a compromised host sounds like a technical action. In practice it may mean taking a production system offline, interrupting a business process, or triggering a customer-visible outage. The tradeoffs — how much operational disruption is acceptable to limit a particular blast radius — require business context that most automated systems don't have and that most organizations haven't codified in a form that could be communicated to one. Until organizations develop frameworks for expressing their risk tolerance and operational priorities in terms an automated system can act on, human judgment belongs in the containment decision loop.
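What "codified in a form an automated system can act on" might look like, in the smallest possible form: a policy gate that auto-isolates only where the blast radius is known to be tolerable and escalates everything else. The tier names and thresholds here are invented for illustration; a real policy would encode far more business context.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    business_tier: int      # assumed scale: 0 = lab box, 3 = revenue-critical
    customer_facing: bool

AUTO_ISOLATE_MAX_TIER = 1   # hypothetical policy: only low-tier hosts auto-isolate

def containment_action(host: Host) -> str:
    # Isolation is automatic only where the disruption cost is pre-approved.
    if host.business_tier <= AUTO_ISOLATE_MAX_TIER and not host.customer_facing:
        return "auto-isolate"
    # Anything with real blast radius goes to a human with business context.
    return "escalate-to-human"

print(containment_action(Host("dev-sandbox-14", business_tier=0, customer_facing=False)))
print(containment_action(Host("payments-gw-02", business_tier=3, customer_facing=True)))
```

The design choice worth noting: the policy expresses risk tolerance, not detection logic. The automation decides how confident the detection is; the policy decides how much disruption the business has agreed to accept without a human in the loop.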

Response and recovery sit further along the same continuum, with higher stakes per error. Don't automate what you haven't learned to do well manually.

The enclave design principle connects here. Containment is easier when the architecture supports it — when critical functions are isolated enough that taking one segment offline doesn't cascade. Organizations that have done the continuity design work before an incident find their containment options are broader. The ones that haven't discover their options during the incident, which is the worst time for that assessment.


The transition window is the problem

The long-term economics of this fight probably favor defenders, modestly. Defensive advantages compound at scale in ways attacker advantages don't. Threat intelligence shared across an industry is harder to evade than intelligence held by a single organization. Detection logic developed against a technique and distributed broadly limits the return on that technique investment. Collective patch pressure, coordinated disclosure, and shared defensive tooling all benefit from network effects attackers can't easily replicate.

None of that is available right now at the fidelity required to close the gap AI is opening.

The transition window is the period between attacker tooling maturing and defender tooling catching up. During that window, attacker economics are ahead of the curve. Tradecraft is cheaper. Exploitation is faster. Detection logic is behind. Threat intel quality is degrading. Automated response isn't ready.

The organizations that come through this window intact won't be the ones that matched attacker capabilities feature for feature. They'll be the ones that made the unsexy investments: aggressive patch cycles, enclave architecture, disciplined threat intel programs, and detection programs built for quality over coverage. Those investments don't generate press releases. They generate survivability.

Outbreak containment is not a permanent posture. It is what you do to stay in the fight long enough for the structural advantages to materialize.


References

  • Herrmann, D.S. A Practical Guide to Security Engineering and Information Assurance. Auerbach Publications, 2002. (Time-Based Security model, P > D+R)
  • Boyd, John R. "Patterns of Conflict." Unpublished briefing, 1986. (OODA loop)
  • Merritt, James. "Are We Ever Truly Obsolete?" Readiness Nation, 2024. (Technological overmatch vs. obsolescence framework)
  • MITRE ATT&CK Framework. https://attack.mitre.org (TTP taxonomy referenced implicitly throughout)
  • Verizon Data Breach Investigations Report, annual. (Threat actor population data and dwell time statistics)
