For Me This Is Tuesday
Glasswing is a good answer. It's just not the whole answer.
Before you read this: Start with the Project Glasswing announcement and the Anthropic red team's Mythos preview post. For technical grounding on what AI-assisted vuln research actually looks like in practice, Nicholas Carlini's Black-Hat LLMs talk at [un]prompted 2026 is worth your time. Once you've absorbed those, the Three Buddy Problem episode on Mythos and Glasswing is the most candid practitioner reaction I've heard, including some useful cold water on the framing.
When Anthropic dropped the Glasswing announcement and the Mythos red team preview, the reactions in security circles landed roughly where they always do. Some practitioners dismissed it: the threat landscape hasn't fundamentally changed, the vulnerabilities being automated weren't new, and the attacker had tools before the model did. Others went the other direction, accepting exponential growth projections across every risk domain as license to argue for infinite spend against an invincible adversary.
"For me this is Tuesday."
I heard a version of this from a defender shortly after the announcement. Proud, confident, self-assured. And they're not entirely wrong. The threat landscape hasn't changed in kind. For defenders already guarding against well-resourced adversaries, the capabilities Mythos demonstrates were present in human hands well before GPT-3.5.
But in my experience responding to enterprise destructive attacks, that level of certainty about an organization's own defenses was almost a guarantee that we were about to find severe compromises or deficiencies. Defenders who are actually contending with their real environment tend to be humble. They know their specific blind spots. They can name the gaps they haven't closed yet.
The "this is Tuesday" defender is right about the vulnerabilities. They're wrong about the time.
What Glasswing is actually solving
Glasswing is a coordinated effort to use Mythos-class AI, autonomously and at scale, to find and patch vulnerabilities in critical software before adversaries can exploit them, including codebases that have survived decades of human review and millions of automated tests. A 27-year-old OpenBSD vulnerability. A 16-year-old FFmpeg flaw that automated tooling had hit five million times without catching. Linux kernel privilege escalation via chained zero-days. These are real findings, and patching them before adversaries exploit them is unambiguously good.
The implicit theory of defense is: find and fix vulnerabilities faster than attackers can weaponize them, and defenders win. That logic is sound at the software layer. The problem is it addresses only one leg of the race, and not the leg that's currently losing fastest.
The actual new thing is velocity at the operational layer
AI-enabled attack chains don't primarily create new vulnerability classes. What they compress is the interval between access and impact, and that interval was already collapsing before Mythos.
- 29 minutes: Average eCrime breakout time in 2025 (CrowdStrike). Fastest observed: 27 seconds.
- 5 days: Median intrusion-to-ransomware in 2025, down from 9 days the year before (Sophos).
- 89%: Year-over-year increase in AI-enabled adversary operations (CrowdStrike 2026 GTR).
- ~70 minutes: Initial infection to enterprise-wide ransomware deployment in one documented case (M-Trends 2026).
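To make the arithmetic concrete, here's a toy response-budget calculation. The breakout figures come from the reports above; the detection, triage, and containment numbers are illustrative assumptions, not measurements from any cited source.

```python
# Toy response-budget calculation against published breakout times.
# Pipeline stage durations below are illustrative assumptions, not
# figures from CrowdStrike, Sophos, or Mandiant.

BREAKOUT_MIN = 29           # average eCrime breakout time, minutes (CrowdStrike)
FASTEST_BREAKOUT_SEC = 27   # fastest observed breakout, seconds

# Hypothetical defender pipeline, in minutes:
alert_latency = 5           # detection fires after initial access
triage = 20                 # analyst validates the alert
containment = 15            # isolation action approved and executed

total_response = alert_latency + triage + containment
deficit = total_response - BREAKOUT_MIN

print(f"response pipeline: {total_response} min; average breakout: {BREAKOUT_MIN} min")
print(f"defender deficit against the average case: {deficit} min")
# Against the fastest observed breakout (27 seconds), this pipeline
# never had a chance regardless of how the stages are tuned.
```

Even with generous assumptions, a human-gated pipeline loses to the average breakout, which is the point of the numbers above.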
Microsoft's RSAC 2026 briefing documented AI embedded across the full attack lifecycle: reconnaissance, credential-lure generation, deepfake-assisted initial access, automated persistence, and in some cases automated ransom negotiation. The threat intelligence loop was already too slow for the fastest attackers. AI acceleration doesn't break a healthy loop. It exposes one that was already broken.
Patching the OpenBSD vulnerability is necessary. It does nothing about the attacker who has already achieved initial access and is operating in your environment faster than your SOC can triage an alert.
The patch is also a signal
There's a tension in the Glasswing framing worth naming. Mythos demonstrably works on source code; autonomous exploitation of compiled binaries without source access remains a harder, unsolved problem, and that's a real constraint on the threat model. But that constraint understates something practitioners who've done patch diffing will recognize immediately: the patch release is itself a signal. The moment a vendor ships a fix, an attacker doesn't need the original source. They need the diff. Reverse engineering what a patch corrected and working backward to the exploitable pre-patch state has been standard offensive tradecraft for years. Mythos-class capability on the offensive side compresses the window between patch release and working exploit.
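A minimal illustration of why the patch is the signal: diffing pre- and post-patch source immediately localizes the fixed defect. The C snippets and the `DST_MAX` bound below are entirely hypothetical, not drawn from any real Glasswing patch.

```python
# Sketch: a unified diff of a hypothetical patch points an attacker
# straight at the fixed defect. Snippets are invented for illustration.
import difflib

pre_patch = """\
int copy_field(char *dst, const char *src, size_t len) {
    memcpy(dst, src, len);
    return 0;
}
"""

post_patch = """\
int copy_field(char *dst, const char *src, size_t len) {
    if (len > DST_MAX)  /* the fix: a bounds check */
        return -1;
    memcpy(dst, src, len);
    return 0;
}
"""

diff = list(difflib.unified_diff(
    pre_patch.splitlines(), post_patch.splitlines(),
    fromfile="pre-patch", tofile="post-patch", lineterm="",
))
for line in diff:
    print(line)
# The added lines reveal the missing bounds check; working backward
# from here to the exploitable pre-patch state is the standard move.
```

Real-world patch diffing operates on binaries with tools built for the job, but the information asymmetry is the same: the diff tells the attacker exactly where to look.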
Defenders who want to benefit from Glasswing need to treat the resulting patches differently from routine Patch Tuesday updates. The vulnerability disclosure and the exploitation window now potentially overlap. Organizations should verify they have the internal capability to apply Glasswing-sourced patches on an emergency cadence, independent of normal change management cycles. If you can't move faster than an attacker can read a diff, the defensive advantage Glasswing promises doesn't fully materialize.
The gap Glasswing doesn't address
Glasswing represents a genuine coordination model: industry, government, and open-source maintainers aligned around a shared defensive capability. That structure is exactly right. What doesn't yet exist is anything like it at the operational layer. AI-enabled detection and response that can match the speed of AI-enabled attack chains, with coordinated accountability baked in.
What I'd actually want to exist, and largely doesn't yet, is a structural separation between the organizations defending you and the organizations stress-testing that defense. An AI-enabled response capability that can take autonomous action at machine speed needs to be held accountable by something that can attack at the same speed. A vendor assessing its own detection coverage is a conflict of interest at the worst possible moment. That accountability structure has to be designed in, not discovered after an incident.
Most organizations aren't close to this. The harder problem upstream of tooling is decision authority. Tactical containment decisions that currently route through change advisory boards at 2am will lose a race against a 27-second breakout. The defenders who navigate the next phase won't just have better software. They'll have worked out how to delegate consequential decisions at machine speed to people who are empowered to own the outcomes.
Why the FUD framing is also wrong
Accepting exponential projections across every risk domain and using them to justify infinite spend is the mirror image of "this is Tuesday." Both guarantee the status quo. Leaders who receive ungrounded threat assessments will rationally defer the decisions we're asking them to make until something more actionable appears. Our credibility as advisors depends on giving specific, bounded risk guidance, not gesturing at a scary horizon.
There's also a structural problem neither framing addresses: cyber attacks still operate in a near-consequence-free environment for most threat actors. In physical space we aren't protected primarily through hardening. We're protected because people who want to harm us have to weigh the cost of being caught. Public policy investment in using the same AI capabilities to expose threat actors to legal consequences would do more systemic good than any amount of private defensive spend. That's a long game, but it's the honest frame for why defenders are running a fundamentally asymmetric race.
What a defensible posture actually requires
Glasswing is a serious effort by serious people and it deserves a serious response, which means neither dismissal nor panic. The practitioners I trust most share a common intuition: security bugs are dense, not sparse. The more you look, the more you find. The right design assumption is that bugs are present, lateral movement pathways exist, and your architecture needs to limit blast radius accordingly. Zero-trust segmentation is exactly right for this environment, not because it prevents compromise, but because it makes the compromise slower and more detectable.
The harder work is the operational and organizational layer Glasswing doesn't address. The organizations that come out ahead won't just have better patch cadence. They'll have worked through what it means to delegate real authority at real speed and built the accountability structures to match.
References:
- Anthropic, "Project Glasswing," anthropic.com, April 2026.
- Anthropic, "Assessing Claude Mythos Preview's Cybersecurity Capabilities," red.anthropic.com, April 7, 2026.
- CrowdStrike, "2026 Global Threat Report," February 24, 2026.
- Mandiant, "M-Trends 2026 Report," March 2026.
- Microsoft Security Blog, "Threat Actor Abuse of AI Accelerates," RSAC 2026, April 2, 2026.
- Sophos, "The State of Ransomware 2025."
- Verizon, "2025 Data Breach Investigations Report."
