Posts

Copy Fail / Dirty Frag: Learning the Lessons of Tomorrow Today

TL;DR: The past week brought an AI-empowered security disruption built on capabilities that have already been surpassed. Reflect on your Copy Fail and Dirty Frag response while it's fresh. Flag every extraordinary effort, every gap. Design tomorrow's response like you'll need to do this every day. You will. Copy Fail (CVE-2026-31431) is a Linux kernel local privilege escalation: it takes an unprivileged local user to root, immediately, on all major distributions. CISA added it to its Known Exploited Vulnerabilities (KEV) catalog two days after disclosure, the agency's clearest signal that a vulnerability is being actively exploited and needs immediate attention, with a May 15 federal remediation deadline. Before that deadline closed, Dirty Frag dropped: a chained exploit (CVE-2026-43284, CVE-2026-43500) extending the same bug class, bypassing the Copy Fail mitigation entirely, with a public PoC and no patch at disclosure. Same capability. Not on the KEV catalog. Both were found using AI-assisted r...

Copy Fail: What Detection Engineers Actually Need to Know

TL;DR: Your logging probably misses this one. Here's what to hunt and why getting to a real alert is harder than it should be. What the exploit actually does: Copy Fail (CVE-2026-31431) is a logic flaw in authencesn, a kernel AEAD wrapper used by IPsec. The exploit binds an AF_ALG socket (the kernel's userspace crypto interface) to authencesn(hmac(sha256),cbc(aes)), uses splice() (a syscall that moves file data between descriptors without copying) to feed the kernel's in-memory copy of a setuid binary into the crypto scatterlist, and triggers a decryption operation. Setuid binaries run as root regardless of who calls them; su and sudo are the common targets. A bug in authencesn writes 4 attacker-controlled bytes past the intended output boundary, landing in those in-memory pages. recvmsg() returns an error because the HMAC fails, but the write already happened. The exploit repeats this for each chunk of shellcode, then calls execve("/usr/bin/su"). T...

Intelligence Scarcity: Underserved Cybersecurity Problems for AI Innovators

TL;DR: AI investment in cybersecurity is crowding into triage, automation, and agentic response, all of which assume a foundation most enterprise environments don't have. The foundational problems that have resisted solution for 30 years are exactly where AI changes the economics, and the enterprises that need them are already your customers. As an advisor, part of what I do is help security organizations charter a vision of future capability: identifying what's worth building, what's worth buying, and what the market hasn't gotten around to selling yet. That last category is where this post lives, a practitioner's account of real problems, in real enterprise environments, that AI is well-positioned to solve and that nobody seems to be pitching in their visions of the future. The thesis is in the title. Intelligence scarcity was the actual blocker: not the security logic, which has been und...

Difficult Conversations: Your Detection Coverage Map Is a Lie

TL;DR: Coverage maps derived from detection content libraries (or worse, vendor assertions) will never answer the questions stakeholders are actually asking. Adversary emulation and simulation are the answer people are looking for but aren't asking for. An application owner comes to you. They're responsible for a critical business system, they have a compliance discussion coming up, and they want a coverage map showing which MITRE ATT&CK techniques your detection program covers for their application. They've done their homework. They're not asking for a rubber stamp. They want to know if you'll actually catch something targeting their system. If you have a coverage map, it's already broken. If you don't, building one won't give them what they're asking for. Either way, the green squares won't answer the question. This conversation is about to get difficult, because the language of our industry equates security monitoring maturity with ...

Security operations: An infinite problem space isn't a staffing problem

TL;DR: If you're building for everyone, you're satisfying no one. Your detection program has infinite scope because nobody defined who it actually serves. Every hire, tool, and process layer added before answering that question accelerates the problem. Here is what your detection program is being asked to do simultaneously:

Identify confirmed active threats with enough fidelity that incident response can act immediately
Maintain a long-horizon behavioral corpus for insider risk investigation
Demonstrate control coverage for every framework your compliance team is measured against
Generate high-recall signal that gives threat hunters a starting point for hypothesis-driven investigation
Map coverage to business risk so owners can answer "are we protected" in board language
Confirm within 24 hours that any TTP mentioned in the news is either covered or in queue
Intersect detection c...

The Honeymoon Rate: AI Echoing the Dot-Com Boom/Bust

TL;DR: AI is real. Current AI pricing is not. The technology is new. The business model dynamics are not. Adopt aggressively, commit cautiously, and build everything to survive a provider change, a price correction, or the startup you depend on disappearing. We've seen this movie before. In my conversations with enterprise leaders over the past year, I keep seeing two failure modes. Smart people, under board pressure to show AI adoption, are making fast commitments on unstable ground. They sign contracts, build dependencies, and defer the hard questions about what happens when the pricing changes. The pressure to show progress is producing decisions optimized for the next board deck, not for the next five years. Other organizations demand FedRAMP authorization from two-year-old startups, five-year support commitments from providers that won't be profitable for four, and compliance guarantees against regulations that haven...

Overmatch, Not Obsolescence: How to Think About AI and the Security Operations Fight

TL;DR: AI gives attackers near-term advantage. The response is containment architecture, not capability matching. Long-term the economics favor defenders. Getting there is the problem. There are two ways to be wrong about AI and security. The first is panic: AI makes defenders obsolete, the game is over, buy something. The second is cheerleading: AI is the great equalizer, defenders finally have the tools to win. Both framings share the same flaw. They treat this as a question about technology instead of a question about competitive advantage in a specific operational environment, at a specific moment in time. The military has a cleaner vocabulary for this. Technological overmatch describes a condition where one capability dominates another in a given context. It is not the same as obsolescence. A weapon system isn't obsolete until it can no longer generate effects on the enemy. Until that ...