Posts

Intelligence Scarcity: Underserved Cybersecurity Problems for AI Innovators

TL;DR: AI investment in cybersecurity is crowding into triage, automation, and agentic response, all of which assume a foundation most enterprise environments don't have. The foundational problems that have resisted solution for 30 years are exactly where AI changes the economics, and the enterprises that need them are already your customers. As an advisor, part of what I do is help security organizations charter a vision of future capability: identifying what's worth building, what's worth buying, and what the market hasn't gotten around to selling yet. That last category is where this post lives: a practitioner's account of real problems, in real enterprise environments, that AI is well-positioned to solve and that nobody seems to be pitching in their visions of the future. The thesis is in the title. Intelligence scarcity was the actual blocker: not the security logic, which has been und...

Difficult Conversations: Your Detection Coverage Map Is a Lie

TL;DR: Coverage maps derived from detection content libraries (or worse, vendor assertions) will never answer the questions stakeholders are actually asking. Adversary emulation and simulation are the answer people are looking for but aren't asking for. An application owner comes to you. They're responsible for a critical business system, they have a compliance discussion coming up, and they want a coverage map showing which MITRE ATT&CK techniques your detection program covers for their application. They've done their homework. They're not asking for a rubber stamp. They want to know if you'll actually catch something targeting their system. If you have a coverage map, it's already broken. If you don't, building one won't give them what they're asking for. Either way, the green squares won't answer the question. This conversation is about to get difficult, because the lang...

Security operations: An infinite problem space isn't a staffing problem

TL;DR: If you're building for everyone, you're satisfying no one. Your detection program has infinite scope because nobody defined who it actually serves. Every hire, tool, and process layer added before answering that question accelerates the problem. Here is what your detection program is being asked to do simultaneously:

- Identify confirmed active threats with enough fidelity that incident response can act immediately
- Maintain a long-horizon behavioral corpus for insider risk investigation
- Demonstrate control coverage for every framework your compliance team is measured against
- Generate high-recall signal that gives threat hunters a starting point for hypothesis-driven investigation
- Map coverage to business risk so owners can answer "are we protected" in board language
- Confirm within 24 hours that any TTP mentioned in the news is either covered or in queue
- Intersect detection c...

The Honeymoon Rate: AI Echoing the Dot-Com Boom/Bust

TL;DR: AI is real. Current AI pricing is not. The technology is new. The business model dynamics are not. Adopt aggressively, commit cautiously, and build everything to survive a provider change, a price correction, or the startup you depend on disappearing. We've seen this movie before. In my conversations with enterprise leaders over the past year, I keep seeing two failure modes. Smart people, under board pressure to show AI adoption, are making fast commitments on unstable ground. They sign contracts, build dependencies, and defer the hard questions about what happens when the pricing changes. The pressure to show progress is producing decisions optimized for the next board deck, not for the next five years. Other organizations demand FedRAMP authorization from two-year-old startups, five-year support commitments from providers that won't be profitable for four, and compliance guarantees against regulations that haven...

Overmatch, Not Obsolescence: How to Think About AI and the Security Operations Fight

TL;DR: AI gives attackers near-term advantage. The response is containment architecture, not capability matching. Long-term the economics favor defenders. Getting there is the problem. There are two ways to be wrong about AI and security. The first is panic: AI makes defenders obsolete, the game is over, buy something. The second is cheerleading: AI is the great equalizer, defenders finally have the tools to win. Both framings share the same flaw. They treat this as a question about technology instead of a question about competitive advantage in a specific operational environment, at a specific moment in time. The military has a cleaner vocabulary for this. Technological overmatch describes a condition where one capability dominates another in a given context. It is not the same as obsolescence. A weapon system isn't obsolete until it can no longer generate effects on the enemy. Until that ...

How Birds Live to Talk About It

Most mornings I walk my kids to school. Ten minutes, same route, same trees, same stretch of sky. We talk about everything, but mostly the plants and animals we see along the way. The birds became part of that conversation several years ago — my kids have been to Craig Caudill's nature observation classes with me, so we're not starting from scratch on the walk. We're comparing notes. It's ten minutes where nobody is looking at a screen. I've come to think of it as the most useful part of the day. The forest isn't quiet. You are the noise. When most people walk into the woods, they notice how quiet it is. That silence feels like the natural state. It isn't. What you're hearing is a broadcast interruption. The birds went quiet because you showed up. Jon Young, in What the Robin Knows, describes what he calls the "language of the birds" — five distinct vocalizations that function less like music and more l...

Autonomic Security: Stop Waiting for AI to Save Your SOC

The Adversarial Podcast's RSA episode is worth your time — CISOs talking candidly about autonomic security and where the industry needs to go. It got me thinking: if our CISOs are ready to have this conversation, how do we get our SOCs ready to meet the challenge? This is my take on that question. What autonomic security actually means: your autonomic nervous system keeps you alive without asking permission. Heart rate, immune response, reflexes — they don't wait for a conscious decision. They execute on signal. Autonomic security is the same idea: security responses that execute on policy without requiring a human decision at the moment they fire. This is not the same as autonomous security — AI making novel judgment calls in novel situations. Autonomic security executes well-defined responses to well-understood conditions. The question isn't whether the AI is ready. It's whether your organization is str...