Sunday, April 12, 2026

For Me This Is Tuesday

 


Glasswing is a good answer. It's just not the whole answer.


Before you read this: Start with the Project Glasswing announcement and the Anthropic red team's Mythos preview post. For technical grounding on what AI-assisted vuln research actually looks like in practice, Nicholas Carlini's Black-Hat LLMs talk at [un]prompted 2026 is worth your time. Once you've absorbed those, the Three Buddy Problem episode on Mythos and Glasswing is the most candid practitioner reaction I've heard, including some useful cold water on the framing.


When Anthropic dropped the Glasswing announcement and the Mythos red team preview, the reactions in security circles landed roughly where they always do. Some practitioners dismissed it: the threat landscape hasn't fundamentally changed, the vulnerabilities being automated weren't new, and attackers had these tools before the model did. Others went the other direction, accepting exponential growth projections across every risk domain as license to argue for infinite spend against an invincible adversary.

"For me this is Tuesday."

I heard a version of this from a defender shortly after the announcement. Proud, confident, self-assured. And they're not entirely wrong. The threat landscape hasn't changed in kind. For defenders already guarding against well-resourced adversaries, the capabilities Mythos demonstrates were present in human hands well before GPT-3.5.

But in my experience responding to enterprise destructive attacks, that level of certainty about your own defenses was almost a guarantee we were about to find severe compromises or severe deficiencies. Defenders who are actually contending with their real environment tend to be humble. They know their specific blind spots. They can name the gaps they haven't closed yet.

The "this is Tuesday" defender is right about the vulnerabilities. They're wrong about the time.

What Glasswing is actually solving

Glasswing is a coordinated effort to use Mythos-class AI to find and patch vulnerabilities in critical software before adversaries can exploit them, autonomously, at scale, across codebases that have survived decades of human review and millions of automated tests. A 27-year-old OpenBSD vulnerability. A 16-year-old FFmpeg flaw that automated tooling had hit five million times without catching. Linux kernel privilege escalation via chained zero-days. These are real findings, and patching them before adversaries exploit them is unambiguously good.

The implicit theory of defense is: find and fix vulnerabilities faster than attackers can weaponize them, and defenders win. That logic is sound at the software layer. The problem is that it addresses only one leg of the race, and not the leg that's currently losing fastest.

The actual new thing is velocity at the operational layer

AI-enabled attack chains don't primarily create new vulnerability classes. What they compress is the interval between access and impact, and that interval was already collapsing before Mythos.

  • 29 minutes: Average eCrime breakout time in 2025 (CrowdStrike). Fastest observed: 27 seconds.
  • 5 days: Median intrusion-to-ransomware in 2025, down from 9 days the year before (Sophos).
  • 89%: Year-over-year increase in AI-enabled adversary operations (CrowdStrike 2026 GTR).
  • ~70 minutes: Initial infection to enterprise-wide ransomware deployment in one documented case (M-Trends 2026).

Microsoft's RSAC 2026 briefing documented AI embedded across the full attack lifecycle: reconnaissance, credential-lure generation, deepfake-assisted initial access, automated persistence, and in some cases automated ransom negotiation. The threat intelligence loop was already too slow for the fastest attackers. AI acceleration doesn't break a healthy loop. It exposes one that was already broken.

Patching the OpenBSD vulnerability is necessary. It does nothing about the attacker who has already achieved initial access and is operating in your environment faster than your SOC can triage an alert.

The patch is also a signal

There's a tension in the Glasswing framing worth naming. Mythos demonstrably works on source code. Autonomous exploitation of compiled binaries without source access remains a harder, unsolved problem, and that's a real constraint on the threat model. But that framing understates something practitioners who've done patch diffing will recognize immediately: the patch release is itself a signal. The moment a vendor ships a fix, an attacker doesn't need the original source. They need the diff. Reverse engineering what a patch corrected and working backward to the exploitable pre-patch state has been standard offensive tradecraft for years. Mythos-class capability on the offensive side compresses how fast that window gets worked.

Defenders who want to benefit from Glasswing need to treat the resulting patches differently from routine Patch Tuesday updates. The vulnerability disclosure and the exploitation window now potentially overlap. Organizations should verify they have the internal capability to apply Glasswing-sourced patches on an emergency cadence, independent of normal change management cycles. If you can't move faster than an attacker can read a diff, the defensive advantage Glasswing promises doesn't fully materialize.

The gap Glasswing doesn't address

Glasswing represents a genuine coordination model: industry, government, and open-source maintainers aligned around a shared defensive capability. That structure is exactly right. What doesn't yet exist is anything like it at the operational layer: AI-enabled detection and response that can match the speed of AI-enabled attack chains, with coordinated accountability baked in.

What I'd actually want to exist, and largely doesn't yet, is a structural separation between the organizations defending you and the organizations stress-testing that defense. An AI-enabled response capability that can take autonomous action at machine speed needs to be held accountable by something that can attack at the same speed. A vendor assessing its own detection coverage is a conflict of interest at the worst possible moment. That accountability structure has to be designed in, not discovered after an incident.

Most organizations aren't close to this. The harder problem upstream of tooling is decision authority. Tactical containment decisions that currently route through change advisory boards at 2am will lose a race against a 27-second breakout. The defenders who navigate the next phase won't just have better software. They'll have worked out how to delegate consequential decisions at machine speed to people who are empowered to own the outcomes.

Why the FUD framing is also wrong

Accepting exponential projections across every risk domain and using them to justify infinite spend is the mirror image of "this is Tuesday." Both guarantee the status quo. Leaders who receive ungrounded threat assessments will rationally defer the decisions we're asking them to make until something more actionable appears. Our credibility as advisors depends on giving specific, bounded risk guidance, not gesturing at a scary horizon.

There's also a structural problem neither framing addresses: cyber attacks still operate in a near-consequence-free environment for most threat actors. In physical space we aren't protected primarily through hardening. We're protected because people who want to harm us have to weigh the cost of being caught. Public policy investment in using the same AI capabilities to expose threat actors to legal consequences would do more systemic good than any amount of private defensive spend. That's a long game, but it's the honest frame for why defenders are running a fundamentally asymmetric race.

What a defensible posture actually requires

Glasswing is a serious effort by serious people and it deserves a serious response, which means neither dismissal nor panic. The practitioners I trust most share a common intuition: security bugs are dense, not sparse. The more you look, the more you find. The right design assumption is that bugs are present, lateral movement pathways exist, and your architecture needs to limit blast radius accordingly. Zero-trust segmentation is exactly right for this environment, not because it prevents compromise, but because it makes the compromise slower and more detectable.

The harder work is the operational and organizational layer Glasswing doesn't address. The organizations that come out ahead won't just have better patch cadence. They'll have worked through what it means to delegate real authority at real speed and built the accountability structures to match.


References: Anthropic, "Project Glasswing," anthropic.com, April 2026. Anthropic, "Assessing Claude Mythos Preview's Cybersecurity Capabilities," red.anthropic.com, April 7, 2026. CrowdStrike, "2026 Global Threat Report," February 24, 2026. Mandiant, "M-Trends 2026 Report," March 2026. Microsoft Security Blog, "Threat Actor Abuse of AI Accelerates," RSAC 2026, April 2, 2026. Sophos, "The State of Ransomware 2025." Verizon, "2025 Data Breach Investigations Report."

Saturday, April 11, 2026

Lab Notes: Claude Code Session Logs as a Forensic Artifact

TL;DR;

Claude Code logs every agent action locally in structured JSONL transcripts. These are forensically valuable, generally unprotected, and your GRC and detection teams should know they exist.

Background

AI coding agents like Claude Code are becoming common in developer environments. Unlike a chat interface, these tools operate agentically — they execute bash commands, read and write files, and chain tool calls autonomously on behalf of the user. Users authorize this at session start, often broadly, and may not review every action taken.

This creates a non-repudiation problem. The user is responsible for agent actions, but awareness of specific actions may be limited or absent entirely. From a forensic and compliance standpoint that gap matters.

The Artifact

Claude Code writes a complete session transcript for every run to:

~/.claude/projects/<url-encoded-project-path>/sessions/<session-uuid>.jsonl
~/.claude/history.jsonl

Each record contains the timestamp, message type, tool name, exact command executed, full stdout/stderr, working directory, and token usage. This is not a summary — it is a full structured record of every action the agent took.

These logs exist by default. No configuration required.
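The structure makes a quick timeline reconstruction straightforward. Here is a minimal sketch in Python; the field names (timestamp, type, tool_name, command) and the sample records are illustrative assumptions, not a documented schema, so verify them against your own transcripts before relying on this:

```python
import json

def parse_session(lines):
    """Return (timestamp, event_type, detail) tuples in chronological order."""
    events = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed records rather than aborting triage
        events.append((
            rec.get("timestamp", ""),
            rec.get("type", "unknown"),
            rec.get("tool_name") or rec.get("command") or "",
        ))
    return sorted(events)  # sort by timestamp for timeline review

# Hypothetical records illustrating the shape described above.
sample = [
    '{"timestamp": "2026-04-11T09:01:02Z", "type": "tool_use", "tool_name": "Bash", "command": "whoami"}',
    '{"timestamp": "2026-04-11T09:00:58Z", "type": "user"}',
]
timeline = parse_session(sample)
for ts, kind, detail in timeline:
    print(ts, kind, detail)
```

The linked notebook below does the same thing with more polish; this is just the core idea.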

Forensic Value

During triage, these transcripts can establish:

  • What commands were executed, in what order, and with what output
  • Which files were read or modified by the agent
  • Session start/end times and working directories
  • Whether the agent spawned subagents and what they did

The artifact is local, human-readable with basic JSON tooling, and does not require any cooperation from Anthropic or cloud infrastructure to collect.

The Problem

These logs have no integrity protection. There is no append-only mode, no tamper detection, and no access controls beyond standard filesystem permissions. An actor who has compromised a developer workstation can delete or modify them.

Recommendations

For DFIR, GRC and Detection Engineering teams:

  1. Add ~/.claude/projects/ and ~/.claude/history.jsonl to your endpoint forensic triage collections alongside shell history and other user-space artifacts
  2. Audit your AI tool inventory — Claude Code, Copilot, Cursor, and similar tools likely produce analogous artifacts. Verify what each logs and where
  3. Require that commercial and in-house AI agent deployments log agent actions with sufficient detail for post-incident review, and that those logs ship to a protected destination
  4. Baseline a SOC alert for deletion or bulk modification of agent log directories on developer endpoints — the signal-to-noise should be low and the fidelity high
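Recommendation 1 is easy to automate. A minimal collection sketch in Python, run here against a throwaway fixture so it is self-contained; in practice you would point it at each user's real ~/.claude directory and ship the archive to a protected destination:

```python
import pathlib
import tarfile
import tempfile

def collect(claude_home: pathlib.Path, out: pathlib.Path) -> list:
    """Archive the transcript paths under claude_home; return member names."""
    with tarfile.open(out, "w:gz") as tar:
        for rel in ("projects", "history.jsonl"):
            artifact = claude_home / rel
            if artifact.exists():
                tar.add(artifact, arcname=f".claude/{rel}")
    with tarfile.open(out) as tar:
        return tar.getnames()

# Demo fixture mirroring the default ~/.claude layout described above.
fixture = pathlib.Path(tempfile.mkdtemp())
(fixture / "projects" / "demo" / "sessions").mkdir(parents=True)
(fixture / "history.jsonl").write_text('{"type":"user"}\n')
members = collect(fixture, fixture / "claude_artifacts.tar.gz")
print(members)
```

Because the logs have no integrity protection, collecting them off-host early in an engagement matters more than it would for a tamper-evident source.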

Reference

Notebook for parsing Claude Code sessions into a forensic timeline: https://github.com/DFIR-DeRyke/dfir_oneoffs/blob/main/claude_timeline.ipynb

Show and Tell

A simple request in my lab executed 97 commands. The claude_timeline notebook was built to simplify human peer review of machine actions — here's a sample of what that output looks like:


Friday, June 7, 2019

Lab Notes: Persistence and Privilege Elevation using the Powershell Profile

TL;DR;

A recent ESET blog post mentions a persistence technique I wasn't previously aware of that is in use by the Turla APT group. The technique leverages the PowerShell profile to sabotage PowerShell in a way that executes arbitrary code every time PowerShell is launched. Upon testing, I've discovered this technique may also provide a low-and-slow vector to Domain Admin and other privileged admin or service accounts by leveraging common flaws in admin scripts, asset management systems, and enterprise detection and response tools. This post captures my observations working from Matt Nelson's 2014 blog post (apologies to the researcher if there is prior art I'm unaware of at the time of this post).

Privilege Elevation - Local Admin to Sloppy Admin


Setup Requirements:


  1. In my testing, you need local admin rights to create the global profile
    1. $profile.AllUsersAllHosts
    2. AKA C:\Windows\System32\WindowsPowerShell\v1.0\profile.ps1
  2. This does not bypass Execution Policy (check with Get-ExecutionPolicy).
    1. If it's set to AllSigned or Restricted, not only will the code not execute; the end user might notice a suspicious error message reminding them of the execution policy. (By default a Windows 10 endpoint is Restricted.)
  3. A privileged user, or preferably an automated task, that runs PowerShell on the 0wned box with elevated domain privileges is needed. They also need to forget to pass the -NoProfile flag when launching it (which seems like just about everything and everybody in a large enterprise).

Now any code you place in this global profile will be run by any user who launches PowerShell. We just decide what kind of PowerShell script we want our sloppy admin to execute, set our trap, and patiently wait.

As a POC I used 1 line of code: 
Add-Content c:\windows\temp\test1.txt "$(Get-Date) Profile POC Launched by $(whoami)"

Within the hour a friendly enterprise asset management system ran my arbitrary code using SYSTEM, but with a phone call to IT and some trivial social engineering, this could have easily been one of the desktop admins.

Mitigation:

  1. Similar to detecting persistence in the startup folder, if you can audit file writes and modifications to C:\Windows\System32\WindowsPowerShell\v1.0\profile.ps1 you can alert on this in real time. Most userbases will not be making frequent changes to this file, which should leave you with a low-noise, high-fidelity alert.
  2. If you need another reason to preach the gospel of a restrictive PowerShell execution policy, this may be it. Unfortunately, if your admins are already using PowerShell, good luck telling them they can't.
  3. You can also audit to ensure any privileged account executing PowerShell on remote systems always invokes the -NoProfile command line argument.

Persistence 



For persistence, things are much simpler. Aforementioned mitigations 1 and 2 still apply, but the only requirement is the lax execution policy.  Every user should have access to edit their own $profile and any code placed here will be executed anytime PowerShell is launched under that user context.

One Line POC:
Add-Content $profile "Invoke-Item C:\Windows\System32\calc.exe"

For detection, we need to monitor a few additional file locations, but the alert volume should still be manageable:

  • C:\Windows\System32\WindowsPowerShell\v1.0\profile.ps1
  • $Home\[My ]Documents\WindowsPowerShell\Profile.ps1
  • $Home\[My ]Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
  • $PsHome\Profile.ps1
  • $PsHome\Microsoft.PowerShell_profile.ps1
  • $Home\[My ]Documents\PowerShell\Profile.ps1
  • $Home\[My ]Documents\PowerShell\Microsoft.PowerShell_profile.ps1
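File integrity monitoring in your EDR is the natural home for this detection, but the baseline-and-diff logic is simple enough to sketch. A Python illustration, with a temp file standing in for the profile paths listed above so the sketch is self-contained:

```python
import hashlib
import tempfile
from pathlib import Path

def snapshot(paths):
    """Map each path to a sha256 of its contents, or None if absent."""
    state = {}
    for p in paths:
        f = Path(p)
        state[p] = hashlib.sha256(f.read_bytes()).hexdigest() if f.is_file() else None
    return state

def diff(baseline, current):
    """Return paths whose presence or contents changed since the baseline."""
    return [p for p in baseline if baseline[p] != current.get(p)]

# Demo: baseline a stand-in profile path, drop a payload, detect the change.
demo = str(Path(tempfile.mkdtemp()) / "profile.ps1")
baseline = snapshot([demo])  # file absent at baseline time
Path(demo).write_text('Invoke-Item C:\\Windows\\System32\\calc.exe')
changed = diff(baseline, snapshot([demo]))
print(changed)
```

In production you would feed it the real profile paths and run it on a schedule, alerting on any non-empty diff.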


Resources:

  1. Microsoft Documentation On Powershell Profiles https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_profiles?view=powershell-6
  2. Abusing Powershell Profiles https://enigma0x3.net/2014/06/16/abusing-powershell-profiles/
  3. Turla Powershell Usage https://www.welivesecurity.com/2019/05/29/turla-powershell-usage/

Tuesday, February 20, 2018

Asking Questions. A STORY FOR MOST PEOPLE

[The following is an excerpt from The Manufacturer and Builder Volume 0001 Issue 1 (January 1869).  I came across this in my woodworking research, and want to preserve it here because it's centrally relevant across all of the domains I'm interested in.]

Once there was a young man whose name was John.  That is to say, not knowing what his name was, and taking all the chances.  I think it was probably John.  For the same reason I take the liberty of presuming that his other name was Smith.  Having previously been a boy, like the generality of young men. John had learned during that period an art which was almost the only thing that distinguished him from other Johns.  He knew how to ask questions; and the object of this brief sketch of his life is to show how he acquired this accomplishment, and what came of it.

He used to say that his father, who was a farmer gave him the first lessons in asking questions; and putting together what his father told him at different times, he compiled a set of rules on the subject which he showed to a Friend the other day, neatly written on the flyleaves of his pocket-diary.  They were headed,

The Art of Asking Questions

  1. Every man knows something that I do not know.
  2. Every thing, living or inanimate, has something to tell me that I do not know.
  3. It is better to ask questions of things than of men; but its better to ask men than not to ask at all.
  4. Lazy questions, impertinent questions, and conceited questions are the greatest of nuisances.  They are like conundrums without any answers - they tend to make men dislike all questions; and when asked of nature, they get no response from her whatever.
  5. Asking questions is of no use, if a man forgets the replies.
  6. People like to be asked, in the proper time and manner, concerning matters which they understand.  When they refuse to satisfy such inquiries, it is generally because the matter is not their business, or they think it is none of mine.
  7. Remembering a thing is not necessarily believing it.  I will remember whatever is told to me by men or by nature ; but I will bear in mind that men may be mistaken, or that I myself may misunderstand both words and facts.
  8. The way to remember the answer to any question is to associate it in the mind with other answers connected with the same subject.  It is well, therefore, to follow one subject, if possible, until sufficient has been learned about it to be easily remembered; for the more one knows the more one can remember, while isolated facts soon get lost.  As my father said, "Wholesale stores are the easiest to keep in order."
  9. Never be ashamed not to know, but be ashamed not to learn.
  10. Never pretend to know ; as for pretending to be ignorant, there is no danger of that, since all men are ignorant.  Even in asking questions concerning the subjects which I have most carefully studied, I may truly say I desire to learn ; for I may have made mistakes or omissions in my study which another might correct.  As my father said, "Judge Pickerell spent forty years in collecting coins, and found at last a coin that was not in his collection in the hands of a beggar, who had that and nothing else."
  11. As my father said, "Every stone is a diamond unless it is not; therefore every stone may be a diamond, until you know it is not ; and in finding out that it is not a diamond, you may discover that it is something more useful."
  12. As my father said, "A man who is forever asking and never answering is like the swamp in our forty-acre lot.  You can't raise crops without rain on one hand and drainage on the other."


From the foregoing it will be seen that the elder Smith was a man of sense. Certainly his neighbors thought the same thing.  Frequently the judge or the parson or the doctor would come riding by his farm, and the plain farmer would leave his plow and sit upon the rail fence, under the shadow of the great elm, whittling a stick, while they talked with him on various matters of politics or social management.  It was noticeable that he seldom asked other people for their opinions, and they soon learned to be a little shy of offering any; for he was sure to reply, "Indeed, what makes you think so?" and that is a troublesome way of putting it. On the other hand, they were always anxious to get his opinions in exchange for their facts.  As the judge remarked, "Farmer Smith's views are his own, and they are worth hearing.  He doesn't think he is obliged to say something on every subject, whether he understands it or not; and when he does speak, he tells what he knows."

He was always particular to give the source of his knowledge.  He would say, "I have observed," or "I have read" or "As far as I can judge, it seems to me," and the like.  And when others contradicted him, he used to say, "I am very glad to hear your experience on that point, because it is different from mine. I will make note of that."  After he died, they found among his papers a good many notes of this kind with the names of those who had given the information, and marked in the margin with different signs, indicating, according to a method of his own, which he never told any body, the degree of reliance which he thought was to be placed in the authors or their communications.

It must not be supposed that he gave his son John the above set of rules all at once, like a catechism.  On the contrary, as I before hinted, he dropped them in the form of remarks, from time to time on appropriate occasions.  On some of these occasions I shall give examples in the next chapter.  It may be thought that I am writing the life of the wrong Smith. In fact the father and not the son would be my hero, but for the fact that John's greater opportunities, and advantages enabled him to make a more brilliant career outwardly; and the full fruit of the old man's system, as well as the reward for his patience and a good sense, was realized in the success of his son.  After all, however, if health and virtue and good nature and a well-trained mind be success, then old Smith achieved it.

Tuesday, April 4, 2017

Setting Static IP Addresses In VMware Fusion

During malware analysis, I frequently need to flip my analysis VMs between host-only and NAT to alternate between interacting with suspicious websites and man-in-the-middling network traffic with various REMnux tools to simulate command and control traffic without tipping off the malicious operator.

To avoid tinkering with IP settings on my analysis guest machines, I've taken to manually editing the VMware Fusion DHCP configurations.  I'm posting this here to help me commit the configuration to long term memory - mainly which files I need to edit - in the hopes that it saves me some googling when updates periodically wipe out this file.  Maybe it will be useful to someone else too.

My configuration (default) for NAT is vmnet8.
atom "/Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf"

My configuration (default) for host-only is vmnet1.
atom "/Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf"

Using the standard dhcpd.conf format, append your static IP assignments to the end of the file.  Static assignments must be outside the DHCP pool declared earlier in the dhcpd.conf.

####### VMNET DHCP Configuration. End of "DO NOT MODIFY SECTION" #######
host REMnuxVM {
    hardware ethernet 00:0C:DE:AD:B3:EF;
    fixed-address 172.16.59.20;
    option domain-name-servers 0.0.0.0;
    option domain-name "REMnuxVM";
}
host AnalysisVM {
    hardware ethernet 00:0C:0B:AD:F0:0D;
    fixed-address 172.16.59.30;
    option domain-name-servers 172.16.59.20;
    option domain-name "AnalysisVM";
    option routers 172.16.59.20;
    option subnet-mask 255.255.255.0;
}


Restart VMware Fusion, cycle your guest VM adapters, and your Analysis VM will automagically be routing its traffic to REMnux for tampering.  Now you can flip from NAT mode to host-only mode, where you can fakedns, inetsim, and accept-all-ips to your heart's content without mucking around with guest network adapter settings.  Reverting snapshots is now a breeze.

sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-sniffer -e -w Test.pcap vmnet1
len   84 src 00:0c:29:3d:32:3a dst 00:0c:29:ca:df:05 IP src 172.16.59.30    dst 172.16.59.20     UDP src port 64004 dst port 53
len  100 src 00:0c:29:ca:df:05 dst 00:0c:29:3d:32:3a IP src 172.16.59.20    dst 172.16.59.30    UDP src port 53 dst port 64004

Another perk is that static IP's greatly simplify your capture filters.
tshark -i vmnet1 -f "host 172.16.59.30"
Capturing on 'vmnet1'
    1   0.000000 172.16.59.30 → 172.16.59.20  DNS 84 Standard query 0x0001 PTR 20.59.16.172.in-addr.arpa
    2   0.000298  172.16.59.20 → 172.16.59.30 DNS 100 Standard query response 0x0001 PTR 20.59.16.172.in-addr.arpa A 172.16.59.2
    3   0.012761 172.16.59.30 → 172.16.59.20  DNS 85 Standard query 0x0002 A google.com.AnalysisVM
    4   0.012987  172.16.59.20 → 172.16.59.30 DNS 101 Standard query response 0x0002 A google.com.AnalysisVM A 172.16.59.20