Monday, April 13, 2026

Foundations: Know What You Own

Part 1 of the Foundations Series

Basic doesn't mean easy. It means fundamental and foundational. If we neglect our foundation, our whole architecture crumbles.

TL;DR

Asset inventory is the most regulated, most recommended, and most under-built capability in security. Every framework tells you to do it. Almost nobody builds one that can actually support the weight of what comes next. This post walks through what a real asset inventory looks like at each stage of maturity, why the compliance version falls short, and where this needs to go as our environments fill up with non-human actors.


The Eye-Roll

If you've ever sat in a room where a regulator or auditor stressed the importance of maintaining an asset inventory, you've probably seen a senior practitioner's eyes glaze over. I've watched it happen with one of the best CISOs I've worked with. He's pushed mature security tooling into previously undefended M&A networks and found active breaches in the process. He doesn't need to be sold on asset inventory. He needs the recommendation to stop being so thin.

His frustration isn't that the guidance is wrong. It's that "maintain a list" doesn't begin to describe what he actually built, or what it took to make that list useful under pressure. The regulation gets people to the starting line. Everything after that is on you.

This post is an attempt to lay out the full distance. If you've been doing this long enough that the recommendation feels obvious, good. I'm putting structure around what you already know. If you're earlier in the journey, this is the map.

Why This Is Regulated

Every major security framework starts in the same place: you can't protect what you don't know about.

That's not a bumper sticker. It's the reason initial intrusions so often happen on the systems nobody was watching. The box that didn't get the EDR agent. The server a third party stood up and forgot about. The subnet that came over in an acquisition and never got folded into the security program. An unmonitored asset is a hiding place where an adversary can persist, and you can't monitor what isn't in your inventory.

This is why regulators care. They're trying to get organizations to the starting line. The problem is that most compliance implementations stop there.

The Compliance Version vs. The Operational Version

A compliance-driven asset inventory is a list. It has hostnames and maybe IP addresses. It gets updated when someone remembers to update it. It lives in a spreadsheet. It satisfies the auditor. In practice it's often not even one list: it's several lists in separate systems that don't talk to each other, each maintained by a different team with a different update cadence and a different definition of "asset."

That's crawl. That's the floor. And for a lot of organizations, that's where it stays, because the regulation doesn't describe what comes after.

The frustration practitioners feel with this isn't about the regulation being wrong. It's that a checkbox implementation doesn't serve as a foundation you can build on. It's a static artifact that answers one question ("do we have a list?") and can't answer any of the questions that actually matter during an incident, a vulnerability triage, or a coverage review.

Crawl, Walk, Run, Fly

Here's how I think about the maturity progression. Each level unlocks specific capabilities the one below it can't support.

Crawl

A list exists. You can count your hardware assets. You can hand it to an auditor and check the box.

What you can answer: "How many assets do we have?" (approximately)

What you can't answer: almost everything else.

Walk

The list is actively maintained. Every asset has an owner. During an incident, you can answer: who owns this system, what does it do, and who do I call. You're tracking lifecycle now, not just current state. When something gets decommissioned, you know when it happened and who did it. You know what you own now and what you've owned in the past.

This matters more than people think. If a corporate-branded laptop shows up on eBay, can you determine whether it was properly wiped and retired through your process? Or did it just vanish from the network one day and nobody followed up? That's an inventory maturity question, not a hardware disposal question.

At this level the inventory is a shared map. Incident responders and leadership are working from the same picture. You can scope lateral movement risk. You can identify system owners for notification. You're not running a scavenger hunt during a crisis.

Run

This is where the concept of "asset" expands well past hardware.

You're tracking user and non-user accounts. Service accounts. Identities. Virtual instances across cloud providers. Software bills of materials. And increasingly, AI agents: autonomous software actors that hold credentials and access systems on behalf of users or the organization.

Here's a number that should bother you: the average enterprise currently has an 82-to-1 ratio of non-human identities to human ones. That ratio is accelerating. Most asset inventories don't account for any of it.

At the run level your inventory also knows which security products have visibility on each asset and when they last checked in. First observed, last observed, by which tool. Coverage gaps are visible, not assumed. You can answer the question: if 15% of my assets quietly fell out of coverage, would I even notice?
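As a sketch of what "coverage gaps are visible, not assumed" can mean in practice, here's a minimal check against a hypothetical inventory where each asset records its last check-in per security tool. The schema and field names are invented for illustration, not any particular product's:

```python
from datetime import datetime, timedelta

# Hypothetical inventory records: each asset tracks last check-in per tool.
# Schema and field names are invented for illustration.
inventory = [
    {"hostname": "web-01", "tool_checkins": {"edr": "2026-04-12", "vuln": "2026-04-10"}},
    {"hostname": "db-02", "tool_checkins": {"edr": "2026-03-01"}},
    {"hostname": "legacy-03", "tool_checkins": {}},  # never observed by any tool
]

def coverage_gaps(assets, tool, max_age_days, today):
    """Return hostnames whose last check-in for `tool` is missing or stale."""
    cutoff = today - timedelta(days=max_age_days)
    gaps = []
    for asset in assets:
        last_seen = asset["tool_checkins"].get(tool)
        if last_seen is None or datetime.strptime(last_seen, "%Y-%m-%d") < cutoff:
            gaps.append(asset["hostname"])
    return gaps

print(coverage_gaps(inventory, "edr", max_age_days=7, today=datetime(2026, 4, 13)))
# ['db-02', 'legacy-03']
```

The point isn't the code; it's that the question "who fell out of coverage?" becomes a trivial query once first-observed and last-observed timestamps live in the inventory instead of in each tool's console.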

That question is scarier than it sounds. Telemetry loss is usually noisy. Coverage erosion is silent.

Exposure surface is mapped. You know what's internet-facing, what's in a trusted enclave, what's reachable from where. Data classification is overlaid from DLP observations. Vulnerability management data is integrated where possible, though patch state and network topology remain ephemeral and hard to fully enumerate. That's worth naming honestly rather than pretending it's solved.

Fly

A unified, cross-environment source of truth. On-prem, AWS, Azure, GCP reconciled into one queryable system. The inventory is no longer a reference document. It's the decisioning layer underneath everything else.

Vulnerability prioritization is exposure-informed: an RCE in a trusted enclave with no inbound exposure is a different conversation than the same CVE on internet-facing attack surface. That decision requires deep asset knowledge that a CVSS score alone can't provide.
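A toy illustration of that kind of exposure-informed prioritization. The exposure tiers and weights here are invented for the example; a real implementation would draw them from your own inventory and risk model:

```python
# Invented exposure tiers and weights, purely to show the shape of the logic.
EXPOSURE_WEIGHT = {"internet_facing": 1.0, "internal": 0.6, "trusted_enclave": 0.3}

def priority(cvss, exposure, has_sensitive_data):
    """Adjust a raw CVSS score using asset context from the inventory."""
    score = cvss * EXPOSURE_WEIGHT[exposure]
    if has_sensitive_data:  # data classification overlay from DLP observations
        score *= 1.25
    return round(score, 2)

# Same CVE (CVSS 9.8), two very different conversations:
print(priority(9.8, "internet_facing", False))  # 9.8
print(priority(9.8, "trusted_enclave", False))  # 2.94
```

None of this is possible without the inventory supplying the exposure and data-classification context; the CVSS score is the only input you get for free.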

The inventory is structured for machine consumers, not just human ones. This is the part most people haven't thought about yet. As agentic response systems mature, they will need to query asset context at machine speed to make triage and containment decisions. If your inventory is a spreadsheet or a CMDB that requires a human to interpret, you've built a ceiling into your program that blocks the next generation of capability.

The Integration Problem

Here's a structural issue that nobody caused on purpose but everybody has to solve.

If you have workloads across on-prem infrastructure, AWS, Azure, and GCP, and no way to reconcile what lives where, how do you make informed decisions about coverage or risk? You can't. You're working from fragments.

SIEM and EDR platforms each maintain their own view of your environment. They're optimized for their own ecosystem, not for giving you a unified picture. The result is multiple partial inventories with no central authority. This isn't a vendor problem to complain about. It's an integration problem the customer owns. But it needs to be named, because it's the default state for most organizations and it undermines every capability that depends on knowing what you have.

Some gut-check questions:

  • If you assume you threw EDR on all your system builds but aren't running reports on agent health, you're in for a rude awakening.
  • If you lost visibility to 10 or 20 percent of the assets in your estate, would you even notice?
  • If you don't have a real asset count, how do you know how many agents are supposed to be there?
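Those questions reduce to a set reconciliation between the authoritative inventory and what the EDR console actually reports. A minimal sketch, with made-up hostnames:

```python
# Authoritative inventory vs. hosts the EDR console reports as checking in.
# Hostnames are made up for the example.
inventory_hosts = {"web-01", "web-02", "db-01", "legacy-03", "build-07"}
edr_reporting = {"web-01", "web-02", "db-01"}

missing_agents = inventory_hosts - edr_reporting  # in inventory, not checking in
unknown_hosts = edr_reporting - inventory_hosts   # checking in, not in inventory

print(sorted(missing_agents))  # ['build-07', 'legacy-03']
coverage = len(edr_reporting & inventory_hosts) / len(inventory_hosts)
print(f"coverage: {coverage:.0%}")  # 60%
```

Without a real asset count, the left side of that subtraction doesn't exist, and the question can't even be asked.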

Non-Human Identities and AI Agents

The 82:1 ratio I mentioned earlier is just service accounts, API keys, tokens, and machine identities that already exist in most enterprises. AI agents are a new category on top of that.

These aren't hypothetical. Organizations are deploying autonomous agents that interact with production systems, make decisions, and hold real credentials. OWASP published the Agentic Top 10 in December 2025, and three of the top four risks center on identity, delegated trust, and tool access. The through-line to asset inventory is direct: agents mostly amplify existing vulnerabilities rather than creating new ones. If your inventory foundation is weak, agents inherit and multiply that weakness at machine speed.

An agent needs the same inventory rigor as any other identity. Who created it. What it can access. What tools and data sources it connects to. Who is accountable for its actions. When it should be decommissioned. If you can't answer those questions about your service accounts today, you definitely can't answer them about your AI agents. And the agent population is growing faster than the service account population ever did.
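The questions above map naturally onto fields in an inventory record. This is a hypothetical minimal schema, sketched for illustration, not a standard:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical minimal inventory record for an AI agent, mirroring the
# questions in the text: creator, access, tools, accountability, lifecycle.
@dataclass
class AgentAsset:
    agent_id: str
    created_by: str          # who created it
    accountable_owner: str   # who answers for its actions
    credentials: list[str]   # what it can access
    tools: list[str]         # tools and data sources it connects to
    first_observed: date
    decommission_by: date    # when it should be retired

agent = AgentAsset(
    agent_id="triage-bot-01",
    created_by="jsmith",
    accountable_owner="secops-team",
    credentials=["svc_triage_ro"],
    tools=["ticketing-api", "siem-search"],
    first_observed=date(2026, 1, 15),
    decommission_by=date(2026, 7, 15),
)
```

If you can't populate every one of those fields for an agent in production, you can't answer the corresponding questions during an incident.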

Where This Lands

The structure and quality of your asset inventory determines what you can build on top of it. Today that's human decision-making during incidents and vulnerability prioritization. Tomorrow it's autonomous systems that need structured, queryable context to operate safely.

Every framework starts with "know what you own." None of them describe the full journey. The regulator is trying to get you to the starting line. The rest of the distance is yours to cover.

Build the foundation like you're going to put weight on it. Because you are.


Appendix: Regulatory Landscape

This section is reference material for anyone who wants to trace the requirements back to their source.

NIST CSF 2.0 (ID.AM): The Identify function's Asset Management category covers hardware inventories (ID.AM-01), software and services (ID.AM-02), network data flows (ID.AM-03), supplier services (ID.AM-04), asset prioritization by criticality (ID.AM-05), and data inventories (ID.AM-07). The underlying NIST 800-53 control is CM-8, which at higher impact levels requires automated mechanisms for maintaining complete, accurate inventories and detecting unauthorized components. Framework-level guidance: tells you what to inventory but stays technology-neutral on how.

CIS Controls v8.1 (Controls 1 & 2): The most prescriptive of the major frameworks. Requires active management of all enterprise assets including those in cloud environments and those not under direct enterprise control. Defines six asset classes: Devices, Software, Data, Users, Network, and Documentation. Implementation Groups (IG1 through IG3) provide a maturity progression that loosely maps to crawl-walk-run. CIS also provides the clearest articulation of the adversary's perspective: attackers have demonstrated the ability, patience, and willingness to inventory and control enterprise assets at large scale to support their own objectives.

ISO 27001:2022 (Annex A 5.9, 5.10, 5.11): Requires organizations to develop and maintain an inventory of information and other associated assets, including assigned owners. Unique among major frameworks in explicitly requiring lifecycle tracking from creation through processing, storage, transmission, deletion, and destruction. Also addresses asset return upon termination of employment or contract (A.5.11), making it the only major framework that speaks directly to the decommissioning and disposal scenario.

PCI DSS v4.0 (Requirements 12.5.1, 2.4, 4.2.1.1, 6.3.2, 6.4.3, 9.9.1): Takes a scoping-first approach. Every system that stores, processes, or transmits cardholder data is in scope, along with anything on the same network without segmentation controls. PCI v4.0 expanded inventory requirements to include certificate inventories, bespoke and third-party software component inventories, and payment page script inventories. As of April 2025, organizations must also perform risk assessments on assets approaching end-of-life. PCI demonstrates the trajectory of what "asset inventory" means over time: it started as a hardware list and now covers certificates, code components, and scripts.

CISA BOD 23-01: The most operationally aggressive directive. Requires all Federal Civilian Executive Branch agencies to perform automated asset discovery every 7 days and initiate vulnerability enumeration every 14 days using privileged credentials. Demands on-demand asset discovery and vulnerability enumeration within 72 hours of a CISA request, with results due within 7 days. CISA explicitly states that continuous and comprehensive asset visibility is a basic precondition for managing cybersecurity risk. This directive comes closest to demanding queryable, responsive asset data rather than a static document updated quarterly.

The pattern: Every framework begins with the same premise. They all stop at different points on the maturity curve. NIST and ISO describe what should be inventoried. CIS adds how often and how to validate. PCI expands what counts as an asset. CISA BOD 23-01 pushes toward operational tempo and queryability. None of them reach the fly state described in this post.

Sunday, April 12, 2026

For Me This Is Tuesday

 


Glasswing is a good answer. It's just not the whole answer.


Before you read this: Start with the Project Glasswing announcement and the Anthropic red team's Mythos preview post. For technical grounding on what AI-assisted vuln research actually looks like in practice, Nicholas Carlini's Black-Hat LLMs talk at [un]prompted 2026 is worth your time. Once you've absorbed those, the Three Buddy Problem episode on Mythos and Glasswing is the most candid practitioner reaction I've heard, including some useful cold water on the framing.


When Anthropic dropped the Glasswing announcement and the Mythos red team preview, the reactions in security circles landed roughly where they always do. Some practitioners dismissed it. The threat landscape hasn't fundamentally changed, the vulnerabilities being automated weren't new, the attacker had tools before the model did. Others went the other direction, accepting exponential growth projections across every risk domain as license to argue for infinite spend against an invincible adversary.

"For me this is Tuesday."

I heard a version of this from a defender shortly after the announcement. Proud, confident, self-assured. And they're not entirely wrong. The threat landscape hasn't changed in kind. For defenders already guarding against well-resourced adversaries, the capabilities Mythos demonstrates were present in human hands well before GPT-3.5.

But in my experience responding to enterprise destructive attacks, that level of certainty about your own defenses was almost a guarantee we were about to find severe compromises or severe deficiencies. Defenders who are actually contending with their real environment tend to be humble. They know their specific blindspots. They can name the gaps they haven't closed yet.

The "this is Tuesday" defender is right about the vulnerabilities. They're wrong about the time.

What Glasswing is actually solving

Glasswing is a coordinated effort to use Mythos-class AI to find and patch vulnerabilities in critical software before adversaries can exploit them, autonomously, at scale, across codebases that have survived decades of human review and millions of automated tests. A 27-year-old OpenBSD vulnerability. A 16-year-old FFmpeg flaw that automated tooling had hit five million times without catching. Linux kernel privilege escalation via chained zero-days. These are real findings, and patching them before adversaries exploit them is unambiguously good.

The implicit theory of defense is: find and fix vulnerabilities faster than attackers can weaponize them and defenders win. That logic is sound at the software layer. The problem is it addresses only one leg of the race, and not the leg that's currently losing fastest.

The actual new thing is velocity at the operational layer

AI-enabled attack chains don't primarily create new vulnerability classes. What they compress is the interval between access and impact, and that interval was already collapsing before Mythos.

  • 29 minutes: Average eCrime breakout time in 2025 (CrowdStrike). Fastest observed: 27 seconds.
  • 5 days: Median intrusion-to-ransomware in 2025, down from 9 days the year before (Sophos).
  • 89%: Year-over-year increase in AI-enabled adversary operations (CrowdStrike 2026 GTR).
  • ~70 minutes: Initial infection to enterprise-wide ransomware deployment in one documented case (M-Trends 2026).

Microsoft's RSAC 2026 briefing documented AI embedded across the full attack lifecycle: reconnaissance, credential-lure generation, deepfake-assisted initial access, automated persistence, and in some cases automated ransom negotiation. The threat intelligence loop was already too slow for the fastest attackers. AI acceleration doesn't break a healthy loop. It exposes one that was already broken.

Patching the OpenBSD vulnerability is necessary. It does nothing about the attacker who has already achieved initial access and is operating in your environment faster than your SOC can triage an alert.

The patch is also a signal

There's a tension in the Glasswing framing worth naming. Mythos demonstrably works on source code. Autonomous exploitation of compiled binaries without source access remains a harder, unsolved problem, and that's a real constraint on the threat model. But that constraint understates something practitioners who've done patch diffing will recognize immediately: the patch release is itself a signal. The moment a vendor ships a fix, an attacker doesn't need the original source. They need the diff. Reverse engineering what a patch corrected and working backward to the exploitable pre-patch state has been standard offensive tradecraft for years. Mythos-class capability on the offensive side compresses how fast that window gets worked.

Defenders who want to benefit from Glasswing need to treat the resulting patches differently than routine patch Tuesday updates. The vulnerability disclosure and the exploitation window now potentially overlap. Organizations should verify they have the internal capability to apply Glasswing-sourced patches on an emergency cadence, independent of normal change management cycles. If you can't move faster than an attacker can read a diff, the defensive advantage Glasswing promises doesn't fully materialize.

The gap Glasswing doesn't address

Glasswing represents a genuine coordination model: industry, government, and open-source maintainers aligned around a shared defensive capability. That structure is exactly right. What doesn't yet exist is anything like it at the operational layer. AI-enabled detection and response that can match the speed of AI-enabled attack chains, with coordinated accountability baked in.

What I'd actually want to exist, and largely doesn't yet, is a structural separation between the organizations defending you and the organizations stress-testing that defense. An AI-enabled response capability that can take autonomous action at machine speed needs to be held accountable by something that can attack at the same speed. A vendor assessing its own detection coverage is a conflict of interest at the worst possible moment. That accountability structure has to be designed in, not discovered after an incident.

Most organizations aren't close to this. The harder problem upstream of tooling is decision authority. Tactical containment decisions that currently route through change advisory boards at 2am will lose a race against a 27-second breakout. The defenders who navigate the next phase won't just have better software. They'll have worked out how to delegate consequential decisions at machine speed to people who are empowered to own the outcomes.

Why the FUD framing is also wrong

Accepting exponential projections across every risk domain and using them to justify infinite spend is the mirror image of "this is Tuesday." Both guarantee the status quo. Leaders who receive ungrounded threat assessments will rationally defer the decisions we're asking them to make until something more actionable appears. Our credibility as advisors depends on giving specific, bounded risk guidance, not gesturing at a scary horizon.

There's also a structural problem neither framing addresses: cyber attacks still operate in a near-consequence-free environment for most threat actors. In physical space we aren't protected primarily through hardening. We're protected because people who want to harm us have to weigh the cost of being caught. Public policy investment in using the same AI capabilities to expose threat actors to legal consequences would do more systemic good than any amount of private defensive spend. That's a long game, but it's the honest frame for why defenders are running a fundamentally asymmetric race.

What a defensible posture actually requires

Glasswing is a serious effort by serious people and it deserves a serious response, which means neither dismissal nor panic. The practitioners I trust most share a common intuition: security bugs are dense, not sparse. The more you look, the more you find. The right design assumption is that bugs are present, lateral movement pathways exist, and your architecture needs to limit blast radius accordingly. Zero-trust segmentation is exactly right for this environment, not because it prevents compromise, but because it makes the compromise slower and more detectable.

The harder work is the operational and organizational layer Glasswing doesn't address. The organizations that come out ahead won't just have better patch cadence. They'll have worked through what it means to delegate real authority at real speed and built the accountability structures to match.


References: Anthropic, "Project Glasswing," anthropic.com, April 2026. Anthropic, "Assessing Claude Mythos Preview's Cybersecurity Capabilities," red.anthropic.com, April 7, 2026. CrowdStrike, "2026 Global Threat Report," February 24, 2026. Mandiant, "M-Trends 2026 Report," March 2026. Microsoft Security Blog, "Threat Actor Abuse of AI Accelerates," RSAC 2026, April 2, 2026. Sophos, "The State of Ransomware 2025." Verizon, "2025 Data Breach Investigations Report."

Saturday, April 11, 2026

Lab Notes: Claude Code Session Logs as a Forensic Artifact

TL;DR

Claude Code logs every agent action locally in structured JSONL transcripts. These are forensically valuable, generally unprotected, and your GRC and detection teams should know they exist.

Background

AI coding agents like Claude Code are becoming common in developer environments. Unlike a chat interface, these tools operate agentically — they execute bash commands, read and write files, and chain tool calls autonomously on behalf of the user. Users authorize this at session start, often broadly, and may not review every action taken.

This creates a non-repudiation problem. The user is responsible for agent actions, but awareness of specific actions may be limited or absent entirely. From a forensic and compliance standpoint that gap matters.

The Artifact

Claude Code writes a complete session transcript for every run to:

~/.claude/projects/<url-encoded-project-path>/sessions/<session-uuid>.jsonl
~/.claude/history.jsonl

Each record contains the timestamp, message type, tool name, exact command executed, full stdout/stderr, working directory, and token usage. This is not a summary — it is a full structured record of every action the agent took.

These logs exist by default. No configuration required.
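As a sketch of turning these transcripts into a reviewable timeline, here is a minimal extractor. The record shape it assumes (a top-level timestamp plus tool_use content blocks naming a Bash tool) varies across Claude Code versions, so verify the field names against your own files; the notebook referenced at the end of this post handles this more thoroughly.

```python
import json

def command_timeline(jsonl_path):
    """Return sorted (timestamp, command) pairs for shell commands in a session log.

    The field names here ("timestamp", "message", "content", tool_use blocks)
    are illustrative assumptions; verify against your own ~/.claude transcripts.
    """
    events = []
    with open(jsonl_path) as f:
        for line in f:
            rec = json.loads(line)
            content = rec.get("message", {}).get("content")
            if not isinstance(content, list):
                continue  # plain text messages carry no tool activity
            for block in content:
                if block.get("type") == "tool_use" and block.get("name") == "Bash":
                    events.append((rec.get("timestamp"), block.get("input", {}).get("command")))
    return sorted(events)
```

Because each record is a self-contained JSON object, the same pattern extends to file reads, file writes, and subagent spawns by filtering on other tool names.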

Forensic Value

During triage, these transcripts can establish:

  • What commands were executed, in what order, and with what output
  • Which files were read or modified by the agent
  • Session start/end times and working directories
  • Whether the agent spawned subagents and what they did

The artifact is local, human-readable with basic JSON tooling, and does not require any cooperation from Anthropic or cloud infrastructure to collect.

The Problem

These logs have no integrity protection. There is no append-only mode, no tamper detection, and no access controls beyond standard filesystem permissions. An actor who has compromised a developer workstation can delete or modify them.

Recommendations

For DFIR, GRC and Detection Engineering teams:

  1. Add ~/.claude/projects/ and ~/.claude/history.jsonl to your endpoint forensic triage collections alongside shell history and other user-space artifacts
  2. Audit your AI tool inventory — Claude Code, Copilot, Cursor, and similar tools likely produce analogous artifacts. Verify what each logs and where
  3. Require that commercial and in-house AI agent deployments log agent actions with sufficient detail for post-incident review, and that those logs ship to a protected destination
  4. Baseline a SOC alert for deletion or bulk modification of agent log directories on developer endpoints — the signal-to-noise should be low and the fidelity high

Reference

Notebook for parsing Claude Code sessions into a forensic timeline: https://github.com/DFIR-DeRyke/dfir_oneoffs/blob/main/claude_timeline.ipynb

Show and Tell

A simple request in my lab executed 97 commands. The claude_timeline notebook was built to simplify human peer review of machine actions — here's a sample of what that output looks like:


Friday, June 7, 2019

Lab Notes: Persistence and Privilege Elevation using the Powershell Profile

TL;DR

A recent ESET blog post mentions a persistence technique I wasn't previously aware of that is in use by the Turla APT group. The technique leverages the PowerShell profile to sabotage PowerShell in a way that executes arbitrary code every time PowerShell is launched. Upon testing, I've discovered this technique may also provide a low-and-slow vector to Domain Admin and other privileged admin or service accounts by leveraging common flaws in admin scripts, asset management systems, and enterprise detection and response tools. This post captures my observations working from Matt Nelson's 2014 blog post. (Apologies to the researcher if there is prior art I'm unaware of at the time of this post.)

Privilege Elevation - Local Admin to Sloppy Admin


Setup Requirements:


  1. In my testing, you need local admin rights to create the global profile
    1. $profile.AllUsersAllHosts
    2. AKA C:\Windows\System32\WindowsPowerShell\v1.0\profile.ps1
  2. This does not bypass Execution Policy (check with Get-ExecutionPolicy).
    1. If it's set to AllSigned or Restricted, not only will the code not execute; the end user might notice a suspicious error message reminding them of the execution policy. (By default, a Windows 10 endpoint is Restricted.)
  3. A privileged user, or preferably an automated task, that runs PowerShell on the 0wned box with elevated domain privileges is needed. They also need to forget to pass the -NoProfile flag when launching it (which covers just about everything and everybody in a large enterprise).

Now any code you place in this global profile will be run by any user who launches PowerShell. We just decide what kind of PowerShell script we want our sloppy admin to execute, set our trap, and patiently wait.

As a POC I used 1 line of code: 
Add-Content c:\windows\temp\test1.txt "$(Get-Date) Profile POC Launched by $(whoami)"

Within the hour a friendly enterprise asset management system ran my arbitrary code using SYSTEM, but with a phone call to IT and some trivial social engineering, this could have easily been one of the desktop admins.

Mitigation:

  1. Similar to detecting persistence in the startup folder, if you can audit file writes and modifications to C:\Windows\System32\WindowsPowerShell\v1.0\profile.ps1, you can alert on this in real time. Most userbases will not be making frequent changes to this file, which should leave you with a low-noise, high-fidelity alert.
  2. If you need another reason to preach the gospel of a restrictive PowerShell execution policy, this may be it. Unfortunately, if your admins are already relying on PowerShell, good luck telling them they can't use it.
  3. You can also audit to ensure that any privileged account executing PowerShell on remote systems always invokes the -NoProfile command-line argument.

Persistence 



For persistence, things are much simpler. The aforementioned mitigations 1 and 2 still apply, but the only requirement is a lax execution policy. Every user has access to edit their own $profile, and any code placed there will be executed anytime PowerShell is launched under that user's context.

One Line POC:
Add-Content $profile "Invoke-Item C:\Windows\System32\calc.exe"

For detection, we need to monitor a few additional file locations, but the alert volume should still be manageable:

  • C:\Windows\System32\WindowsPowerShell\v1.0\profile.ps1
  • $Home\[My ]Documents\WindowsPowerShell\Profile.ps1
  • $Home\[My ]Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
  • $PsHome\Profile.ps1
  • $PsHome\Microsoft.PowerShell_profile.ps1
  • $Home\[My ]Documents\PowerShell\Profile.ps1
  • $Home\[My ]Documents\PowerShell\Microsoft.PowerShell_profile.ps1


Resources:

  1. Microsoft Documentation On Powershell Profiles https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_profiles?view=powershell-6
  2. Abusing Powershell Profiles https://enigma0x3.net/2014/06/16/abusing-powershell-profiles/
  3. Turla Powershell Usage https://www.welivesecurity.com/2019/05/29/turla-powershell-usage/

Tuesday, February 20, 2018

Asking Questions: A Story for Most People

[The following is an excerpt from The Manufacturer and Builder, Volume 0001, Issue 1 (January 1869). I came across this in my woodworking research, and want to preserve it here because it's centrally relevant across all of the domains I'm interested in.]

Once there was a young man whose name was John.  That is to say, not knowing what his name was, and taking all the chances.  I think it was probably John.  For the same reason I take the liberty of presuming that his other name was Smith.  Having previously been a boy, like the generality of young men. John had learned during that period an art which was almost the only thing that distinguished him from other Johns.  He knew how to ask questions; and the object of this brief sketch of his life is to show how he acquired this accomplishment, and what came of it.

He used to say that his father, who was a farmer gave him the first lessons in asking questions; and putting together what his father told him at different times, he compiled a set of rules on the subject which he showed to a Friend the other day, neatly written on the flyleaves of his pocket-diary.  They were headed,

The Art of Asking Questions

  1. Every man knows something that I do not know.
  2. Every thing, living or inanimate, has something to tell me that I do not know.
  3. It is better to ask questions of things than of men; but its better to ask men than not to ask at all.
  4. Lazy questions, impertinent questions, and conceited questions are the greatest of nuisances.  They are like conundrums without any answers - they tend to make men dislike all questions; and when asked of nature, they get no response from her whatever.
  5. Asking questions is of no use, if a man forgets the replies.
  6. People like to be asked, in the proper time and manner, concerning matters which they understand.  When they refuse to satisfy such inquiries, it is generally because the matter is not their business, or they think it is none of mine.
  7. Remembering a thing is not necessarily be living it.  I will remember whatever is told to me by men or by nature ; but I will bear in mind that men may be mistaken, or that I myself may misunderstand both words and facts.
  8. The way to remember the answer to any question is to associate it in the mind with other answers connected with the same subject.  It is well, therefore, to follow one subject, if possible, until sufficient has been learned about it to be easily remembered; for the more one knows the more one can remember, while isolated facts soon get lost.  As my father said, "Wholesale stores are the easiest to keep in order."
  9. Never be ashamed not to know, but be ashamed not to learn.
  10. Never pretend to know; as for pretending to be ignorant, there is no danger of that, since all men are ignorant.  Even in asking questions concerning the subjects which I have most carefully studied, I may truly say I desire to learn; for I may have made mistakes or omissions in my study which another might correct.  As my father said, "Judge Pickerell spent forty years in collecting coins, and found at last a coin that was not in his collection in the hands of a beggar, who had that and nothing else."
  11. As my father said, "Every stone is a diamond unless it is not; therefore every stone may be a diamond, until you know it is not; and in finding out that it is not a diamond, you may discover that it is something more useful."
  12. As my father said, "A man who is forever asking and never answering is like the swamp in our forty-acre lot.  You can't raise crops without rain on one hand and drainage on the other."


From the foregoing it will be seen that the elder Smith was a man of sense. Certainly his neighbors thought the same thing.  Frequently the judge or the parson or the doctor would come riding by his farm, and the plain farmer would leave his plow and sit upon the rail fence, under the shadow of the great elm, whittling a stick, while they talked with him on various matters of politics or social management.  It was noticeable that he seldom asked other people for their opinions, and they soon learned to be a little shy of offering any; for he was sure to reply, "Indeed, what makes you think so?" and that is a troublesome way of putting it. On the other hand, they were always anxious to get his opinions in exchange for their facts.  As the judge remarked, "Farmer Smith's views are his own, and they are worth hearing.  He doesn't think he is obliged to say something on every subject, whether he understands it or not; and when he does speak, he tells what he knows."

He was always particular to give the source of his knowledge.  He would say, "I have observed," or "I have read," or "As far as I can judge, it seems to me," and the like.  And when others contradicted him, he used to say, "I am very glad to hear your experience on that point, because it is different from mine.  I will make note of that."  After he died, they found among his papers a good many notes of this kind with the names of those who had given the information, marked in the margin with different signs, indicating, according to a method of his own, which he never told anybody, the degree of reliance which he thought was to be placed in the authors or their communications.

It must not be supposed that he gave his son John the above set of rules all at once, like a catechism.  On the contrary, as I before hinted, he dropped them in the form of remarks, from time to time, on appropriate occasions.  Of some of these occasions I shall give examples in the next chapter.  It may be thought that I am writing the life of the wrong Smith.  In fact the father and not the son would be my hero, but for the fact that John's greater opportunities and advantages enabled him to make a more brilliant career outwardly; and the full fruit of the old man's system, as well as the reward for his patience and good sense, was realized in the success of his son.  After all, however, if health and virtue and good nature and a well-trained mind be success, then old Smith achieved it.

Tuesday, April 4, 2017

Setting Static IP Addresses In VMware Fusion

During malware analysis, I frequently need to flip my analysis VMs between host-only and NAT, alternating between interacting with suspicious websites and man-in-the-middling network traffic with various tools on REMnux to simulate command-and-control traffic without tipping off the malicious operator.

To avoid tinkering with IP settings on my analysis guest machines, I've taken to manually editing the VMware Fusion DHCP configurations.  I'm posting this here to help me commit the configuration to long-term memory - mainly which files I need to edit - in the hopes that it saves me some googling when updates periodically wipe out these files.  Maybe it will be useful to someone else too.

My configuration (default) for NAT is vmnet8.
atom "/Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf"

My configuration (default) for host-only is vmnet1.
atom "/Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf"

Using the standard dhcpd.conf format, append your static IP assignments to the end of the file.  Static assignments must be outside the DHCP pool declared earlier in the dhcpd.conf.
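For reference, the pool is declared in the "DO NOT MODIFY" section near the top of the file, in a subnet block that looks roughly like this (the subnet and range are generated per install, so yours will differ - check your own file before picking static addresses):

```
subnet 172.16.59.0 netmask 255.255.255.0 {
    range 172.16.59.128 172.16.59.254;  # the DHCP pool: keep static addresses out of this range
    ...
}
```

With a default-style .128-.254 pool, addresses like .20 and .30 below are safely outside it.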

####### VMNET DHCP Configuration. End of "DO NOT MODIFY SECTION" #######

host REMnuxVM {
    hardware ethernet 00:0C:DE:AD:B3:EF;
    fixed-address 172.16.59.20;
    option domain-name-servers 0.0.0.0;
    option domain-name "REMnuxVM";
}

host AnalysisVM {
    hardware ethernet 00:0C:0B:AD:F0:0D;
    fixed-address 172.16.59.30;
    option domain-name-servers 172.16.59.20;
    option domain-name "AnalysisVM";
    option routers 172.16.59.20;
    option subnet-mask 255.255.255.0;
}


Restart VMware Fusion, cycle your guest VM adapters, and your analysis VM will automagically route its traffic to REMnux for tampering.  Now you can flip from NAT mode to host-only mode and fakedns, inetsim, and accept-all-ips to your heart's content without mucking around with guest network adapter settings.  Reverting snapshots is now a breeze.
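If restarting the whole application feels heavy-handed, Fusion also ships a vmnet-cli helper that can bounce just the virtual networking so dhcpd rereads its configuration (flag names may vary by Fusion version - run it with --help on your install to confirm):

```
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --start
```

Either way, the guest adapters still need a cycle (or a dhclient renew) to pick up their new fixed addresses.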

sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-sniffer -e -w Test.pcap vmnet1
len   84 src 00:0c:29:3d:32:3a dst 00:0c:29:ca:df:05 IP src 172.16.59.30    dst 172.16.59.20     UDP src port 64004 dst port 53
len  100 src 00:0c:29:ca:df:05 dst 00:0c:29:3d:32:3a IP src 172.16.59.20    dst 172.16.59.30    UDP src port 53 dst port 64004

Another perk is that static IPs greatly simplify your capture filters.
tshark -i vmnet1 -f "host 172.16.59.30"
Capturing on 'vmnet1'
    1   0.000000 172.16.59.30 → 172.16.59.20  DNS 84 Standard query 0x0001 PTR 20.59.16.172.in-addr.arpa
    2   0.000298  172.16.59.20 → 172.16.59.30 DNS 100 Standard query response 0x0001 PTR 20.59.16.172.in-addr.arpa A 172.16.59.2
    3   0.012761 172.16.59.30 → 172.16.59.20  DNS 85 Standard query 0x0002 A google.com.AnalysisVM
    4   0.012987  172.16.59.20 → 172.16.59.30 DNS 101 Standard query response 0x0002 A google.com.AnalysisVM A 172.16.59.20