AI-driven cyber threat assessment helps leaders compare cyber risk to natural disasters and prioritize mitigations that reduce mission and economic impact.

AI Helps Put Cyber Threats in Perspective
In 2023, natural disasters drove $280 billion in economic losses. In 2024, they hit $318 billion. By comparison, several high-profile cyber “catastrophe” events clustered across 2023–2024—MOVEit, CDK, Change Healthcare, and CrowdStrike—add up to roughly $8–10 billion in losses.
Those numbers don’t downplay cyber risk. They correct the mental model. In defense and national security, the real failure mode isn’t “we didn’t care enough about cyber.” It’s “we mis-scoped cyber’s impact, treated every incident like an existential crisis, and then built priorities, budgets, and response playbooks around fear instead of measured risk.”
Here’s where AI earns its keep in the AI in Defense & National Security series: AI can turn cyber threat assessment from a vibes-based argument into a repeatable discipline. Not by generating more alerts—by producing clearer comparisons, better loss estimates, and decision-ready options.
The cyber catastrophe narrative is often numerically wrong
Answer first: The most common mistake leaders make is equating “widespread disruption” with “widespread destruction.” The economic data consistently shows that physical disasters remain more destructive at national scale than cyber events.
Tom Johansmeyer’s recent update reinforces a point that should be uncontroversial but rarely is: cyber losses can be painful, embarrassing, and operationally serious—yet they tend to cluster into recoverable business interruption and remediation costs, not the sprawling, multi-sector reconstruction costs that follow floods, fires, hurricanes, and earthquakes.
Why does the myth persist?
- Cyber is invisible until it isn’t. When services fail, there’s no rubble to photograph—so stories fill the gap.
- Cyber impacts are hard to price in real time. “We don’t know yet” becomes “it could be anything.”
- Worst-case thinking gets rewarded. In security culture, pessimism can look like rigor.
A better stance for national security leaders is blunt: treat cyber as a serious operational risk, and don’t let the cyber program substitute for all-hazards resilience. You don’t protect a nation by building the perfect cyber program while underinvesting in physical continuity, supply chains, and emergency management.
Why cyber losses hit a ceiling: reversibility and restoration
Answer first: Cyber events are often economically constrained because many effects are reversible once systems are restored, rebuilt from backups, reimaged, or failed over to alternate processes.
That constraint doesn’t mean cyber is “easy.” It means cyber recovery is frequently more like restarting complex machinery than rebuilding a city.
Johansmeyer’s earlier comparison to hurricanes captures the practical reality: a major storm creates tens of thousands of physical repair points, logistics bottlenecks, and access problems. Cyber incidents can be sprawling, but responders can often centralize action:
- restore known-good images
- rotate credentials and keys
- rebuild identity and access control paths
- validate data integrity
- reconstitute critical services in priority order (see the sketch after this list)
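To make “reconstitute critical services in priority order” concrete, here’s a minimal sketch of restoration as an ordering problem. The services, dependency edges, and criticality scores are hypothetical; the point is that recovery order can be computed, not improvised.

```python
from graphlib import TopologicalSorter

# Hypothetical service dependency map: service -> services it depends on.
DEPENDS_ON = {
    "identity": [],
    "network_core": [],
    "backups": ["network_core"],
    "messaging": ["identity", "network_core"],
    "logistics_app": ["identity", "backups"],
    "public_portal": ["messaging"],
}
CRITICALITY = {  # higher = restore sooner, all else equal (illustrative)
    "identity": 10, "network_core": 9, "backups": 8,
    "logistics_app": 7, "messaging": 5, "public_portal": 2,
}

def restoration_order(depends_on, criticality):
    """Order restoration: respect dependencies, break ties by criticality."""
    ts = TopologicalSorter(depends_on)
    ts.prepare()
    order = []
    while ts.is_active():
        # Among services whose dependencies are met, restore critical ones first.
        for svc in sorted(ts.get_ready(), key=lambda s: -criticality[s]):
            order.append(svc)
            ts.done(svc)
    return order

print(restoration_order(DEPENDS_ON, CRITICALITY))
# identity and network_core come first; public_portal last
```

Dependencies gate what can restart; criticality breaks ties among what’s ready. That’s the “restarting complex machinery” dynamic in miniature.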
The exception that proves the rule: when cyber mimics physical damage
Cyber losses climb when digital compromise causes irreversible real-world effects. In defense terms, that usually requires one of the following:
- Safety impacts (loss of life, injury, contamination)
- Long-lived equipment damage (industrial control systems pushed beyond limits)
- Persistent data corruption (not just theft—alteration of records)
- Cascading multi-sector disruption (power, comms, transport simultaneously)
Those scenarios are rarer than headlines suggest. They’re also where AI-based scenario modeling and mission-impact analysis matter most—because humans tend to either dismiss them (“too extreme”) or panic (“we’re doomed”).
What AI changes: cyber risk becomes comparable, not sensational
Answer first: AI helps decision-makers compare cyber threats to other national risks by producing consistent estimates, ranked scenarios, and traceable assumptions.
Most cyber briefings fail the same way: they’re either a threat catalog (“here are 47 TTPs”) or a compliance status report (“we’re 82% done with controls”). Neither answers the leader’s real question:
“What’s the plausible loss, what’s the worst case, and what should we do this quarter?”
AI can support that decision cycle—if it’s designed for analysis, not theater.
1) AI-assisted loss estimation (with uncertainty, not false precision)
A practical approach I’ve seen work is building an AI-assisted model that estimates losses across a few buckets:
- Business interruption: downtime, degraded throughput, manual workarounds
- Response and recovery: IR labor, forensics, restoration, overtime
- Third-party ripple effects: suppliers, downstream customers, delayed programs
- Regulatory/legal: reporting, penalties, litigation
- Strategic costs: mission delay, deterrence signaling, loss of confidence
The AI’s job isn’t to “pick a number.” It’s to:
- summarize comparable historical incidents
- propose ranges and drivers (e.g., “downtime dominates cost at X scale”)
- highlight which assumptions change the outcome most
That last bullet is the leader’s gold. Sensitivity analysis is strategy.
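To show what that looks like in practice, here’s a minimal Monte Carlo sketch across the five buckets above. Every distribution parameter is an illustrative assumption, not a calibrated figure; the outputs are a loss range and a crude ranking of which driver moves the total most.

```python
import random
import statistics

random.seed(7)

# Hypothetical loss drivers; each draws one scenario's cost in $M.
# All lognormal parameters are placeholders, not calibrated estimates.
DRIVERS = {
    "business_interruption": lambda: random.lognormvariate(2.5, 0.8),
    "response_recovery":     lambda: random.lognormvariate(1.5, 0.5),
    "third_party_ripple":    lambda: random.lognormvariate(1.0, 1.2),
    "regulatory_legal":      lambda: random.lognormvariate(0.5, 0.9),
    "strategic_costs":       lambda: random.lognormvariate(0.8, 1.0),
}

def simulate(n=10_000):
    draws = {name: [f() for _ in range(n)] for name, f in DRIVERS.items()}
    totals = [sum(draws[name][i] for name in DRIVERS) for i in range(n)]
    return draws, totals

def percentile(xs, p):
    xs = sorted(xs)
    return xs[int(p / 100 * (len(xs) - 1))]

draws, totals = simulate()
print(f"expected loss  : ${statistics.mean(totals):,.1f}M")
print(f"P50 / P90 / P99: ${percentile(totals, 50):,.1f}M / "
      f"${percentile(totals, 90):,.1f}M / ${percentile(totals, 99):,.1f}M")

# Crude sensitivity: which driver's variance dominates the total?
for name, xs in sorted(draws.items(), key=lambda kv: -statistics.variance(kv[1])):
    share = statistics.variance(xs) / statistics.variance(totals)
    print(f"{name:22s} variance share ~ {share:.0%}")
```

Variance share is a blunt instrument; a production model would use tornado charts or Sobol indices, but the leadership question it answers is the same: which assumption should we challenge first?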
2) AI for mission-impact mapping (the defense-grade version of “what breaks?”)
In national security environments, the question isn’t “will we get ransomware?” It’s:
- Which missions degrade first?
- Which dependencies create hidden single points of failure?
- Which workarounds are credible under stress?
AI can help map this by correlating:
- asset inventories
- identity graphs
- network flows
- application dependency maps
- operational plans and readiness metrics
The output should be a mission impact scorecard that can be reviewed like a readiness brief, not a security presentation.
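A minimal sketch of that mapping, assuming the dependency data has already been normalized into a graph (every asset, edge, and mission here is hypothetical). The core technique is reachability: if a compromised asset can reach a mission through the dependency graph, that mission degrades.

```python
from collections import deque

# Hypothetical dependency edges: asset -> things that depend on it.
# In practice, built from asset inventories, identity graphs, and flow data.
SUPPORTS = {
    "ad_server":          ["erp", "vpn"],
    "vpn":                ["maintenance_portal"],
    "erp":                ["mission:logistics"],
    "maintenance_portal": ["mission:readiness_reporting"],
    "historian_db":       ["mission:readiness_reporting"],
}
CRITICALITY = {"mission:logistics": 0.9, "mission:readiness_reporting": 0.7}

def impacted_missions(compromised):
    """BFS from compromised assets; return missions reachable downstream."""
    seen, queue = set(compromised), deque(compromised)
    while queue:
        node = queue.popleft()
        for dep in SUPPORTS.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return {n for n in seen if n.startswith("mission:")}

def scorecard(compromised):
    """One row per mission, plus an aggregate risk score for the brief."""
    hits = impacted_missions(compromised)
    rows = {m: ("DEGRADED" if m in hits else "OK") for m in CRITICALITY}
    rows["mission_risk_score"] = round(sum(CRITICALITY[m] for m in hits), 2)
    return rows

print(scorecard(["ad_server"]))
# identity-layer compromise cascades: both missions degrade via erp and vpn
```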
3) AI triage that reduces noise instead of adding it
Security teams don’t need AI to generate more tickets. They need it to:
- cluster related alerts into incidents
- prioritize by mission/asset criticality
- propose containment steps aligned to policy
- surface “unknown unknowns” (novel combinations of weak signals)
The standard to aim for is simple: fewer interrupts, higher confidence, faster containment.
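Here’s a minimal sketch of the clustering-and-ranking shape, assuming alerts arrive as records with host, technique, and timestamp fields (all names and values hypothetical). A real deployment would use richer features and learned models, but the output contract is the point: a few ranked incidents, not a ticket stream.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical alert feed; real input would come from a SIEM export.
ALERTS = [
    {"host": "hr-ws-12",  "technique": "T1566", "ts": "2026-01-10T09:02:00"},
    {"host": "hr-ws-12",  "technique": "T1059", "ts": "2026-01-10T09:05:00"},
    {"host": "dc-01",     "technique": "T1003", "ts": "2026-01-10T09:20:00"},
    {"host": "print-srv", "technique": "T1046", "ts": "2026-01-09T14:00:00"},
]
CRITICALITY = {"dc-01": 10, "hr-ws-12": 4, "print-srv": 1}  # illustrative
WINDOW = timedelta(hours=2)

def cluster(alerts):
    """Group alerts on the same host that occur within WINDOW of each other."""
    incidents = []
    by_host = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["ts"]):
        a = dict(a, ts=datetime.fromisoformat(a["ts"]))
        group = by_host[a["host"]]
        if group and a["ts"] - group[-1]["ts"] <= WINDOW:
            group.append(a)          # same incident, extend it
        else:
            by_host[a["host"]] = [a]  # new incident starts here
            incidents.append(by_host[a["host"]])
    return incidents

def triage(alerts):
    """One row per incident, ranked by how critical the affected asset is."""
    rows = [{"host": inc[0]["host"],
             "techniques": sorted({a["technique"] for a in inc}),
             "priority": CRITICALITY.get(inc[0]["host"], 0)}
            for inc in cluster(alerts)]
    return sorted(rows, key=lambda r: -r["priority"])

for row in triage(ALERTS):
    print(row)  # dc-01 first: domain controller beats workstation noise
```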
A 2026-ready way to brief cyber risk to senior leaders
Answer first: Replace “threat hype” with an all-hazards scorecard where cyber is one column—measured with the same discipline as storms, fires, and supply chain shocks.
If you’re advising commanders, agency heads, or critical infrastructure executives, use a format they can act on. Here’s a template that forces clarity.
The 5-line cyber risk brief
- Most likely scenario (90 days): what happens, where, and how it’s detected
- Operational impact: which missions/services degrade and for how long
- Economic impact range: low/expected/high with top 3 drivers
- Decision points: what requires leadership approval (tradeoffs)
- Mitigations: the 3 moves that most reduce expected loss
AI helps by drafting this brief from telemetry and past incidents, but humans must own the assumptions.
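One way to enforce that ownership is to make the brief a typed artifact the AI drafts but cannot approve. A minimal sketch, with field names invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class CyberRiskBrief:
    """The 5-line brief as a reviewable artifact. Field names are illustrative."""
    most_likely_scenario: str                          # what, where, how detected (90 days)
    operational_impact: str                            # missions/services degraded, duration
    loss_range_musd: tuple                             # (low, expected, high) in $M
    loss_drivers: list = field(default_factory=list)   # top 3 drivers
    decision_points: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)    # the 3 highest-leverage moves
    assumptions: list = field(default_factory=list)    # humans own these
    approved_by: str = ""                              # empty until a human signs

    def ready_to_brief(self):
        return bool(self.approved_by) and len(self.mitigations) == 3
```

A model can populate every field except approved_by; until a named human signs, ready_to_brief() stays false, and the artifact itself enforces the ownership rule.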
A simple comparison that changes budget fights
If you want cyber in perspective, require every major initiative to state:
- Expected annual loss reduction (in dollars or mission-hours)
- Time to benefit (weeks vs quarters vs years)
- Residual risk after implementation
Then compare cyber initiatives to physical resilience investments. Leaders usually discover two uncomfortable truths:
- Some cyber controls are expensive for modest risk reduction.
- Some “boring” resilience measures (segmentation, backups, exercised continuity plans) dominate outcomes.
That’s not anti-cyber. It’s pro-results.
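A minimal sketch of that comparison, with entirely made-up initiatives and numbers. Normalizing to loss reduction per dollar puts cyber controls and physical resilience on the same axis:

```python
# Illustrative initiatives; every figure is a placeholder assumption.
# (name, annual cost $M, expected annual loss reduction $M, months to benefit)
INITIATIVES = [
    ("EDR platform upgrade",       6.0, 2.5,  9),
    ("Network segmentation",       3.0, 7.0, 12),
    ("Exercised continuity plans", 0.8, 4.0,  3),
    ("Generator + fuel contracts", 1.5, 5.5,  6),
]

def score(name, cost, reduction, months):
    return {
        "initiative": name,
        "loss_reduction_per_dollar": round(reduction / cost, 2),
        "time_to_benefit_months": months,
    }

ranked = sorted((score(*i) for i in INITIATIVES),
                key=lambda r: -r["loss_reduction_per_dollar"])
for r in ranked:
    print(r)
# In this toy data, "boring" resilience measures outrank the flagship tool.
```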
Where leaders still get burned: the three cyber scenarios that matter
Answer first: Even if natural disasters cost more overall, cyber becomes strategically dangerous when it hits scale, trust, or timing.
1) Scale: a single provider becomes a national dependency
Incidents like those affecting widely used software, managed services, or healthcare clearinghouses create concentrated risk. AI can help identify these exposures by building dependency graphs across:
- vendors and sub-vendors
- identity and authentication pathways
- update channels and software supply chain
A practical action: maintain a continuously updated “critical vendor map” and run tabletop exercises on your top 10 dependencies.
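A minimal sketch of the vendor-map idea, with hypothetical vendors and relationships. Concentration shows up as the number of missions that transitively depend on a single provider:

```python
# Hypothetical supply relationships: consumer -> providers it depends on.
DEPENDS_ON = {
    "mission:pay_ops":        ["payroll_saas"],
    "mission:patient_claims": ["clearinghouse_x"],
    "mission:fleet_mgmt":     ["dealer_dms", "telemetry_saas"],
    "payroll_saas":           ["cloud_y", "auth_vendor_z"],
    "clearinghouse_x":        ["cloud_y"],
    "dealer_dms":             ["cloud_y"],
    "telemetry_saas":         ["auth_vendor_z"],
}

def providers(node, graph, seen=None):
    """All providers a node transitively depends on."""
    seen = set() if seen is None else seen
    for p in graph.get(node, []):
        if p not in seen:
            seen.add(p)
            providers(p, graph, seen)
    return seen

missions = [n for n in DEPENDS_ON if n.startswith("mission:")]
exposure = {}
for m in missions:
    for vendor in providers(m, DEPENDS_ON):
        exposure.setdefault(vendor, set()).add(m)

# Vendors touching the most missions are the tabletop-exercise shortlist.
for vendor, hit in sorted(exposure.items(), key=lambda kv: -len(kv[1])):
    print(f"{vendor:16s} -> {len(hit)} mission(s)")
# cloud_y surfaces as the concentrated national-dependency pattern
```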
2) Trust: data integrity attacks beat data theft
Stealing data is bad. Corrupting data is worse—because it attacks decision-making. In defense, that can mean:
- readiness reporting errors
- maintenance record manipulation
- targeting or logistics data poisoning
AI can help detect integrity issues using anomaly detection and cross-source validation, but governance matters: you need authoritative sources, provenance tracking, and rollback procedures.
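A minimal sketch of cross-source validation, assuming two independently collected record feeds (unit names and fields are hypothetical). Disagreement between sources triggers provenance review rather than silent trust:

```python
# Hypothetical readiness records from two independent sources.
SOURCE_A = {"unit-101": {"ready": True,  "spares": 14},
            "unit-102": {"ready": False, "spares": 3},
            "unit-103": {"ready": True,  "spares": 9}}
SOURCE_B = {"unit-101": {"ready": True,  "spares": 14},
            "unit-102": {"ready": True,  "spares": 3},   # disagrees on 'ready'
            "unit-103": {"ready": True,  "spares": 9}}

def cross_validate(a, b):
    """Flag records that differ between sources; these need provenance review."""
    flags = []
    for key in sorted(set(a) | set(b)):
        ra, rb = a.get(key), b.get(key)
        if ra != rb:
            flags.append({"record": key, "source_a": ra, "source_b": rb})
    return flags

for flag in cross_validate(SOURCE_A, SOURCE_B):
    print("INTEGRITY FLAG:", flag)
# A flag is not proof of attack; it is a trigger for rollback-capable review.
```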
3) Timing: cyber used as a distraction during crisis
The most dangerous cyber event is the one paired with something else—geopolitical escalation, natural disaster response, or a major domestic emergency. That’s why “all-hazards” isn’t a slogan; it’s a staffing and continuity problem.
AI-enabled fusion cells can help by correlating cyber signals with broader intelligence and operational indicators, reducing the chance that teams chase decoys while the real crisis unfolds.
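A minimal sketch of the temporal-correlation idea behind a fusion cell: flag cyber activity that clusters around non-cyber crisis events. The feeds and the six-hour window are illustrative assumptions:

```python
from datetime import datetime, timedelta

# Hypothetical event feeds; timestamps are illustrative.
CYBER_EVENTS = ["2026-03-01T02:10", "2026-03-01T02:40", "2026-03-07T11:00"]
CRISIS_EVENTS = [("regional escalation", "2026-03-01T01:30")]
WINDOW = timedelta(hours=6)

def coincident(cyber, crises, window=WINDOW):
    """Pair cyber events with crises in the same window; flag possible decoys."""
    pairs = []
    for label, ts in crises:
        t0 = datetime.fromisoformat(ts)
        nearby = [c for c in cyber
                  if abs(datetime.fromisoformat(c) - t0) <= window]
        if nearby:
            pairs.append((label, nearby))
    return pairs

for label, events in coincident(CYBER_EVENTS, CRISIS_EVENTS):
    print(f"review for decoy pattern: '{label}' coincides "
          f"with {len(events)} cyber events")
```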
What to do next: a practical AI-enabled risk program
Answer first: Build an AI-enabled cyber threat assessment program that produces decision artifacts—loss ranges, mission impacts, and prioritized mitigations—every month.
Here’s a straightforward path that works for defense, national security, and critical infrastructure organizations:
- Define your unit of impact: dollars, mission-hours, or service-level degradation
- Instrument the basics: asset inventory, identity telemetry, backup status, dependency mapping
- Stand up a risk model: scenario library (ransomware, supply-chain compromise, insider, integrity attack)
- Add AI where it improves speed and consistency: alert clustering, incident summarization, loss-range drafting
- Run quarterly exercises: validate assumptions against real response friction
If your AI output can’t be audited—inputs, rationale, uncertainty—it won’t survive serious scrutiny. Treat transparency as a security feature.
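A minimal sketch of what auditable can mean in practice: every AI-drafted estimate carries its inputs, rationale, and uncertainty, and the record is hashed so later tampering is detectable. Field names are illustrative:

```python
import hashlib
import json
from datetime import datetime, timezone

def decision_artifact(estimate, inputs, rationale, uncertainty):
    """Wrap an AI-drafted estimate with everything an auditor needs."""
    record = {
        "estimate": estimate,            # e.g. loss range in $M
        "inputs": inputs,                # data sources used
        "rationale": rationale,          # why the model landed here
        "uncertainty": uncertainty,      # what could move the number
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True)
    record["digest"] = hashlib.sha256(payload.encode()).hexdigest()
    return record

artifact = decision_artifact(
    estimate={"low": 4, "expected": 12, "high": 40},
    inputs=["incident history 2023-2025", "asset inventory snapshot"],
    rationale="downtime dominates cost at this scale",
    uncertainty="third-party ripple effects poorly observed",
)
print(artifact["digest"][:16], "...")  # verifiable later against the payload
```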
National security leaders don’t need cyber to be “bigger than hurricanes” to take it seriously. They need cyber framed correctly: a high-frequency operational threat with occasional strategic spikes, best managed with disciplined measurement and AI-assisted analysis.
The next time your team briefs cyber risk, try this: show cyber’s loss range next to natural disasters, supply chain disruptions, and infrastructure outages. If the comparison feels uncomfortable, that’s a sign you’re finally looking at the problem clearly.
Where should AI in defense and national security go next—toward faster detection, or toward better decisions about what we’re willing to tolerate?