Strategy isn’t dead—it’s overloaded. See how AI supports mission planning, intel analysis, surveillance, and cyber resilience when conflicts won’t sit still.

AI-Ready Strategy for Modern Defense Uncertainty
Strategy feels harder right now because the operating environment punishes long plans and rewards fast adaptation. Conflicts sprawl across regions, escalation risks are weirdly non-linear, and the information space moves faster than formal decision cycles. When national security leaders argue about whether “strategy” is even possible anymore, they’re really arguing about something more practical: Can a government still connect political goals to military means in a world that won’t sit still?
That debate runs through a recent War on the Rocks conversation featuring Frank Hoffman, Justin Logan, Rebecca Friedman Lissner, and Ryan Evans. Their discussion ranges across Europe, the Middle East, Latin America, and the Indo-Pacific—exactly the kind of multi-theater reality that makes coherent statecraft feel elusive.
Here’s my stance: strategy is still possible, but only if you build it for uncertainty. And in 2025, building for uncertainty means being explicit about what AI can and can’t do for mission planning, intelligence analysis, autonomous systems, surveillance, and cybersecurity. AI won’t “do strategy” for you. It can keep strategy from collapsing under the weight of modern complexity.
Why strategy feels broken in 2025
Answer first: Strategy feels broken because policymakers are trying to run 20th-century planning habits against 21st-century feedback loops.
Modern conflict doesn’t give you the courtesy of stable assumptions. Political constraints shift with coalition politics, viral narratives, economic shocks, and adversary improvisation. Meanwhile, military and intelligence systems generate oceans of data that outpace human synthesis.
Three friction points show up again and again in real-world planning:
1) Too many theaters, not enough attention
U.S. and allied decision-makers are juggling deterrence in the Indo-Pacific, continued pressure from Russia in Europe, instability and escalation risks in the Middle East, and persistent competition in the Western Hemisphere. Even if budgets rise, attention is finite. When leaders bounce between crises, strategy turns into a sequence of urgent memos.
2) The time constant mismatch
Operations can change in minutes. Political narratives can flip in hours. Diplomacy shifts in weeks or months. Procurement changes in years. This mismatch creates a predictable failure mode: plans are either detailed and obsolete, or vague and useless.
3) The fog isn’t just thicker—it’s weaponized
Deception, cyber operations, influence campaigns, commercial satellite imagery, cheap drones, and open-source intelligence have changed the character of ambiguity. The fog of war is now partially produced on purpose—and distributed at scale.
This is where the War on the Rocks discussion matters to the “AI in Defense & National Security” series. If the core question is “Is strategy possible now?”, the operational question becomes: How do we keep strategy coherent when the environment is designed to overload human decision-making?
The real job of strategy: tradeoffs you can defend
Answer first: Strategy works when it forces explicit tradeoffs—what you’ll prioritize, what you’ll defer, and what you’ll stop doing.
It’s tempting to treat strategy as a document. In practice, strategy is a discipline: a repeatable way to align goals, resources, and risk.
The War on the Rocks panel’s “spicy takes” across regions point to an uncomfortable truth: we often substitute activity for prioritization. More deployments, more statements, more task forces—without sharper decisions about ends and means.
Here’s a blunt test I use when reviewing defense strategies: if it doesn’t say what you won’t do, it’s not a strategy.
Strategy breaks when assumptions are implicit
Most failures aren’t caused by having no plan. They’re caused by unstated assumptions that don’t survive contact with reality:
- Assumptions about adversary restraint (or irrationality)
- Assumptions about ally capacity and political staying power
- Assumptions about industrial base throughput
- Assumptions about information dominance
The fix isn’t “perfect forecasting.” The fix is making assumptions visible, updating them quickly, and building contingencies.
This is exactly where AI can help—not by replacing judgment, but by forcing clarity and accelerating feedback.
Where AI actually helps strategy (and where it doesn’t)
Answer first: AI strengthens strategy when it improves sensing, synthesis, and decision tempo—but it weakens strategy when leaders treat models as oracles.
AI in national security is often described in sweeping terms. The practical value is narrower and more useful: AI can reduce the cost of understanding the environment and exploring options. It can’t resolve value conflicts, alliance politics, legal constraints, or questions of national purpose.
AI for intelligence analysis: from “more data” to “more signal”
Intelligence teams have a throughput problem. Analysts spend time triaging, translating, deduplicating, and chasing false correlations. Modern AI pipelines can help by:
- Clustering reports and open-source material into coherent event threads
- Highlighting anomalies (e.g., unusual logistics movements)
- Summarizing multi-source reporting with internally traceable citations
- Rapidly translating and extracting entities from multilingual streams
The strategic payoff: faster sensemaking, which supports faster tradeoffs. If policymakers can get to “what changed?” and “what matters?” in hours rather than days, strategy becomes more resilient.
The strategic risk: model-driven mirages. If collection gaps exist, AI will confidently fill them with pattern-matched guesses. Teams need strong “challenge” functions—red teams, alternative hypotheses, and audits.
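To make the first bullet concrete, here's a minimal sketch of clustering a handful of reports into candidate event threads. It assumes scikit-learn and uses hypothetical one-line reports; a real pipeline would add multilingual embeddings, entity extraction, deduplication, and analyst review.

```python
# Minimal sketch: group open-source reports into rough "event threads"
# using TF-IDF similarity. Report texts are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reports = [
    "Unusual rail logistics activity observed near border depot",
    "Satellite imagery shows expanded rail loading at border depot",
    "Port authority announces unscheduled closure of commercial berths",
    "Commercial shipping reroutes away from closed berths",
]

# Convert reports to TF-IDF vectors, then cluster into candidate threads.
vectors = TfidfVectorizer(stop_words="english").fit_transform(reports)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

threads = {}
for report, label in zip(reports, labels):
    threads.setdefault(label, []).append(report)

for label, items in threads.items():
    print(f"Event thread {label}:")
    for item in items:
        print(f"  - {item}")
```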
AI for mission planning: faster branches and sequels
In uncertain environments, the best plans aren’t the most detailed. They’re the most adaptable. AI-enabled mission planning can help staffs generate and compare options under constraints:
- Force packages vs. timelines vs. logistics limits
- Courses of action mapped to escalation ladders
- Sensitivity analysis (“If assumption X fails, what breaks first?”)
This doesn’t eliminate command responsibility. It gives commanders a better menu of options and clearer risk statements.
A useful mental model: AI is a staff multiplier, not a commander substitute.
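As a toy illustration of the sensitivity question ("If assumption X fails, what breaks first?"), here's a sketch that perturbs planning assumptions one at a time and reports which constraint gives way. The assumptions, thresholds, and course-of-action checks are hypothetical placeholders, not doctrine.

```python
# Toy sensitivity check: which constraint breaks first if an assumption fails?
# All names and numbers are hypothetical planning placeholders.

assumptions = {
    "tanker_sorties_per_day": 12,
    "port_throughput_tons_per_day": 4000,
    "partner_airfield_available": True,
}

def evaluate_coa(a):
    """Return the constraints a course of action violates under assumptions `a`."""
    violations = []
    if a["tanker_sorties_per_day"] < 10:
        violations.append("air coverage gap over the strait")
    if a["port_throughput_tons_per_day"] < 3000:
        violations.append("sustainment shortfall by D+14")
    if not a["partner_airfield_available"]:
        violations.append("dispersal plan infeasible")
    return violations

# Degrade one assumption at a time and report what breaks.
perturbations = {
    "tanker_sorties_per_day": 8,
    "port_throughput_tons_per_day": 2500,
    "partner_airfield_available": False,
}
for key, degraded in perturbations.items():
    trial = {**assumptions, key: degraded}
    print(f"If {key} -> {degraded}: breaks {evaluate_coa(trial) or 'nothing'}")
```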
Autonomous systems and surveillance: persistence at scale
The panel conversation spans multiple theaters—exactly where persistent ISR and maritime domain awareness become make-or-break. AI helps here by enabling:
- Automated detection and tracking across electro-optical, radar, and acoustic sensors
- Swarm coordination concepts for unmanned systems (within tightly constrained rules of engagement)
- Real-time cueing (sensor-to-shooter workflows with human authorization)
The strategic upside is endurance: you can watch more, longer, with fewer people.
The strategic downside is escalation risk and misinterpretation. More sensors can create more “false certainty.” Strategy gets brittle if it assumes perfect transparency.
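One way to keep human authorization in the loop is to treat it as an explicit gate in the cueing logic. The sketch below is illustrative only; the track fields, confidence threshold, and corroboration rule are assumptions, not a description of any fielded system.

```python
# Sketch of a human-in-the-loop cueing gate: automated detections can cue
# operators, but nothing proceeds toward engagement without corroboration
# and explicit human authorization. Thresholds and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    confidence: float      # fused classifier confidence, 0..1
    sources: set[str]      # e.g. {"radar", "eo", "acoustic"}

def cue_decision(track: Track, human_authorized: bool) -> str:
    if track.confidence < 0.85 or len(track.sources) < 2:
        return "monitor"            # keep watching, no cue
    if not human_authorized:
        return "cue_operator"       # surface to a human, await decision
    return "pass_to_engagement"     # only with corroboration AND authorization

print(cue_decision(Track("T-041", 0.91, {"radar", "eo"}), human_authorized=False))
# -> cue_operator
```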
Cybersecurity: compressing the time to contain
In 2025, cyber defense is not “an IT problem.” It’s a readiness problem. AI can help by:
- Detecting anomalous behavior across endpoints and identity systems
- Prioritizing vulnerabilities based on exploit activity and mission criticality
- Automating portions of incident response playbooks
The strategic benefit is straightforward: resilience buys freedom of action. If your networks and data pipelines are brittle, your strategy will be cautious and reactive.
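As a small illustration of prioritizing vulnerabilities by exploit activity and mission criticality rather than raw severity, here's a hypothetical scoring sketch. The weights, asset criticality values, and placeholder CVE identifiers are invented for the example.

```python
# Sketch: rank vulnerabilities by exploit activity and mission criticality
# rather than raw severity alone. Scores and identifiers are hypothetical.

vulns = [
    {"cve": "CVE-AAAA-0001", "severity": 9.8, "exploited_in_wild": False, "asset_criticality": 0.3},
    {"cve": "CVE-BBBB-0002", "severity": 7.5, "exploited_in_wild": True,  "asset_criticality": 0.9},
    {"cve": "CVE-CCCC-0003", "severity": 8.1, "exploited_in_wild": True,  "asset_criticality": 0.4},
]

def priority(v):
    # Weight active exploitation and mission criticality above raw CVSS.
    exploit_weight = 2.0 if v["exploited_in_wild"] else 1.0
    return v["severity"] * exploit_weight * (0.5 + v["asset_criticality"])

for v in sorted(vulns, key=priority, reverse=True):
    print(f'{v["cve"]}: priority {priority(v):.1f}')
```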
A practical framework: “Strategy as a learning system”
Answer first: The most durable approach is treating strategy as a learning system with short feedback loops—AI helps you run those loops faster.
Instead of aiming for one master plan, treat strategy as a cycle that’s explicit about goals, assumptions, metrics, and triggers.
Step 1: Write strategy as testable claims
A strategy should contain statements you can test, not just aspirations.
Examples of testable claims:
- “Forward posture X increases deterrence by raising adversary uncertainty.”
- “Capability Y reduces risk to shipping lanes within Z days of activation.”
- “Partner capacity program A will produce B deployable units by date C.”
These claims create a basis for measurement.
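One way to keep claims testable is to store them as structured records rather than prose. The sketch below is a minimal illustration; the fields and example values are hypothetical.

```python
# Sketch: writing a strategic claim as a testable record rather than prose.
# Field names and values are hypothetical illustrations.
from dataclasses import dataclass
from datetime import date

@dataclass
class TestableClaim:
    claim: str                  # the causal statement being asserted
    indicator: str              # what observable would confirm or refute it
    threshold: str              # the level that counts as success or failure
    review_by: date             # when the claim must be re-examined
    owner: str                  # who is accountable for updating it

claim = TestableClaim(
    claim="Partner capacity program A produces B deployable units by date C",
    indicator="certified unit evaluations reported by the training mission",
    threshold="at least B units certified",
    review_by=date(2026, 6, 30),
    owner="J5 assessments cell",
)
print(claim)
```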
Step 2: Instrument the environment (ethically and legally)
If you want strategy to learn, you need observable indicators. This is where AI-enabled intelligence analysis and surveillance matter.
Good indicators are:
- Specific (measurable)
- Hard to spoof (or paired with counter-deception checks)
- Linked to decisions (if indicator changes, you do something)
Step 3: Build “decision triggers,” not just reporting
Most organizations are good at reporting and bad at deciding.
A trigger can be as simple as:
- “If shipping losses exceed X per week, shift convoy posture and air coverage.”
- “If adversary missile reload tempo exceeds Y, change basing dispersal.”
- “If partner force readiness drops below Z, pause operations dependent on it.”
AI helps by monitoring indicators continuously and alerting when thresholds are crossed, with context attached.
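A minimal sketch of that monitoring loop, with hypothetical indicators, thresholds, and actions, might look like this:

```python
# Sketch: decision triggers as monitored thresholds, not periodic reports.
# Indicator names, thresholds, and actions are hypothetical.

triggers = [
    {"indicator": "weekly_shipping_losses", "threshold": 3,
     "action": "shift convoy posture and air coverage"},
    {"indicator": "partner_readiness_rate", "threshold": 0.70,
     "action": "pause operations dependent on partner forces", "direction": "below"},
]

def check_triggers(latest: dict) -> list[str]:
    alerts = []
    for t in triggers:
        value = latest.get(t["indicator"])
        if value is None:
            continue
        breached = value < t["threshold"] if t.get("direction") == "below" else value > t["threshold"]
        if breached:
            alerts.append(f'{t["indicator"]}={value} crossed {t["threshold"]}: {t["action"]}')
    return alerts

print(check_triggers({"weekly_shipping_losses": 5, "partner_readiness_rate": 0.65}))
```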
Step 4: Run wargames that include AI failure modes
If AI is part of your planning and sensing, then AI degradation must be part of your exercises.
Include scenarios such as:
- Data poisoning and synthetic media shaping “ground truth”
- Sensor denial (jamming, deception, camouflage)
- Model drift over time as adversaries adapt
- Communications constraints that break cloud-dependent workflows
The goal isn’t paranoia. It’s designing strategy that can still function when the AI layer is stressed.
Strategy that depends on perfect information isn’t strategy. It’s a wish.
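One hypothetical way to make those failure modes exercisable is to parameterize them as injects that degrade the sensing layer's assumed performance. The metrics and values below are illustrative assumptions, not measurements.

```python
# Sketch: parameterizing AI-degradation injects for an exercise, so the staff
# practices deciding with a stressed sensing layer. Values are hypothetical.

injects = [
    {"name": "sensor_denial",  "detection_recall_multiplier": 0.5},
    {"name": "data_poisoning", "false_positive_rate_add": 0.15},
    {"name": "model_drift",    "detection_recall_multiplier": 0.8, "false_positive_rate_add": 0.05},
    {"name": "comms_degraded", "reporting_delay_hours": 6},
]

baseline = {"detection_recall": 0.9, "false_positive_rate": 0.05, "reporting_delay_hours": 1}

def apply_inject(state: dict, inject: dict) -> dict:
    degraded = dict(state)
    degraded["detection_recall"] *= inject.get("detection_recall_multiplier", 1.0)
    degraded["false_positive_rate"] += inject.get("false_positive_rate_add", 0.0)
    degraded["reporting_delay_hours"] += inject.get("reporting_delay_hours", 0)
    return degraded

for inject in injects:
    print(inject["name"], apply_inject(baseline, inject))
```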
Common “people also ask” questions leaders raise
Answer first: The right questions focus on accountability, reliability, and speed—not hype.
Can AI fill the strategic gap in unpredictable conflicts?
AI can’t supply political purpose or coalition cohesion. It can reduce uncertainty, shorten analysis cycles, and pressure-test plans. It fills the execution gap more than the meaning gap.
Will AI make decisions faster than adversaries?
Sometimes. But speed without correctness is self-harm. The real goal is decision advantage: timely decisions that remain aligned with objectives and constraints.
How do we prevent AI from pushing escalation?
Use doctrine and design:
- Keep human authorization for lethal action
- Separate detection from engagement decisions where feasible
- Require multi-source corroboration for high-consequence actions
- Audit models for false positives under stress
What to do next if you’re responsible for defense AI
Answer first: Start with mission outcomes and risk controls, then choose AI capabilities that measurably improve decisions.
If you’re in a defense organization, a government contractor, or a policy shop building “AI for national security,” here’s what works in practice:
- Pick one decision that matters (targeting support, logistics routing, cyber triage, ISR cueing) and map its inputs/outputs.
- Define success metrics that commanders and operators care about (time-to-detect, false alarm rate, planning cycle time, mission availability).
- Design for degraded operations (limited connectivity, a contested electromagnetic spectrum, partial data).
- Build governance in from day one (model audits, data lineage, role-based access, human override).
- Integrate into workflows, not slide decks. If it doesn’t fit the staff battle rhythm, it won’t be used.
This is how AI becomes a strategic enabler rather than another pilot project.
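For the success-metrics step, here's a minimal sketch of computing time-to-detect and false alarm rate from event logs. The incidents, timestamps, and alert counts are hypothetical.

```python
# Sketch: computing the success metrics operators care about from event logs.
# Timestamps and events below are hypothetical.
from datetime import datetime

events = [
    {"incident": "intrusion-01", "occurred": "2025-03-01T02:00", "detected": "2025-03-01T02:40"},
    {"incident": "intrusion-02", "occurred": "2025-03-04T11:00", "detected": "2025-03-04T11:15"},
]
alerts = {"total": 120, "false": 30}

def minutes_between(a, b):
    return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).total_seconds() / 60

mean_time_to_detect = sum(minutes_between(e["occurred"], e["detected"]) for e in events) / len(events)
false_alarm_rate = alerts["false"] / alerts["total"]

print(f"Mean time to detect: {mean_time_to_detect:.0f} minutes")
print(f"False alarm rate: {false_alarm_rate:.0%}")
```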
Strategy is possible—but only if it’s built for uncertainty
The War on the Rocks conversation circles a real anxiety: the sense that American statecraft is reacting rather than shaping. I don’t think the fix is searching for a single grand plan that covers every region and contingency.
The fix is simpler and harder: prioritize clearly, state your assumptions, and run strategy as a learning system. AI helps when it increases clarity, compresses feedback loops, and makes it easier to test whether your theory of success matches reality.
If you’re building capability in the “AI in Defense & National Security” space, the opportunity isn’t to promise perfect prediction. It’s to help leaders answer one question faster and more reliably: What should we do next, and what will it cost?
What would change in your organization if every major plan came with explicit triggers, monitored indicators, and an AI-assisted “assumption dashboard” that leadership actually trusted?