Gen. Caine’s focus on 2nd- and 3rd-order effects shows where AI decision support belongs in defense: faster options, clearer risks, stronger oversight.

AI for Military Options: Seeing 2nd-Order Risks Fast
A single phrase from the U.S. military’s top officer explains why AI in defense and national security is becoming less optional and more operational.
“Our job is to present the range of options… with all of the secondary and tertiary considerations… so that a President can make whatever decision he wants to make — and then we deliver.”
That’s Chairman of the Joint Chiefs of Staff Gen. Dan Caine, speaking in early December at the Reagan National Defense Forum. Most people heard the headline topics—Venezuela, NATO, Ukraine, China, the Middle East. The part that matters for anyone building, buying, or governing defense AI is the method: Caine is describing a decision process where second- and third-order effects are not a footnote. They’re the work.
If your mission planning tools, intelligence analysis workflows, and acquisition cycles can’t keep up with that demand—especially under time pressure—leaders fall back on instinct, precedent, or incomplete staff work. That’s how you end up buying behind the technology curve and deciding behind the reality curve.
What “secondary and tertiary considerations” really means
It means the primary effect is rarely the hard part. The hard part is what happens next—politically, operationally, legally, economically, and informationally.
When a senior military leader frames his role this way, he’s implicitly acknowledging a truth that planners live with daily: nearly every major option has a long tail of consequences that show up in places you didn’t initially model.
A practical breakdown: the consequence stack
Here’s a plain-language way to think about “secondary and tertiary considerations” in national security decision-making (a minimal data-model sketch follows the list):
- Immediate operational effect (e.g., disable a target, surge forces, increase ISR)
- Adversary adaptation (countermeasures, dispersal, escalation ladders)
- Regional spillover (neighbors react, borders harden, proxy activity rises)
- Alliance dynamics (burden-sharing, basing access, political support)
- Domestic trust and legitimacy (oversight, public confidence, narratives)
- Sustainment and industrial capacity (munitions burn rate, repair cycles)
- Precedent (what this “normalizes” for the next crisis)
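One way to make that stack concrete is to treat it as data. Below is a minimal Python sketch, purely illustrative, of how a decision-support tool might represent consequences beyond the first order; the domain labels, likelihood scores, and severity scale are assumptions, not any fielded program’s schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class Domain(Enum):
    OPERATIONAL = "immediate operational effect"
    ADVERSARY = "adversary adaptation"
    REGIONAL = "regional spillover"
    ALLIANCE = "alliance dynamics"
    DOMESTIC = "domestic trust and legitimacy"
    SUSTAINMENT = "sustainment and industrial capacity"
    PRECEDENT = "precedent"


@dataclass
class Consequence:
    order: int            # 1 = immediate effect, 2 = secondary, 3 = tertiary
    domain: Domain
    description: str
    likelihood: float     # rough probability estimate, 0.0 to 1.0
    severity: int         # 1 (minor) to 5 (strategic)


@dataclass
class CourseOfAction:
    name: str
    consequences: list[Consequence] = field(default_factory=list)

    def downstream(self) -> list[Consequence]:
        """Everything beyond the first-order effect: the 'long tail'."""
        return [c for c in self.consequences if c.order >= 2]


# Hypothetical example: one option with its first-, second-, and third-order effects.
coa = CourseOfAction("Surge ISR coverage")
coa.consequences += [
    Consequence(1, Domain.OPERATIONAL, "Improved coverage of target area", 0.9, 2),
    Consequence(2, Domain.ADVERSARY, "Adversary disperses and hardens emitters", 0.6, 3),
    Consequence(3, Domain.ALLIANCE, "Basing partner requests consultations", 0.3, 3),
]
for c in coa.downstream():
    print(f"order {c.order} | {c.domain.value}: {c.description}")
```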
Humans can reason through all of this. The problem is speed, scale, and complexity—especially when you’re monitoring multiple hotspots at once.
AI doesn’t replace judgment. It expands the surface area of what judgment can credibly consider in time.
Trust, oversight, and the real “battle space” at home
Public confidence and congressional trust are operational variables now. Caine didn’t lean into the details of controversial strike reporting; instead, he emphasized the need to “earn” trust through Congress. That’s not just messaging. It’s a constraint that shapes what options are viable.
This is where defense AI programs can succeed—or implode.
What AI systems must provide to be usable at the top
If an AI-enabled decision support system produces recommendations without a credible trail of evidence, it won’t survive the environment Caine is describing. Modern military advising requires tools that can withstand:
- Oversight scrutiny (what data was used, what assumptions were made)
- Auditability (how outputs changed over time, and why)
- Bias and error analysis (what the model tends to miss)
- Policy boundaries (what is not being recommended because it’s not authorized)
In practice, this pushes teams toward explainable AI patterns that are less about pretty “reasoning” text and more about structured transparency (a schema sketch follows the list):
- Confidence intervals, not single-point answers
- Traceable sourcing, not vague summaries
- Scenario comparisons, not one “best” option
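As a sketch of what structured transparency can look like in practice, here is one possible shape for a decision-support output: an interval estimate instead of a point answer, explicit source references, and stated assumptions. The field names, the validation rule, and the sample values are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class SourceRef:
    source_id: str       # pointer into a data catalog, not a vague summary
    collected: str       # ISO date the underlying data was collected
    classification: str


@dataclass
class Assessment:
    option: str
    outcome_metric: str
    low: float           # lower bound of the interval
    high: float          # upper bound, reported instead of a single-point answer
    confidence: float    # e.g. 0.80 for an 80% interval
    sources: list[SourceRef]
    assumptions: list[str]


def is_decision_grade(a: Assessment) -> bool:
    """Reject outputs that lack traceability or hide their uncertainty."""
    return bool(a.sources) and bool(a.assumptions) and a.low <= a.high


assessment = Assessment(
    option="Option B: forward-deploy additional ISR",
    outcome_metric="days to detect adversary repositioning",
    low=2.0, high=6.0, confidence=0.80,
    sources=[SourceRef("imint-catalog/4471", "2025-11-28", "S")],
    assumptions=["Adversary EW posture unchanged", "Weather within seasonal norms"],
)
print(is_decision_grade(assessment))
print(json.dumps(asdict(assessment), indent=2))
```

The point is not this particular schema; it is that an oversight reviewer can reconstruct why the number is what it is.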
If you’re selling AI into defense, I’ve found the fastest way to lose credibility is acting like the output is the product. The product is decision-grade accountability.
Ukraine’s lesson: mass, attritability, and algorithmic planning
Caine’s Ukraine comments were careful but revealing. He highlighted two realities that are shaping 2026 planning cycles across the Joint Force:
- Industrial-scale drone production (tens of thousands to hundreds of thousands)
- The need for a “high-low mix” and more attritable systems
Mass is back. Attrition is back. And that forces a change in how planning is done.
Why AI matters when “mass” becomes the requirement
When you move from boutique systems to large numbers of attritable assets, the planning problem changes from “Can we do it?” to “Can we coordinate it, supply it, deconflict it, and adapt it faster than the adversary?”
That’s an AI-shaped problem because it’s dominated by:
- High-frequency sensor data and ISR tasking
- Rapid target updates and time-sensitive decision cycles
- Continuous route, spectrum, and fires deconfliction
- Logistics optimization under contested conditions
This is where AI mission planning becomes the difference between theoretical capacity and actual combat power.
A useful mental model: in attritable warfare, commanders don’t just need a plan; they need a plan generator that keeps producing viable options as the environment changes.
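A toy sketch of that idea: a loop that re-scores a set of candidate plans every time the environment updates, so the ranked list of viable options stays current instead of one plan going stale. The routes, fuel figures, and scoring weights below are invented for illustration.

```python
import random

# Toy replanning loop: candidates are re-scored on every environment update,
# so commanders always see a ranked set of currently viable options.
CANDIDATES = [
    {"name": "Route A (coastal)", "fuel": 40, "exposure": 0.7},
    {"name": "Route B (inland)", "fuel": 65, "exposure": 0.3},
    {"name": "Route C (direct)", "fuel": 30, "exposure": 0.9},
]


def viable(plan, env):
    # Drop plans that can no longer be supported with available fuel.
    return plan["fuel"] <= env["fuel_available"]


def score(plan, env):
    # Lower is better: penalize exposure more heavily when threat is high.
    return plan["fuel"] + plan["exposure"] * env["threat_level"] * 100


def replan(env):
    options = [p for p in CANDIDATES if viable(p, env)]
    return sorted(options, key=lambda p: score(p, env))


random.seed(0)
for step in range(3):
    env = {"fuel_available": random.choice([35, 50, 70]),
           "threat_level": random.random()}
    ranked = [p["name"] for p in replan(env)] or ["no viable option"]
    print(f"update {step}: threat={env['threat_level']:.2f} -> {ranked}")
```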
“Non-kinetic exchanges” and the expanded kill chain
Caine also mentioned unprecedented exchanges in kinetic and non-kinetic space. That’s a nod to the reality that cyber operations, electronic warfare, influence campaigns, and economic pressure are all interacting with conventional operations.
AI’s role here isn’t sci-fi autonomy. It’s correlation and prediction:
- Detecting coordinated cyber + EW patterns early
- Forecasting likely adversary information operations themes
- Stress-testing C2 resilience when comms degrade
If your analytics can’t fuse these signals into a coherent picture, “secondary and tertiary considerations” become guesswork.
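A minimal sketch of that kind of fusion, assuming nothing more than timestamped event streams: flag windows where cyber and electronic-warfare events cluster together, since coincidence across domains is often more telling than either stream alone. The events, window size, and threshold are made up for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical timestamped event streams from two separate pipelines.
cyber_events = [datetime(2026, 1, 5, 2, 10), datetime(2026, 1, 5, 2, 25),
                datetime(2026, 1, 6, 14, 0)]
ew_events = [datetime(2026, 1, 5, 2, 18), datetime(2026, 1, 5, 2, 40),
             datetime(2026, 1, 7, 9, 30)]

WINDOW = timedelta(minutes=30)


def correlated_pairs(stream_a, stream_b, window=WINDOW):
    """Return (a, b) event pairs that land inside the same time window."""
    return [(a, b) for a in stream_a for b in stream_b if abs(a - b) <= window]


pairs = correlated_pairs(cyber_events, ew_events)
if len(pairs) >= 2:  # arbitrary alerting threshold for the sketch
    print(f"Possible coordinated cyber+EW activity: {len(pairs)} correlated pairs")
for a, b in pairs:
    print(f"  cyber {a:%Y-%m-%d %H:%M} <-> EW {b:%Y-%m-%d %H:%M}")
```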
China, multiple dilemmas, and why speed beats perfect
Caine’s framing on China was direct: the Joint Force aims to create multiple simultaneous dilemmas so adversaries stay cautious about threatening Americans. That concept is older than modern AI, but the execution now depends on AI-enabled sensing, targeting, and decision advantage.
Multiple dilemmas require multiple synchronized “threads” of action. And synchronization is a data problem.
Where AI fits in dilemma creation
In practical terms, AI in national security supports dilemma creation by compressing the time between:
- sensing → understanding → choosing → acting
That compression can come from:
- Automated intelligence analysis that flags anomalies and intent indicators
- Predictive analytics that models adversary reactions under different postures
- Wargaming and simulation that compares options across theaters
- Course-of-action generation that offers commanders real alternatives, not one path
The uncomfortable truth: senior leaders often don’t need a perfect forecast. They need a fast, defensible range.
That maps almost exactly to Caine’s description of advising: provide a range of options, with consequences, quickly enough to matter.
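To illustrate what a fast, defensible range can mean in practice, here is a toy Monte Carlo sketch that samples simulated adversary reactions under two postures and reports an interval rather than a point forecast. The postures, probabilities, and outcome values are placeholders, not an actual model.

```python
import random

random.seed(1)

# Hypothetical postures with assumed escalation risk and deterrence benefit.
POSTURES = {
    "Posture A: status quo": {"escalation_prob": 0.15, "deterrence_gain": 0.2},
    "Posture B: forward deploy": {"escalation_prob": 0.35, "deterrence_gain": 0.6},
}


def simulate(params, runs=5000):
    """Sample outcomes and return an 80% range (10th to 90th percentile)."""
    outcomes = []
    for _ in range(runs):
        gain = random.gauss(params["deterrence_gain"], 0.1)
        escalates = random.random() < params["escalation_prob"]
        # Net effect: deterrence gain minus a penalty if escalation occurs.
        outcomes.append(gain - (0.8 if escalates else 0.0))
    outcomes.sort()
    return outcomes[int(0.1 * runs)], outcomes[int(0.9 * runs)]


for name, params in POSTURES.items():
    low, high = simulate(params)
    print(f"{name}: 80% outcome range [{low:+.2f}, {high:+.2f}]")
```

The design choice that matters here is reporting the spread, not just the mean: the range is what lets a leader weigh an option against their own risk tolerance.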
Buying “in front of the technology curve” takes more than tech
Caine’s acquisition remarks were some of the most actionable in his public comments: the U.S. system buys behind the technology development curve, and the culture has to change—inside government and companies.
He also made a point that program managers and founders should tattoo on their roadmap:
“We have to write better contracts… [and] share risk between us and the private sector.”
What culture change looks like in defense AI procurement
If you want AI capabilities that keep pace with 2026 realities, procurement has to shift from one-time “delivery” to continuous improvement (a metrics sketch follows the list). That usually means:
- Contracting for iterations (monthly/quarterly model updates)
- Funding data readiness (labeling, governance, secure pipelines)
- Measuring outcomes with operational metrics (time-to-detect, false positives, analyst workload)
- Ensuring models can be retrained and revalidated without restarting the program
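As one illustration of contracting for measurable outcomes, here is a small sketch that computes operational metrics such as time-to-detect and analyst workload from alert records, so improvement can be tracked across model updates. The record fields and sample values are invented for illustration.

```python
from statistics import mean

# Hypothetical alert records produced during an evaluation period.
alerts = [
    {"detected_minutes": 42, "true_positive": True,  "analyst_minutes": 12},
    {"detected_minutes": 18, "true_positive": True,  "analyst_minutes": 7},
    {"detected_minutes": 55, "true_positive": False, "analyst_minutes": 25},
    {"detected_minutes": 23, "true_positive": True,  "analyst_minutes": 9},
]


def operational_metrics(records):
    hits = [r for r in records if r["true_positive"]]
    return {
        # Average time from event to detection, counting confirmed hits only.
        "mean_time_to_detect_min": mean(r["detected_minutes"] for r in hits),
        # Share of alerts that turned out to be false alarms.
        "false_alarm_share": 1 - len(hits) / len(records),
        # Average analyst effort spent per alert, true or false.
        "analyst_minutes_per_alert": mean(r["analyst_minutes"] for r in records),
    }


print(operational_metrics(alerts))
```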
Defense leaders are increasingly allergic to AI demos that can’t survive contact with:
- classified environments
- contested data availability
- changing adversary tactics
- legal/policy constraints
So the real question isn’t “Does the model work?” It’s:
“Can the model keep working after the world changes?”
A simple checklist: “decision-grade AI” in defense
For teams building AI for mission planning and decision support, here are practical requirements that map to Caine’s “secondary/tertiary” mindset:
- Scenario breadth: can it compare at least 3–5 credible COAs quickly?
- Consequence mapping: does each COA include 2nd/3rd-order impacts (alliance, escalation, domestic legitimacy, sustainment)?
- Traceability: can every key claim point to data and assumptions?
- Human control: does the workflow keep commanders and analysts in charge?
- Resilience: does it degrade gracefully when data is missing or jammed?
If you can’t answer these, you don’t have military decision support—you have an analytics prototype.
People also ask: what does AI actually do for senior military advising?
Can AI recommend military action?
AI can generate and compare courses of action, but in real defense governance it should function as decision support, not decision authority. Humans own intent, legality, and accountability.
What’s the biggest risk of using AI in national security planning?
Over-trust is the fastest failure mode. If leaders treat probabilistic outputs as certainties—especially in escalation scenarios—AI becomes a liability. Transparent uncertainty is a feature.
Where should defense organizations start?
Start where the payoff is immediate and the governance is manageable: intelligence triage, logistics optimization, and simulation-driven wargaming tied to real operational questions.
What to do next: turn “consequences” into an AI advantage
Caine’s core message—options plus consequences—should be treated as a design requirement for AI in defense and national security. Tools that merely speed up the first-order answer won’t be trusted when the stakes are geopolitical blowback, alliance fracture, or loss of domestic legitimacy.
Teams that win in 2026 will build AI that helps leaders see around corners: scenario-based, auditable, resilient decision support that can explain what changes if you pick option A instead of option B.
If you’re evaluating AI for mission planning, intelligence analysis, or strategic wargaming, a useful next step is to pressure-test your current workflow:
- Where do second- and third-order risks get captured today?
- How long does it take to update those assessments when assumptions change?
- Which parts are bottlenecked by data access, staffing, or tool limitations?
Answer those honestly, and you’ll know whether your organization is buying behind the technology curve—or building the capacity to advise in front of it.
What would change in your decision process if you could generate credible second- and third-order consequences in minutes, not days?