AI wargaming can rebuild operational art for China scenarios—if it strengthens decisive-point thinking instead of automating flawed plans.

AI Wargaming: Rebuilding Operational Art for China
A 2022 Department of Defense wargame set in 2034 ended in an ugly surprise: a Blue team with serious air and maritime capability still lost fast. Not because the force was “out-teched,” but because planners struggled to connect strategic intent to operational action—misreading decisive points, mislabeling decision points, and scattering combat power across disconnected efforts.
That failure should bother anyone working in AI in defense & national security, not because “AI will fix doctrine,” but because AI is increasingly being asked to support mission planning, strategic simulation, and operational decision-making in exactly the kinds of peer-conflict environments where this atrophy in operational art shows up.
Here’s the stance I’ll take: If we pour AI into broken operational thinking, we’ll just get faster confusion. The upside is real, though. Used correctly, AI-enabled wargaming and AI decision support can help planners relearn operational art under pressure—before they have to do it for real.
Operational art is the missing middle—and it’s been weakening
Operational art is the level where strategy becomes a campaign you can actually run. It’s the connective tissue that translates national objectives into sequences of military actions across time, space, and domains.
For decades, much of the U.S. and allied force was shaped by counterinsurgency, stability ops, and crisis response. Those experiences built important competencies—advising, partnering, governance support, force protection, and sustainment in permissive-to-semi-permissive environments. But they also trained habits that don’t map cleanly to a high-end Indo-Pacific fight where:
- Time is compressed (hours and days, not months)
- Networks are degraded (jamming, cyber, spoofing)
- Ranges are long (logistics and basing are decisive)
- Adversaries adapt (human red teams, not scripted vignettes)
A planner can be brilliant at process and still be unprepared for the operational demands of a China scenario. The result is what the wargame exposed: action without coherent operational design.
Decisive points vs. decision points: the confusion that kills campaigns
The source article highlights a pattern that shows up in real planning rooms: teams confuse decision points (moments a commander must choose) with decisive points (things that, when acted on, materially change the campaign).
A clean way to say it:
A decision point is about you. A decisive point is about the enemy system.
If you can’t identify decisive points in relation to the adversary’s center of gravity, you’ll either:
- Chase targets that feel productive but don’t add up, or
- Spend the whole campaign “setting conditions” until the enemy has already won.
Centers of gravity still matter—just not as a checklist item
Centers of gravity are often taught like a staff drill: fill in the boxes, brief the slide, move on. That’s how you get vague answers (“their will,” “their leadership,” “their A2/AD”) that don’t actually guide sequencing or resource allocation.
In peer conflict, a useful center of gravity analysis should do three things:
- Name what must remain functional for the enemy to succeed
- Explain why it’s hard to replace quickly
- Reveal leverage—the pathway from your actions to their loss of freedom of action
If an operational plan can’t show those linkages, it’s not operational art. It’s activity.
What the 2034 wargame teaches AI teams (not just planners)
The wargame’s lesson isn’t only “PME needs more operational art.” It’s also a warning to anyone building AI-enabled mission planning tools, AI decision support systems, or AI wargaming platforms.
The biggest failure mode for AI in operational planning is simple: optimizing the wrong thing.
If a staff is already dispersing effort, an AI tool that recommends “more efficient dispersal” (better target lists, cleaner timelines, prettier dashboards) will make the plan look stronger while it remains strategically hollow.
The real problem: integration, mass, and synchronization are hard to “see”
In the wargame account, Blue’s efforts were fragmented—air, maritime, subsurface actions executed in isolation.
That’s not just a doctrinal issue; it’s a systems visibility issue. Humans struggle to hold multi-domain interactions in working memory. They also struggle to see second- and third-order effects when timelines, basing, sustainment, deception, and cyber all collide.
This is where AI has genuine promise—if it’s designed to support operational art rather than replace it.
AI-enhanced wargaming: the fastest way to expose operational debt
Free-play, human-adjudicated wargames do something PowerPoint planning cannot: they introduce an opponent who reacts. That reaction forces planners to confront whether their “decisive points” are real or imaginary.
AI can strengthen that in three specific ways:
- Richer red-team behavior models: not perfect prediction, but more plausible adaptation than scripted injects
- Faster adjudication loops: more iterations per day means more learning per week
- Better pattern capture: AI can summarize what repeatedly caused Blue failure (late massing, mis-sequencing, logistics collapse, poor deception)
In other words, AI doesn’t need to “solve the war.” It needs to help teams run more reps against a thinking opponent.
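To make the pattern-capture point concrete, here is a minimal sketch in Python, assuming adjudicators tag each iteration's outcome and failure causes; the TurnRecord fields and cause tags are illustrative assumptions, not any platform's schema.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record of one adjudicated wargame iteration; the field names
# are illustrative, not drawn from any specific wargaming platform.
@dataclass
class TurnRecord:
    iteration: int
    blue_outcome: str          # "success" or "failure"
    failure_causes: list[str]  # adjudicator-tagged causes, e.g. "late massing"

def recurring_failure_patterns(records: list[TurnRecord], top_n: int = 5):
    """Tally which adjudicator-tagged causes show up most often in Blue failures."""
    causes = Counter()
    for record in records:
        if record.blue_outcome == "failure":
            causes.update(record.failure_causes)
    return causes.most_common(top_n)

# Three iterations of the same scenario, tagged by a human adjudicator
history = [
    TurnRecord(1, "failure", ["late massing", "logistics collapse"]),
    TurnRecord(2, "failure", ["late massing", "poor deception"]),
    TurnRecord(3, "success", []),
]
print(recurring_failure_patterns(history))
# [('late massing', 2), ('logistics collapse', 1), ('poor deception', 1)]
```

Even a crude tally like this turns "we keep losing" into "we keep massing late," which is exactly what a staff needs heading into the next repetition.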
Where AI actually helps operational art (and where it doesn’t)
AI is most useful when it reduces cognitive load and highlights relationships humans miss. It’s least useful when it tempts leaders to outsource judgment.
What to use AI for in operational planning
Used responsibly, AI can improve the quality of staff work in a China-relevant scenario by supporting five practical functions.
1) Terrain, basing, and access analysis at scale
In the Indo-Pacific, operational art is inseparable from geography. AI-assisted geospatial analysis can help teams rapidly identify:
- Key airfields and ports by throughput potential
- Chokepoints that shape maritime movement
- Sites that support distributed operations and deception
The value isn’t a “magic answer.” It’s speed and breadth—seeing more options early.
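As a rough illustration of what "speed and breadth" can mean in code, here is a minimal screening-and-ranking sketch; the BasingOption fields, numbers, and constraint are invented for illustration, and real analysis would rest on actual geospatial and engineering data.

```python
from dataclasses import dataclass

# Illustrative basing-option record; throughput, distances, and the dispersal
# flag below are placeholder values, not real data.
@dataclass
class BasingOption:
    name: str
    daily_sortie_capacity: int      # rough throughput proxy
    distance_to_objective_km: float
    supports_dispersal: bool        # can it host distributed operations?

def rank_basing_options(options, max_distance_km, require_dispersal=False):
    """Screen candidate sites against access constraints, then rank by throughput."""
    feasible = [
        o for o in options
        if o.distance_to_objective_km <= max_distance_km
        and (o.supports_dispersal or not require_dispersal)
    ]
    return sorted(feasible, key=lambda o: o.daily_sortie_capacity, reverse=True)

candidates = [
    BasingOption("Airfield A", 48, 900, True),
    BasingOption("Airfield B", 96, 1400, False),
    BasingOption("Port C", 30, 700, True),
]
for option in rank_basing_options(candidates, max_distance_km=1000):
    print(option.name, option.daily_sortie_capacity)
# Airfield A 48
# Port C 30
```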
2) Center-of-gravity hypothesis testing
The best use of AI here is adversary system mapping: identifying dependencies (fuel, munitions, ISR links, command nodes, mobility corridors) and stress-testing what happens if certain nodes fail.
Think of it as:
AI to generate and test center-of-gravity hypotheses—humans to decide which one matters.
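One way to make that division of labor concrete is to treat the adversary system as a dependency map and stress-test node failures. A minimal sketch follows; every node and function name is an illustrative placeholder, not an assessment.

```python
# Which adversary nodes provide which functions (all names are placeholders)
PROVIDERS = {
    "over_horizon_targeting": {"coastal_radar_site", "isr_satellite_link"},
    "strike_command":         {"regional_hq"},
    "munitions_resupply":     {"depot_east", "depot_west"},
}

# The hypothesized center of gravity needs all of these functions to work
COG_REQUIRES = {"over_horizon_targeting", "strike_command", "munitions_resupply"}

def single_points_of_failure(providers, required):
    """Return (node, function) pairs where one node is the sole provider."""
    critical = []
    for function in required:
        nodes = providers.get(function, set())
        if len(nodes) == 1:
            critical.append((next(iter(nodes)), function))
    return critical

def functions_lost_if(node, providers, required):
    """Stress test: which required functions disappear if this one node fails?"""
    return [
        f for f in required
        if node in providers.get(f, set()) and len(providers[f]) == 1
    ]

print(single_points_of_failure(PROVIDERS, COG_REQUIRES))
# [('regional_hq', 'strike_command')]
print(functions_lost_if("depot_east", PROVIDERS, COG_REQUIRES))
# []  -> redundant node; hitting it alone is activity, not a decisive point
```

The human judgment is whether something like "regional_hq" really is hard to replace quickly; the tool's job is to surface the dependency and keep the assumption visible.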
3) Sequencing and synchronization checks
AI can flag when a plan violates basic operational logic:
- You’re depending on an enabling action that occurs after the main effort
- Sustainment demand exceeds lift capacity for a given window
- Fires, cyber, and maneuver aren’t aligned in time and purpose
That’s not glamorous. It’s incredibly valuable.
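Here is a toy version of what those checks can look like; the PlannedAction fields and numbers are invented for illustration, and a real tool would check far more than two rules.

```python
from dataclasses import dataclass

# Toy plan representation; field names are assumptions for this sketch only.
@dataclass
class PlannedAction:
    name: str
    start_hour: int               # H-hour offset
    enables: list[str]            # actions that depend on this one happening first
    daily_sustainment_tons: float

def check_sequencing(actions):
    """Flag any action scheduled to start before an action it depends on."""
    start = {a.name: a.start_hour for a in actions}
    issues = []
    for action in actions:
        for dependent in action.enables:
            if dependent in start and start[dependent] < action.start_hour:
                issues.append(f"{dependent} starts before its enabler {action.name}")
    return issues

def check_lift(actions, daily_lift_tons):
    """Flag when total sustainment demand exceeds available lift for the window."""
    demand = sum(a.daily_sustainment_tons for a in actions)
    if demand > daily_lift_tons:
        return [f"sustainment demand {demand:.0f} t/day exceeds lift {daily_lift_tons:.0f} t/day"]
    return []

plan = [
    PlannedAction("suppress_air_defenses", start_hour=48, enables=["main_strike"], daily_sustainment_tons=120),
    PlannedAction("main_strike", start_hour=24, enables=[], daily_sustainment_tons=300),
]
print(check_sequencing(plan) + check_lift(plan, daily_lift_tons=350))
# ['main_strike starts before its enabler suppress_air_defenses',
#  'sustainment demand 420 t/day exceeds lift 350 t/day']
```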
4) Information management under degraded networks
In the wargame described, cognitive overload and proceduralism were major barriers. AI can help by:
- Summarizing ISR and operational reporting into decision-ready briefs
- Prioritizing anomalies and high-impact changes
- Caching and syncing data for distributed staffs when connectivity is intermittent
This is one of the most under-discussed AI advantages in national security: keeping staffs functional when the digital world is unreliable.
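The caching-and-syncing piece does not need to be exotic; a prioritized store-and-forward queue covers the basic case. A minimal sketch, where the priorities and report contents are placeholder assumptions:

```python
import heapq

class ReportQueue:
    """Cache reports locally and push the highest-priority ones first
    whenever a connection window opens (store-and-forward)."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # preserves FIFO order within the same priority

    def cache(self, priority: int, report: str):
        # Lower number = higher priority
        heapq.heappush(self._heap, (priority, self._counter, report))
        self._counter += 1

    def flush(self, link_up: bool, max_items: int = 10):
        """Send up to max_items reports if the link is up; otherwise keep caching."""
        sent = []
        while link_up and self._heap and len(sent) < max_items:
            _, _, report = heapq.heappop(self._heap)
            sent.append(report)
        return sent

queue = ReportQueue()
queue.cache(3, "routine logistics summary")
queue.cache(1, "anomaly: unexpected surface group movement")
print(queue.flush(link_up=True, max_items=1))
# ['anomaly: unexpected surface group movement'] -- the anomaly goes out first
```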
5) Course-of-action exploration, not course-of-action selection
AI can generate options and run simulations quickly. But commanders shouldn’t accept AI-selected COAs as “the answer.” The standard should be (a minimal data sketch follows the list):
- AI proposes 3–6 distinct COAs with explicit assumptions
- The staff challenges the assumptions
- The commander selects based on intent, risk, and political context
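In that sketch, the point is simply that every COA carries its assumptions explicitly so the staff can challenge them; the field names are assumptions for illustration, not doctrine.

```python
from dataclasses import dataclass, field

# Illustrative COA record; field names are assumptions for this sketch.
@dataclass
class CourseOfAction:
    name: str
    main_effort: str
    assumptions: list[str] = field(default_factory=list)
    staff_risk_assessment: str = "not yet reviewed"  # filled in by humans, not the model

def unchallenged(coas):
    """List the COAs whose assumptions the staff has not yet reviewed."""
    return [c.name for c in coas if c.staff_risk_assessment == "not yet reviewed"]

options = [
    CourseOfAction("COA 1: early maritime interdiction", main_effort="subsurface forces",
                   assumptions=["basing access granted by D-2",
                                "cyber effects degrade adversary ISR for 48h"]),
    CourseOfAction("COA 2: distributed air denial", main_effort="dispersed fighter packages",
                   assumptions=["tanker availability holds",
                                "runway repair under 12 hours"]),
]
print(unchallenged(options))
# ['COA 1: early maritime interdiction', 'COA 2: distributed air denial']
```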
What not to use AI for (especially against China)
AI should not be treated as an oracle for:
- Predicting adversary decisions with false precision
- Automating escalation control
- Replacing the human red team in wargaming
Peer competition punishes overconfidence. A slick model that hides uncertainty is worse than no model.
A practical framework: “Operational Art + AI” in three phases
Most organizations fail by trying to jump straight to fully automated planning. A better path is phased adoption tied to training outcomes.
Phase 1 (0–6 months): AI for clarity and tempo
Goal: reduce staff friction.
- AI summarization for ISR and logistics reporting
- Terrain and basing analytics
- Automated plan consistency checks
Measure success by time saved and fewer coordination errors.
Phase 2 (6–18 months): AI to improve operational design
Goal: improve linkages among ends, ways, and means.
- Center-of-gravity mapping tools
- Decisive point identification support (with human validation)
- COA exploration with assumption tracking
Measure success by whether plans show coherent sequencing and concentration of effects.
Phase 3 (18+ months): AI-enabled wargaming as a training engine
Goal: make operational art a muscle again.
- More frequent free-play wargames
- AI-supported adjudication and pattern analysis
- Red-team augmentation (not replacement)
Measure success by performance across repeated scenarios: faster adaptation, better massing, fewer “activity traps.”
The procurement reality: what decision-makers actually need right now
If you’re building or buying AI for defense planning, the near-term need isn’t flashy autonomy. It’s trusted decision support that survives contact with:
- classification constraints
- coalition interoperability
- degraded comms
- human skepticism
- messy data
Procurement teams should ask vendors (and internal builders) for proof in three areas:
- Explainability: can the system show why it recommended a decisive point or sequence?
- Resilience: does it work when connectivity drops and data is incomplete?
- Training integration: can it be used inside wargames and exercises, not just demos?
If an AI tool can’t live inside real operational reps, it won’t change outcomes.
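In practice, "explainability" at this level can be as modest as requiring every recommendation to carry its supporting evidence and the assumptions that would break it. A rough sketch, with field names and content that are illustrative only, not any vendor's schema:

```python
from dataclasses import dataclass, field

# Illustrative shape for an explainable recommendation: the claim, the evidence
# behind it, and the conditions that would invalidate it.
@dataclass
class DecisivePointRecommendation:
    decisive_point: str
    supporting_evidence: list[str] = field(default_factory=list)
    invalidating_conditions: list[str] = field(default_factory=list)

    def explain(self) -> str:
        return (
            f"Recommend acting on: {self.decisive_point}\n"
            f"Because: {'; '.join(self.supporting_evidence)}\n"
            f"Invalid if: {'; '.join(self.invalidating_conditions)}"
        )

recommendation = DecisivePointRecommendation(
    decisive_point="adversary regional command node",
    supporting_evidence=["sole provider of strike tasking in the dependency map",
                         "no rapid replacement observed across prior iterations"],
    invalidating_conditions=["an unobserved backup command node exists"],
)
print(recommendation.explain())
```

If a tool cannot show something like that chain, its "recommendation" is an assertion, not decision support.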
Where this leaves the “AI in Defense & National Security” conversation
The operational art decline described in the wargame isn’t an abstract professional education debate. It’s a product risk for the entire AI defense ecosystem.
AI will be asked to support planning and command decisions in U.S.-China competition. That’s inevitable. The question is whether it will reinforce proceduralism or force clearer operational thinking.
If you’re serious about AI wargaming and AI-enabled mission planning, build toward this standard:
AI should make operational design easier to practice, easier to test, and harder to fake.
The next year of Indo-Pacific exercises and planning cycles is a chance to institutionalize that approach—before another wargame, or a real crisis, delivers the same lesson with higher stakes.
If you’re evaluating AI decision support for operational planning, what would you rather discover in a simulation: that your decisive points are wrong, or that your staff can’t explain them?