AI wargaming is exposing a key PLA weakness: centralized decision-making. Learn how simulations can stress-test Chinese doctrine and improve mission planning.
AI Wargaming: What Chinese Doctrine Signals in 2026
A commercial tabletop wargame isn’t supposed to be a window into a military’s future. Yet that’s exactly why The Coming Wave has gotten attention in defense circles: it’s a Chinese-designed game that tries to model how the People’s Liberation Army (PLA) expects to fight—fast, joint, and information-driven.
Here’s the part that matters for the AI in Defense & National Security conversation: when a doctrine is built around connecting sensors, decision-makers, and shooters into one network, AI becomes the glue. Not because a board game has AI inside it, but because the mindset behind it maps cleanly onto AI-enabled mission planning, predictive analytics, and decision support systems.
I’ll take a clear stance: U.S. and allied planners should treat doctrine-shaped commercial wargames as a low-cost “doctrine usability test,” then use AI-enabled simulation to pressure-test the weak points—especially centralized decision-making.
Why a tabletop wargame is suddenly relevant to AI in defense
The direct answer: tabletop wargames often reveal what a military thinks is “decisive,” and those assumptions drive what AI systems get built, funded, and fielded.
The source article describes The Coming Wave as an interactive interpretation of China’s core operational concept: multi-domain precision warfare. That concept prioritizes identifying an adversary’s vulnerabilities quickly, then coordinating joint strikes across air, sea, land, cyber, and the electromagnetic spectrum.
If you strip away the jargon, the idea is simple:
- Find the right target faster than your opponent
- Decide faster than your opponent
- Strike faster than your opponent
- Repeat before they recover
That loop is exactly where modern AI-driven wargaming and AI decision support fit. AI isn’t magic; it’s a way to compress time in analysis-heavy steps: sensor fusion, pattern detection, course-of-action generation, and “what happens next” estimation.
The hidden connection: doctrine → models → AI requirements
Defense organizations don’t build AI systems in a vacuum. They build them to support a theory of victory.
A doctrine like multi-domain precision warfare implies specific AI and data requirements:
- Sensor fusion that works across services and classification boundaries
- Targeting analytics that prioritize nodes, not just platforms
- Network resilience modeling (what breaks first, and what still functions)
- Decision latency measurement (how long from detection to strike authorization)
A tabletop game that repeatedly rewards “detect then strike” teaches players—implicitly—what matters most. That’s valuable signal for analysts trying to understand how another military conceptualizes modern war.
What “multi-domain precision warfare” really optimizes for
The direct answer: it optimizes for coordinated effects through a network—treating units as sensors and nodes first, and shooters second.
One of the most revealing points from the source article is how The Coming Wave models naval power. Instead of differentiating ships mainly by their strike capacity (magazine depth, weapon mix), it differentiates them by detection and networking value.
That’s a very “system-of-systems” worldview:
- A destroyer isn’t only a weapons truck.
- It’s a moving sensor, a relay, and a contributor to the targeting picture.
- If it helps the force see better, the force kills better.
This matters because U.S. planning culture often treats certain platforms as “must-kill first” due to their multi-mission capability. The Chinese framing (as portrayed here) nudges you toward a different prioritization: kill the information nodes and synthesis points, not only the flashiest platforms.
AI implication: node-hunting becomes an algorithmic competition
Once both sides accept that war is “system on system,” the competition shifts toward:
- Discovering which nodes matter most (communications hubs, data fusion centers, relay aircraft, key ISR links)
- Predicting how redundancy and workarounds preserve the kill chain
- Estimating time-to-recover after disruption
These are ideal problems for AI-enabled simulation and graph analytics (a minimal code sketch follows below):
- Treat the opponent’s operational system as a graph
- Score nodes by centrality and fragility
- Simulate degradation under attack
- Recommend disruption packages that maximize delay and confusion
If your wargames don’t model that, you’re training your people on yesterday’s problem.
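To make that concrete, here’s a minimal sketch of the node-scoring idea in Python using the networkx library. Every node, edge, and the choice of betweenness centrality as the fragility score are illustrative assumptions, not a real targeting model:

```python
# Minimal sketch: score a notional "system-of-systems" graph by centrality,
# then simulate degradation by removing the most central node.
# All nodes, edges, and metrics here are hypothetical illustrations.
import networkx as nx

# Notional operational network: sensors feed a fusion center that feeds shooters.
G = nx.Graph()
G.add_edges_from([
    ("radar_a", "fusion_center"), ("radar_b", "fusion_center"),
    ("relay_aircraft", "fusion_center"), ("relay_aircraft", "destroyer"),
    ("fusion_center", "strike_brigade"), ("fusion_center", "destroyer"),
])

# Score nodes by betweenness centrality: how much of the network's
# information flow each node brokers.
scores = nx.betweenness_centrality(G)
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print("Most central nodes:", ranked[:3])

# Simulate degradation: remove the top node and check whether a
# sensor-to-shooter path survives, as a crude resilience measure.
top_node, _ = ranked[0]
G.remove_node(top_node)
survives = ("radar_a" in G and "strike_brigade" in G
            and nx.has_path(G, "radar_a", "strike_brigade"))
print(f"After losing {top_node}, radar_a -> strike_brigade path survives: {survives}")
```

A real disruption-package recommender would weight edges by bandwidth and latency and model workarounds, but even this toy version turns “which node matters most” into an explicit, testable question.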
The real weakness: speed isn’t just sensors—it’s authority
The direct answer: centralized command slows the OODA loop even when the network is fast, and AI can’t fully compensate for that.
The source piece lands on a core critique: the PLA’s top-down control culture limits how much they can shrink their decision cycle. You can accelerate the flow of information, but if decision rights stay high, the loop still bottlenecks.
This is where Western mission command (decentralized execution aligned to commander’s intent) becomes more than a leadership philosophy. It becomes a systems advantage.
Here’s the one-liner I keep coming back to:
If decisions are centralized, the network becomes a funnel.
AI makes this tension sharper. A highly instrumented, AI-supported force can generate more detections, more alerts, and more targeting options than a central staff can responsibly approve in time. Past a threshold, “more information” becomes more queue time.
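That funnel effect is textbook queueing behavior, and you can see it with a back-of-envelope model. Here’s a hedged sketch using the standard M/M/1 queue formula; the arrival and approval rates are invented for illustration:

```python
# Back-of-envelope M/M/1 queue: mean time a target nomination spends
# waiting for (and receiving) central approval. Rates are invented.
def mean_time_in_system(arrivals_per_hour: float, approvals_per_hour: float) -> float:
    """Classic M/M/1 result: W = 1 / (mu - lambda), valid for lambda < mu."""
    if arrivals_per_hour >= approvals_per_hour:
        return float("inf")  # queue grows without bound: the funnel
    return 1.0 / (approvals_per_hour - arrivals_per_hour)

# A central staff that can responsibly approve 10 nominations per hour:
for arrival_rate in (5, 8, 9, 9.9):
    wait_hours = mean_time_in_system(arrival_rate, 10)
    print(f"{arrival_rate:>4} nominations/hr -> {wait_hours * 60:.0f} min per nomination")
```

The nonlinearity is the point: pushing arrivals from 8 to 9.9 per hour doesn’t add a quarter more delay, it multiplies the wait twentyfold. AI-generated detections hit exactly this wall when decision rights stay centralized.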
AI implication: decision-support must match decision-rights
Many AI programs in national security focus on better predictions and better targeting. Fewer focus on the organizational question: who is allowed to act on the model output?
If an AI system flags a fleeting maritime target with a 6-minute window, but authorization takes 20 minutes, the model is academically correct and operationally irrelevant.
Practical design rule for AI in mission planning:
- Don’t deploy AI recommendations at echelons that can’t act in time.
That usually means pushing certain analytics closer to the edge—paired with rules, constraints, and auditability so you can trust decentralized execution without inviting chaos.
A Normandy lesson that still applies to AI-era warfare
The source article references D-Day as a case where mission command mattered because junior leaders adapted under extreme friction.
That’s not nostalgia; it’s a reminder that friction is guaranteed in contested communications. AI can improve plans, but mission command determines whether the plan survives contact.
In a Taiwan Strait scenario, expect:
- degraded comms
- spoofed tracks
- disrupted positioning
- ambiguous attribution in cyber/electromagnetic actions
A force that waits for perfect clarity from higher headquarters will lose time it can’t get back.
How to use AI-enabled wargaming to stress-test Chinese doctrine
The direct answer: pair “doctrine-revealing” games with AI simulation to quantify bottlenecks, then build concepts that exploit them.
A tabletop wargame is great for surfacing assumptions, but it’s limited in scale and repetition. AI-enabled simulation fills that gap by running thousands of variations and highlighting what consistently drives outcomes.
Here’s a practical workflow I’ve seen work (and it maps well to the article’s recommendation to experiment with rule modifications):
1) Extract the doctrine assumptions
From The Coming Wave and open-source doctrine discussion, you can encode assumptions like:
- joint strike availability and responsiveness
- dependency on shared targeting data
- informatization (network integration) as a multiplier
- centralized decision gates
The goal isn’t perfect truth. It’s a testable model of belief.
2) Build “decision-latency” into the simulation
Most wargames model weapons ranges and probabilities. Fewer model authorization delay.
Add variables like:
- time to validate target
- time to route approval
- time to allocate fires
- time to retask sensors
Then run Monte Carlo-style experiments (a minimal sketch follows after the questions below).
What you’re looking for is not a single outcome, but a pattern:
- At what decision-latency does the strike system collapse into missed opportunities?
- Which echelon creates the largest queue?
- Which disruptions (EW, cyber, kinetic) increase latency the most?
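Here’s what a minimal version of that experiment can look like in Python. The latency distributions are placeholder assumptions, and the 6-minute engagement window is borrowed from the fleeting-target example earlier; none of it is doctrine-derived:

```python
# Monte Carlo sketch: how often does total decision latency beat a fleeting
# target's engagement window? All distributions are placeholder assumptions.
import random

ENGAGEMENT_WINDOW_MIN = 6.0  # fleeting maritime target, per the earlier example

def one_engagement(mean_approval_min: float) -> bool:
    """One simulated run: does validate + approve + allocate fit the window?"""
    validate = random.lognormvariate(0.0, 0.5)             # time to validate target
    approve = random.expovariate(1.0 / mean_approval_min)  # time to route approval
    allocate = random.uniform(0.5, 2.0)                    # time to allocate fires
    return validate + approve + allocate <= ENGAGEMENT_WINDOW_MIN

def window_met_rate(mean_approval_min: float, runs: int = 10_000) -> float:
    return sum(one_engagement(mean_approval_min) for _ in range(runs)) / runs

# Sweep the approval delay to find where the strike system collapses.
for mean_approval in (0.5, 1, 2, 4, 8, 16):
    print(f"mean approval {mean_approval:>4} min -> "
          f"{window_met_rate(mean_approval):.0%} of windows met")
```

Plotted against approval delay, the output is exactly the pattern you’re after: the point where missed opportunities start to dominate, run by run, without tying up a single human player.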
3) Stress the network, not just the platforms
If the opponent treats ships, aircraft, and brigades as sensors first, then the fight becomes about:
- track quality
- track continuity
- data-sharing reliability
AI-enabled scenario modeling can simulate sensor degradation and deception at a level tabletop play can’t.
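As one example, track continuity is easy to model crudely: custody of a target holds only if no sensor-dropout streak outlasts the track timeout. The dropout probabilities and revisit counts below are invented:

```python
# Toy track-continuity model: custody survives a mission only if no run of
# missed sensor revisits reaches the track timeout. All numbers are invented.
import random

def custody_rate(dropout_prob: float, revisits: int = 60,
                 timeout: int = 3, runs: int = 5_000) -> float:
    """Fraction of simulated missions where track custody is never lost."""
    held = 0
    for _ in range(runs):
        gap = 0
        for _ in range(revisits):
            gap = gap + 1 if random.random() < dropout_prob else 0
            if gap >= timeout:
                break  # custody lost: too many consecutive missed revisits
        else:
            held += 1
    return held / runs

for p in (0.1, 0.2, 0.3, 0.4):
    print(f"dropout prob {p:.0%} -> custody held on {custody_rate(p):.0%} of missions")
```

Add redundancy (a second sensor that resets the gap) or deception (false tracks that consume revisits) and you’re simulating exactly the degradation a tabletop can only hand-wave.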
4) Test mission-command “rules of engagement” as a variable
The article proposes modifying rules to reflect a more mission-command-oriented force.
Do it—but make it measurable:
- How many decisions are delegated?
- Under what constraints?
- What’s the error rate?
- What’s the speed gain?
This produces a concrete planning output: a delegation policy that trades control for tempo in a deliberate, measurable way.
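One way to make that measurable: treat the delegation threshold itself as the experimental variable and score tempo against error. The confidence model and miss rates below are hypothetical placeholders:

```python
# Sketch: delegation threshold as an experimental variable. An engagement is
# handled locally when model confidence clears the threshold; otherwise it
# routes through central approval, which risks missing the window.
# The confidence distribution and miss rates are hypothetical.
import random

def run_policy(threshold: float, engagements: int = 10_000):
    delegated = errors = missed = 0
    for _ in range(engagements):
        confidence = random.betavariate(8, 2)    # model confidence in target ID
        is_valid = random.random() < confidence  # crude ground-truth model
        if confidence >= threshold:
            delegated += 1
            if not is_valid:
                errors += 1   # fast but wrong: the cost of delegation
        elif random.random() < 0.5:
            missed += 1       # slow approval loses the fleeting window
    n = engagements
    return delegated / n, errors / n, missed / n

print("threshold  delegated  errors  missed windows")
for t in (0.99, 0.95, 0.90, 0.80, 0.70):
    d, e, m = run_policy(t)
    print(f"{t:>9}  {d:>9.0%}  {e:>6.1%}  {m:>14.1%}")
```

The output is a trade curve, not a verdict. Commanders still choose the operating point, but the control-for-tempo trade is now explicit and repeatable.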
What defense teams should do in 2026 planning cycles
The direct answer: treat AI wargaming as a procurement-adjacent discipline—because it tells you what to buy, how to organize, and where AI is actually operational.
If you’re responsible for capability development, operational planning, or defense innovation, three moves are worth prioritizing:
1) Build a “kill chain latency budget.” Define the maximum allowable time from detection to effect for your top mission threads, then use it to judge C2 processes and AI tools alike (see the sketch after this list).
2) Invest in AI-ready data plumbing, not just models. Multi-domain precision warfare is a data architecture problem as much as a weapons problem. If your data can’t move, your AI can’t matter.
3) Train delegation under uncertainty as a core competency. Mission command isn’t a slogan; it’s practiced behavior. AI can support it, but only if people are trained to act on imperfect information.
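A latency budget doesn’t have to be elaborate; it can start as a machine-checkable table. This sketch uses an invented mission thread and invented allocations to show the shape of it:

```python
# Sketch of a "kill chain latency budget": invented thread and allocations,
# checked against measured (here: notional) stage times.
from dataclasses import dataclass

@dataclass
class LatencyBudget:
    mission_thread: str
    detect_min: float  # detection -> validated track
    decide_min: float  # validated track -> strike authorization
    act_min: float     # authorization -> effect

    def overruns(self, detect: float, decide: float, act: float) -> list[str]:
        """Return every stage that blows its allocation."""
        stages = (("detect", detect, self.detect_min),
                  ("decide", decide, self.decide_min),
                  ("act", act, self.act_min))
        return [f"{name}: {actual:.1f} min > {budget:.1f} min budget"
                for name, actual, budget in stages if actual > budget]

budget = LatencyBudget("fleeting maritime target", detect_min=2, decide_min=2, act_min=2)
print(budget.overruns(detect=1.5, decide=20.0, act=1.0))
# -> ['decide: 20.0 min > 2.0 min budget']
```

The 20-minute authorization from earlier fails the decide allocation on sight. That’s the value of writing the budget down: it turns “too slow” from an opinion into a test.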
A seasonal note for December planning: most organizations are locking Q1 priorities right now. If AI wargaming is still treated as an experiment sitting off to the side, it’ll stay off to the side all year.
Where this fits in the “AI in Defense & National Security” series
The direct answer: AI wargaming is where surveillance, intelligence analysis, cyber operations, and mission planning collide—and where organizational culture gets tested.
This post isn’t really about a board game. It’s about what the board game suggests: a PLA theory of victory that prizes networked detection and synchronized strikes, and a potential Achilles’ heel when fast networks meet slow authority.
If you’re building AI for national security, don’t stop at “Can the model predict?” Ask: Can the organization decide in time to use the prediction?
If you want help translating doctrine into an AI-enabled wargaming and simulation program—one that produces measurable outputs like latency budgets, delegation policies, and node-defense priorities—this is exactly the kind of work my teams get asked to support.
What would you change first in your current wargame: the sensors, the strikes, or the decision rights?