AI Mission Command: Beating China’s Multi-Domain Playbook

AI in Defense & National Security • By 3L3C

Multi-domain precision warfare bets on centralized coordination. AI-enabled mission command helps defenders decide faster, disrupt kill chains, and win the timing fight.

Tags: AI mission command, PLA doctrine, multi-domain operations, wargaming, joint fires, Taiwan Strait



Centralized control is the hidden tax in China’s preferred way of fighting. You can wire every sensor to every shooter, fuse a mountain of data, and still lose the reaction race if decisions bottleneck at the top.

That’s what makes a surprising source worth paying attention to: a Chinese-designed commercial wargame called The Coming Wave (明日浪潮). It’s not an intelligence product, but it’s a cultural artifact—an interactive “feel” for how some Chinese planners and academics think modern war should work. And it inadvertently highlights a weakness defenders can plan around.

This post sits in our AI in Defense & National Security series for a reason: AI won’t matter most as a better sensor. It’ll matter most as a mission command accelerator—helping smaller teams decide and act faster than a centralized opponent can respond.

What China’s “multi-domain precision warfare” is really optimizing

China’s core operational concept—often translated as multi-domain precision warfare—optimizes for one thing: finding critical vulnerabilities in an adversary’s operational system and striking them quickly and in combination across domains.

In plain terms, the concept assumes three steps:

  1. Build information advantage through a connected command, control, communications, intelligence, surveillance, and reconnaissance architecture.
  2. Identify key nodes in the opponent’s system (C2, air defense, logistics, ISR, long-range fires, political decision links).
  3. Mass “precision effects” across air, sea, land, cyber, and the electromagnetic spectrum to collapse the opponent’s ability to coordinate.

The wargame framing is useful because it forces an operational question many briefings dodge: What does your doctrine reward people for doing under pressure?

In The Coming Wave, doctrine rewards:

  • treating units as sensors first
  • prioritizing detection-to-strike chains over platform-to-platform duels
  • using joint fires as the primary damage mechanism
  • valuing informatization (network integration) as the core determinant of effectiveness

That worldview is coherent. It’s also brittle if the opponent can disrupt targeting, degrade networks, and—most importantly—make high-quality decisions at lower echelons faster than Beijing expects.

System-vs-system thinking: “platforms are nodes”

A standout idea in the wargame’s mechanics: naval combatants are less differentiated by “magazine depth” or bespoke weapons, and more by detection and sensing contribution. The implicit message is strategic: a ship is a node in a kill chain before it’s a shooter.

For U.S. and allied planners, that has a practical implication: high-value targets may not be the “shiniest” platforms. The priority should shift toward:

  • data relay points
  • command nodes
  • key emitters and network bridges
  • ISR fusion centers
  • the organizational glue that turns “1+1” into “1+1>2”

In an AI-enabled conflict, this becomes even sharper: whoever protects and regenerates their decision network the fastest wins the second- and third-order fight.

Why wargames reveal doctrine faster than white papers

Wargames expose assumptions because they force tradeoffs. You can’t claim “we’ll be joint, integrated, and fast” without paying costs somewhere: authority, logistics, comms resilience, training time, or risk tolerance.

Commercial wargames add another lens: they show what a broader ecosystem finds plausible and teachable. In China’s case, the wargaming community has documented ties to military education and competitions, which makes a modern-conflict title like The Coming Wave more than entertainment.

Here’s what I find most useful about using a wargame (commercial or professional) to understand adversary doctrine:

  • It compresses complexity into repeatable decisions. You can run “what if” drills quickly.
  • It exposes timing problems. Not just what you can do, but when you can do it.
  • It clarifies what breaks the model. Jamming, deception, and initiative often reveal the seams.

That last point matters for AI in defense: AI-enhanced simulation is only valuable when it is adversarial, messy, and honest about uncertainty. If your models assume perfect connectivity and obedient subordinates, you’re training for the wrong war.

Wargames as AI training grounds (and audit tools)

Most defense teams talk about AI as “decision support.” That’s true but incomplete. In practice, AI becomes:

  • a staff multiplier for targeting and collection management
  • a pattern engine for sensor fusion and anomaly detection
  • a course-of-action generator under severe time constraints
  • a risk calculator for delegated authorities

Wargames—especially those focused on kill-chain timing—are an ideal environment to test whether those capabilities actually shorten the loop.

If you can’t show that your AI reduces time-to-decision under degraded comms, you haven’t built mission command tech. You’ve built a dashboard.

The core weakness: centralized decision authority can’t keep up

The most actionable takeaway from The Coming Wave lens is blunt: you can’t fully compress an OODA loop if information travels quickly but decisions still have to travel upward.

China’s doctrine aims to speed information flow and coordination. But cultural and political realities—party control, risk aversion, top-down approval—make widespread mission command difficult to institutionalize.

That creates a predictable vulnerability: initiative at the edge.

In the wargame framing, less “informatized” forces struggle to maneuver and react, because their ability to act is constrained by the system. In real operations, initiative can substitute for connectivity more than many planners admit.

History reinforces the point. Large amphibious operations have succeeded because junior leaders solved local problems without waiting. Normandy’s landings are a classic example: units that landed disorganized and off-target still created momentum because small teams acted on commander’s intent.

For Taiwan and other partners facing a fast-moving opening salvo, this isn’t academic. A defender that can:

  • disperse
  • conceal
  • reconstitute
  • improvise fires and obstacles
  • maintain local decision authority

…can impose delay and uncertainty that centralized attackers hate.

What “mission command” means in an AI era

Mission command isn’t slogans about empowerment. It’s a design requirement for organizations and systems.

An AI-enabled mission command stack should do at least four things under stress:

  1. Translate commander’s intent into machine-readable constraints (what you can do, what you can’t, what requires approval).
  2. Recommend actions that fit local context (terrain, force posture, rules of engagement, collateral risk).
  3. Enable secure delegation (who can call which fires, when, with what confidence threshold).
  4. Keep operating when cut off (local inference, degraded comms modes, on-device models).

The goal isn’t “let the AI decide.” The goal is let humans decide lower, faster, and with better context.
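To make the first requirement concrete, here is a minimal sketch of what machine-readable intent might look like. All names (`IntentConstraints`, `ProposedAction`, `check_action`) and the specific rule set are illustrative assumptions, not a real system’s API:

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    APPROVE_LOCAL = "approve_local"   # within delegated authority
    ESCALATE = "escalate"             # requires higher-echelon approval
    PROHIBIT = "prohibit"             # violates a hard constraint

@dataclass
class IntentConstraints:
    """Commander's intent encoded as machine-checkable rules (illustrative)."""
    no_strike_zones: set          # geographic zone IDs that are off-limits
    precleared_target_types: set  # target classes delegated to local approval
    max_collateral_risk: float    # hard ceiling on estimated collateral risk

@dataclass
class ProposedAction:
    target_type: str
    zone: str
    collateral_risk: float        # 0.0 (none) .. 1.0 (severe)

def check_action(action: ProposedAction, intent: IntentConstraints) -> Decision:
    """Translate intent into a routing decision for a proposed fire."""
    if action.zone in intent.no_strike_zones:
        return Decision.PROHIBIT          # hard boundary: never delegated
    if action.collateral_risk > intent.max_collateral_risk:
        return Decision.PROHIBIT
    if action.target_type in intent.precleared_target_types:
        return Decision.APPROVE_LOCAL     # pre-cleared: act on intent now
    return Decision.ESCALATE              # ambiguous: push up for a human call
```

The point of the sketch is the shape, not the rules: intent becomes something software can check in milliseconds, so the human decision that remains is the genuinely ambiguous one.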

How AI can outmaneuver multi-domain precision warfare

If multi-domain precision warfare depends on integrated detection-to-strike chains, then the counter is disrupting those chains while accelerating your own local decisions. AI helps on both sides of that equation.

1) Build resilient, delegated kill chains

Defenders often overcorrect after a targeting mistake by centralizing approvals. That feels safe. It’s also slow.

A better approach is tiered delegation, where:

  • pre-cleared target sets are pushed down
  • authority expands as comms degrade
  • AI continuously checks proposed fires against constraints

Practical mechanisms (policy and tech together):

  • mission-type orders + preplanned authorities (who can strike what)
  • confidence-based routing (high-confidence targets can be approved locally; ambiguous ones escalate)
  • automatic deconfliction aids (airspace, maritime corridors, no-strike lists)

This is where AI decision support becomes operationally decisive: it reduces the staff work that normally forces commanders to centralize.
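Confidence-based routing with authority that expands as comms degrade can be sketched in a few lines. The thresholds and the function itself are hypothetical assumptions for illustration:

```python
def route_approval(confidence: float, comms_up: bool,
                   local_threshold: float = 0.90,
                   degraded_threshold: float = 0.75) -> str:
    """Confidence-based routing for a proposed strike (illustrative).

    High-confidence targets are approved locally; when comms degrade,
    the delegated band widens so units are not frozen by a dead link.
    """
    threshold = local_threshold if comms_up else degraded_threshold
    if confidence >= threshold:
        return "local"       # within pre-cleared authority: act now
    if comms_up:
        return "escalate"    # ambiguous track: send up for review
    return "hold"            # cut off AND uncertain: do not fire
```

Note the asymmetry: degraded comms loosen the local threshold but also remove the escalation option, which is exactly the policy tradeoff commanders must set in advance.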

2) Attack the “node-first” worldview with deception and EW

If the adversary treats platforms as sensing nodes, then sensor credibility becomes a center of gravity. AI-enabled deception increases the cost of trust.

Concrete examples of where AI can raise uncertainty:

  • multi-sensor decoys that create consistent signatures
  • anomaly generation that floods analysts with “plausible” tracks
  • emitter mimicry to trigger false targeting cycles
  • rapid pattern shifts that break model priors

The desired effect isn’t permanent blindness. It’s timing disruption: forcing re-checks, re-approval, and higher-echelon involvement.

3) Use AI to shorten decision cycles, not just improve intelligence

Most teams measure AI success with accuracy metrics. In conflict, the metric that matters is:

Time from detection to authorized action under uncertainty.

If AI improves identification by 5% but adds a human review layer that costs 10 minutes, you’re behind.

AI mission planning should be evaluated in wargame-like conditions:

  • comms intermittently down
  • contested GPS
  • incomplete tracks
  • collateral constraints
  • fast target decay

The test is simple: did you act faster with acceptable risk, or did you just produce nicer slides?

4) Make “informatization” survivable: degrade gracefully

China’s doctrine emphasizes informatization as the path to effectiveness. A smart defender assumes heavy disruption and builds graceful degradation:

  • local maps and cached tracks
  • mesh networking across small units
  • store-and-forward messaging
  • on-device models for classification and routing

This supports mission command culturally and technically. When comms break, units don’t freeze.
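Store-and-forward is the simplest of these mechanisms to illustrate. A minimal sketch, assuming a hypothetical outbox class with a time-to-live so stale tracks expire rather than mislead:

```python
from collections import deque

class StoreAndForward:
    """Store-and-forward outbox (illustrative): queue messages while the
    link is down, flush in order when it returns, and drop anything past
    its useful life -- an old track is worse than no track."""

    def __init__(self, ttl_seconds: float = 600.0):
        self.ttl = ttl_seconds
        self.outbox = deque()   # (timestamp, message) pairs, oldest first

    def send(self, msg: str, link_up: bool, now: float) -> list:
        """Queue a message; deliver the whole backlog if the link is up."""
        self.outbox.append((now, msg))
        return self.flush(link_up, now)

    def flush(self, link_up: bool, now: float) -> list:
        """Expire stale messages, then deliver the rest if the link is up."""
        while self.outbox and now - self.outbox[0][0] > self.ttl:
            self.outbox.popleft()           # too old to be actionable
        if not link_up:
            return []                       # keep holding; do not freeze
        delivered = [m for _, m in self.outbox]
        self.outbox.clear()
        return delivered
```

The design choice worth noting is the TTL: graceful degradation isn’t just buffering everything, it’s knowing which cached information is no longer safe to act on.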

A practical playbook for planners and security leaders

This topic attracts two kinds of readers: operators and builders. Both can use the same checklist.

Here’s what works when you’re preparing for an adversary that bets on centralized coordination.

For operational planners (and PME teams)

  • Run wargames that punish centralization. Force approval bottlenecks and see what breaks.
  • Push authorities down on day zero. Don’t wait for “the network to fail” to practice.
  • Measure cycle time ruthlessly. Track detection-to-decision-to-effects by echelon.
  • Practice operating “wrong.” Train for landing off-target, comms lost, and partial information.

For AI and defense technology teams

  • Design for delegation, not dashboards. The UI should support decisions, not reporting.
  • Build constraint engines. Codify intent, legal boundaries, and policy controls.
  • Prioritize offline-first capability. Assume denied, degraded, intermittent, and limited-bandwidth (DDIL) environments.
  • Test against deception. If your model can’t handle adversarial inputs, it will fail in the first week.

Where this leaves U.S. allies—and the AI opportunity

The Coming Wave is valuable because it makes a strong claim in playable form: modern war is a contest of connected systems, dominated by detection and strike. That’s directionally right.

But it also reveals the fault line: centralized doctrine struggles when the opponent decentralizes decisions and survives disruption. If China’s approach depends on speeding information to senior decision-makers, then mission command—supported by AI systems built for delegated authority—turns speed into a durable advantage.

For readers following our AI in Defense & National Security series, the next step is practical: audit your AI roadmap against mission command reality. Ask whether your tools help junior leaders act within intent when comms fail, targets decay, and ambiguity is high.

If your adversary’s doctrine is optimized for orchestration, the best counter is competent initiative—backed by AI that makes delegation safer, faster, and measurable. When the next wargame (or the next crisis) starts, who’s empowered to act first?