AI-Driven Army Training: What JPMRC Proves

AI in Defense & National Security · By 3L3C

JPMRC tested 75 technologies to speed decisions in contested warfare. Here’s what it reveals about AI, C2, drones, and real-world military readiness.

Army modernization · Military AI · Command and control · Drones and counter-UAS · Indo-Pacific security · Defense innovation

A two-week Army exercise in Hawaii ran 75 technology experiments across every U.S. service branch plus seven partner nations. That’s not a “cool demo.” That’s a signal that the U.S. military is treating modern warfare like a software problem: test, iterate, ship improvements, repeat.

The setting matters. The Joint Pacific Multinational Readiness Center (JPMRC) is built around the Indo-Pacific’s hardest realities—long distances, contested communications, maritime threat arcs, and multi-domain effects. And the Army’s most tech-forward formations—like the 25th Infantry Division—are using this environment to stress the full stack: drones, rockets, electronic warfare, command-and-control, logistics, power, and the human decision cycle.

For leaders working in AI in defense and national security, this exercise is a clean case study in where AI fits (and where it breaks): not as a single “autonomy” feature, but as the connective tissue between sensing, decisions, and fires—under pressure.

JPMRC shows the real point of “AI in training”: compressing the decision cycle

The headline isn’t that units brought drones or new rocket systems. The headline is that JPMRC is being used to shorten the time from observation to action, even when the network is degraded and the fight is spread across islands.

In the exercise scenario, soldiers defended an archipelago and attempted to retake islands seized by an opposing force. That framing is basically a requirements document for modern military AI:

  • Sense at range (unmanned reconnaissance, surveillance, and targeting)
  • Fuse information fast (shared picture across echelons)
  • Decide with incomplete data (human judgment under uncertainty)
  • Deliver effects (fires, EW, counter-UAS)
  • Sustain the force (power, comms, logistics)

When Army Vice Chief Gen. James Mingus talks about the network failing the moment units “cross the line of departure,” he’s describing a classic AI-and-C2 problem: the model is only as useful as the system that delivers it to the operator in time.

A useful way to describe AI in contested operations: “If it arrives late, it’s not intelligence—it’s history.”

Why “multi-domain” training is an AI requirement, not a buzzword

Maj. Gen. Jay Bartholomees emphasized that land forces have to train against threats that include naval forces, airpower, and long-range fires. From an AI perspective, multi-domain isn’t an organizational concept—it’s a data problem.

If your AI-enabled decision support tool can’t incorporate:

  • maritime tracks and air defense status,
  • electronic attack and jamming conditions,
  • friendly positioning and logistics constraints,

…then it will produce recommendations that are technically impressive and operationally wrong.
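
To make the data problem concrete, here’s a minimal sketch of what a fused decision context might look like. Every name and field below is invented for illustration; this is not a fielded Army schema.

```python
from dataclasses import dataclass, field

# Illustrative only: field names are invented, not a fielded Army data model.
@dataclass
class MaritimeTrack:
    track_id: str
    lat: float
    lon: float
    speed_kts: float
    classification: str          # e.g., "combatant", "merchant", "unknown"

@dataclass
class EwPicture:
    gps_degraded: bool = False
    comms_jammed: bool = False
    jammer_bearing_deg: float | None = None   # None = no estimate available

@dataclass
class DecisionContext:
    """What a recommendation must account for beyond the target itself."""
    maritime_tracks: list[MaritimeTrack] = field(default_factory=list)
    ew: EwPicture = field(default_factory=EwPicture)
    friendly_positions: list[tuple[float, float]] = field(default_factory=list)
    munitions_on_hand: dict[str, int] = field(default_factory=dict)
```

The point of the sketch: if the recommendation engine’s input type has no slot for the EW picture or the logistics state, no amount of model quality can put them back in.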

“Transformation in Contact” is an operating model—and it favors AI teams that can iterate

The Army’s Transformation in Contact initiative is the closest thing the service has to an institutional admission that traditional modernization timelines are too slow. Instead of waiting years for perfect requirements, it’s pushing equipment and software into operational units, gathering feedback, and fixing problems in weeks.

This changes who wins in defense tech.

If you build AI for national security, the winners won’t just have better models. They’ll have:

  • fast integration into messy networks,
  • training pipelines for soldiers,
  • clear fallback modes when systems fail,
  • and the ability to adjust to doctrine and policy friction.

A detail from the exercise is telling: new technology sped up targeting transmissions, but approvals sometimes still took around an hour because leaders were reluctant to assume risk on fires. Mingus called it a case of inserting tech without updating the process.

That’s the real battlefield problem:

  • AI can reduce sensor-to-shooter time, but
  • policy, authorities, and trust determine whether time is actually saved.

A practical lens: the three “clocks” that decide whether AI helps

If you’re evaluating AI-enabled military systems, track these three clocks:

  1. Compute clock: How fast can the system process, fuse, and recommend?
  2. Network clock: How fast can the result reach the right person—reliably?
  3. Authority clock: How fast can a human approve, coordinate, and execute?

Most programs obsess over clock #1 and then wonder why nothing changes in the field.
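
One way to keep all three honest is to instrument them separately. A minimal sketch, with invented timestamps that mirror the hour-long approvals described above:

```python
from dataclasses import dataclass

@dataclass
class DecisionTimeline:
    """Timestamps (seconds since sensor hit) for one sensor-to-shooter event."""
    sensed_at: float = 0.0
    recommendation_ready_at: float = 0.0   # compute clock ends here
    delivered_to_operator_at: float = 0.0  # network clock ends here
    approved_at: float = 0.0               # authority clock ends here

def clock_breakdown(t: DecisionTimeline) -> dict[str, float]:
    return {
        "compute": t.recommendation_ready_at - t.sensed_at,
        "network": t.delivered_to_operator_at - t.recommendation_ready_at,
        "authority": t.approved_at - t.delivered_to_operator_at,
    }

# The pattern JPMRC exposed: compute is fast, authority dominates.
event = DecisionTimeline(0.0, 1.0, 45.0, 3600.0)
print(clock_breakdown(event))
# {'compute': 1.0, 'network': 44.0, 'authority': 3555.0}
```

If you only ever report the first number, the program looks finished long before the force gets faster.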

Drones, loitering munitions, and launched effects: AI’s job is orchestration

JPMRC highlighted a future where divisions employ a mix of:

  • traditional tube artillery,
  • rockets like HIMARS,
  • loitering munitions,
  • one-way attack drones,
  • reconnaissance drones,
  • spoofing and electronic warfare drones.

This isn’t a single weapon shift. It’s a portfolio shift, and it creates a control problem: how do you coordinate dozens (or hundreds) of semi-expendable systems across terrain, weather, and jamming?

AI’s most valuable contribution here isn’t “a drone that flies itself.” It’s orchestration:

  • recommending tasking and routing under EW constraints,
  • deconflicting airspace and fires,
  • prioritizing targets dynamically,
  • predicting resupply and battery burn rates,
  • and detecting anomalies that indicate deception.
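
None of this requires exotic autonomy to see the shape of the problem. Here’s a deliberately toy sketch of one orchestration slice: greedy, value-ordered tasking that respects a jamming constraint. Every identifier and zone is invented.

```python
# Toy sketch: match loitering munitions to targets by value, skipping
# routes through known jamming. Illustrative only, not a real planner.

def assign_effects(targets, assets, jammed_zones):
    """targets: [(target_id, value, zone)]; assets: [(asset_id, zone_reach)]."""
    assignments = []
    available = list(assets)
    for target_id, value, zone in sorted(targets, key=lambda t: -t[1]):
        if zone in jammed_zones:
            continue  # route unusable under current EW picture; re-task later
        for asset in available:
            asset_id, reach = asset
            if zone in reach:
                assignments.append((asset_id, target_id))
                available.remove(asset)
                break
    return assignments

print(assign_effects(
    targets=[("T1", 9, "Z2"), ("T2", 5, "Z1"), ("T3", 7, "Z3")],
    assets=[("LM-1", {"Z1", "Z2"}), ("LM-2", {"Z3"})],
    jammed_zones={"Z1"},
))
# [('LM-1', 'T1'), ('LM-2', 'T3')]
```

A real system would add routing, timing, and re-tasking. The design point stands, though: the constraint (jamming) lives in the allocator, not in the airframe.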

Mingus referenced the daily artillery reality seen in Ukraine—4,000–5,000 rounds of 155mm per day and 130,000–150,000 rounds per month. The uncomfortable implication: even with drones everywhere, industrial-scale munitions still matter, and AI has to help allocate scarce high-end effects without starving the basics.

“Fail fast” is not a slogan—it’s how you find doctrine mismatches

25th ID’s artillery commander described a steep learning curve and the ability to “fail fast.” That’s exactly what you want when introducing AI-enabled targeting and fires support.

Because the biggest failures usually aren’t algorithmic. They’re mismatches between:

  • what the software assumes,
  • what doctrine permits,
  • what communications allow,
  • and what leaders will sign off on.

Next-generation command and control: the least flashy, most decisive AI battleground

The exercise points to a truth many procurement roadmaps underplay: command and control (C2) is where AI either becomes operationally decisive—or becomes a lab toy.

Mingus described an Army network that often doesn’t work well once units move. In contested environments, C2 has to hold up under:

  • intermittent connectivity,
  • jamming and spoofing,
  • bandwidth constraints,
  • and information overload.

That means AI-enabled C2 can’t rely on constant cloud access or perfect data. It must be designed for:

  • edge computing (local inference)
  • graceful degradation (useful even when partial)
  • data provenance (why you should trust the output)
  • human workflow alignment (how decisions actually happen)
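
As a sketch of what graceful degradation plus data provenance could mean in code (the model functions below are stand-ins, not real APIs):

```python
import time

def recommend_cloud(situation):      # stand-in for a full fused-picture model
    return f"cloud plan for {situation}"

def recommend_edge(situation):       # stand-in for a smaller on-device model
    return f"edge plan for {situation}"

def recommend(situation, cloud_reachable, cache):
    """Return (recommendation, provenance) so the operator knows what to trust."""
    if cloud_reachable:
        rec = recommend_cloud(situation)
        cache["last_rec"], cache["at"] = rec, time.time()
        return rec, "cloud model, live data"
    if situation is not None:
        return recommend_edge(situation), "edge model, local sensors only"
    if "last_rec" in cache:
        age = time.time() - cache["at"]
        return cache["last_rec"], f"cached result, {age:.0f}s stale"
    return None, "no basis for recommendation"   # degrade honestly, not silently
```

The design choice that matters is the provenance string: the operator always knows whether they’re looking at live inference, edge inference, or a stale cache.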

Cognitive overload is the hidden enemy of AI-enabled operations

One brigade commander described “cognitive overload” from all the new systems—then pointed to a paper map as the last-resort truth source when power goes out.

That’s not anti-tech sentiment. It’s a warning: if AI increases the number of dashboards without reducing decision friction, it will slow the force down.

Here’s what works in practice:

  • Fewer alerts, better prioritized (operators can’t action 100 notifications)
  • Confidence and uncertainty shown clearly (not buried in menus)
  • Recommended actions tied to constraints (range, authorities, collateral risk)
  • Offline-first workflows (operate when disconnected)
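
The first two items lend themselves to a sketch: filter alerts by confidence, rank by urgency, cap what’s shown. The thresholds and fields here are invented.

```python
# Sketch of "fewer alerts, better prioritized": surface only the top few,
# urgent-and-confident first. Thresholds are invented for illustration.

def triage(alerts, max_shown=5, min_confidence=0.6):
    """alerts: list of dicts with 'msg', 'confidence', 'seconds_to_act'."""
    actionable = [a for a in alerts if a["confidence"] >= min_confidence]
    # Lower score = more urgent per unit of confidence; rest stays in a log.
    ranked = sorted(actionable, key=lambda a: a["seconds_to_act"] / a["confidence"])
    return ranked[:max_shown]

shown = triage([
    {"msg": "UAS inbound", "confidence": 0.9, "seconds_to_act": 30},
    {"msg": "Possible jammer", "confidence": 0.4, "seconds_to_act": 600},
    {"msg": "Resupply late", "confidence": 0.95, "seconds_to_act": 7200},
])
for a in shown:
    print(a["msg"])
# UAS inbound
# Resupply late
```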

The best military AI UI is the one that disappears until it’s genuinely needed.

What defense leaders should copy from JPMRC (even outside the Army)

JPMRC isn’t just a military training story. It’s an adoption playbook for AI in national security organizations that operate under high stakes.

1) Treat exercises as product validation, not public relations

The Army is using realistic, stressful training to expose failures early. For AI programs, that means testing:

  • model performance under missing data,
  • behavior under jamming and deception,
  • latency from sensor to operator,
  • and operator trust under time pressure.
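
In software terms, that’s a test harness that degrades inputs on purpose. A minimal sketch, assuming a model that returns a recommendation and a confidence score:

```python
import random

# Sketch of exercise-style validation: degrade inputs the way JPMRC degrades
# networks, then check the system fails safely. The model is a stand-in.

def drop_fields(record, drop_rate, rng):
    return {k: v for k, v in record.items() if rng.random() > drop_rate}

def test_degrades_gracefully(model, records, drop_rate=0.3, seed=7):
    rng = random.Random(seed)
    for record in records:
        degraded = drop_fields(record, drop_rate, rng)
        rec, confidence = model(degraded)
        # The contract: with missing data, the system abstains or its
        # confidence drops -- never a confident answer from thin air.
        assert rec is None or confidence <= model(record)[1]

def dummy_model(record):             # illustrative stand-in
    conf = min(1.0, len(record) / 5)
    return ("engage" if conf > 0.5 else None), conf

test_degrades_gracefully(dummy_model, [{"a": 1, "b": 2, "c": 3, "d": 4, "e": 5}])
print("degradation contract held")
```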

2) Update policies and processes at the same pace as software

If approvals take an hour, it doesn’t matter if the model runs in one second. AI adoption requires a parallel modernization of:

  • authorities and escalation paths,
  • rules of engagement interpretations,
  • cross-domain coordination checklists,
  • and auditability requirements.

3) Build for “mixed fleets” and messy interoperability

The exercise included old gear, new gear, and partner-nation forces. That’s the reality. AI systems that assume uniformity will fail.

Design priorities should include:

  • modular data adapters,
  • open interfaces,
  • and the ability to operate with partial integration.
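
A minimal sketch of the adapter idea, with invented feed formats: normalize every feed into one common track shape at the boundary so nothing downstream sees partner-specific quirks.

```python
from collections.abc import Iterable
from typing import Protocol

# Illustrative only: both feed formats are invented for this sketch.

class TrackAdapter(Protocol):
    def to_common(self, raw: dict) -> dict: ...

class LegacyFeedAdapter:
    def to_common(self, raw: dict) -> dict:
        return {"id": raw["TRK_NUM"], "lat": raw["LAT"], "lon": raw["LON"]}

class PartnerFeedAdapter:
    def to_common(self, raw: dict) -> dict:
        lat, lon = raw["position"]
        return {"id": raw["uid"], "lat": lat, "lon": lon}

def fuse(feeds: Iterable[tuple[TrackAdapter, list[dict]]]) -> list[dict]:
    return [adapter.to_common(raw) for adapter, batch in feeds for raw in batch]

print(fuse([
    (LegacyFeedAdapter(), [{"TRK_NUM": "A1", "LAT": 21.3, "LON": -157.9}]),
    (PartnerFeedAdapter(), [{"uid": "p-7", "position": (20.8, -156.3)}]),
]))
# [{'id': 'A1', 'lat': 21.3, 'lon': -157.9}, {'id': 'p-7', 'lat': 20.8, 'lon': -156.3}]
```

Adding a new partner then means writing one adapter, not re-plumbing the fusion layer.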

4) Make resilience a first-class metric

If soldiers need to pull out a paper map when the power goes out, your AI solution should include an equivalent fallback mode:

  • cached maps and last-known tracks,
  • local models for basic prioritization,
  • and a “degraded mode” that still supports mission command.
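
A sketch of that digital paper map: a local store that keeps last-known tracks and labels staleness instead of pretending data is live. Names are illustrative.

```python
import time

# Sketch of a "paper map equivalent": cached last-known tracks with
# explicit staleness, usable when every upstream link is down.

class LastKnownPicture:
    def __init__(self):
        self._tracks = {}                       # track_id -> (data, timestamp)

    def update(self, track_id, data):
        self._tracks[track_id] = (data, time.time())

    def snapshot(self, stale_after_s=300):
        now = time.time()
        return {
            tid: {**data, "stale": (now - ts) > stale_after_s,
                  "age_s": round(now - ts)}
            for tid, (data, ts) in self._tracks.items()
        }

picture = LastKnownPicture()
picture.update("T1", {"lat": 21.4, "lon": -158.0})
print(picture.snapshot())
# {'T1': {'lat': 21.4, 'lon': -158.0, 'stale': False, 'age_s': 0}}
```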

The lead question for 2026: can the force stay fast when everything is contested?

JPMRC is a glimpse of where AI in defense and national security is headed: continuous experimentation, rapid iteration, and a blunt focus on speed—thinking faster, sustaining shared understanding longer, and acting before the adversary can adapt.

But there’s a hard trade hiding underneath the tech demos. The more sensors and effects you add, the more you risk overwhelming the humans who have to interpret, approve, and execute.

Organizations that get this right will treat AI as a discipline of decision engineering: aligning models, networks, authorities, and operator workflows into a single system that can fight through chaos.

If you’re responsible for AI strategy, acquisition, or operational integration, the question worth sitting with is simple: When your network degrades, does your AI make the team faster—or does it create one more thing to manage?