AI Fire Control: Fixing Air Defense Manning Gaps

AI in Defense & National Security | By 3L3C

AI fire control can reduce air defense staffing strain—if it handles real-time uncertainty, cyber risk, and fast human handoffs. Here’s what works.

AI-enabled fire control · air and missile defense · artillery modernization · human-machine teaming · military decision support · defense acquisition


Artillery and air defense units don’t lose time in minutes. They lose it in seconds. And seconds are exactly what the U.S. Army is trying to buy back by pushing AI deeper into fire control and engagement operations centers.

At the AUSA annual meeting, Maj. Gen. Frank Lozano (Program Executive Office Missiles and Space) laid out a blunt reality: the Army wants AI to help run artillery and air and missile defense operations—especially to reduce the manpower footprint and the cognitive load on crews—but the technology “is nowhere near what it needs to be” for the kind of spatial reasoning and real-time situational awareness these missions demand.

This post sits squarely in our AI in Defense & National Security series because it captures the real work of operational AI: not flashy demos, but systems that have to survive messy data, adversarial deception, cyber risk, and the hard requirement that humans remain accountable for lethal decisions.

Why the Army is pushing AI into artillery and air defense

The core driver is simple: the volume and speed of modern threats outpace human staffing models.

Air and missile defense increasingly means tracking many targets at once—cruise missiles, ballistic missiles, drones, loitering munitions, decoys, and mixed salvos designed to confuse sensors and saturate defenders. Artillery and long-range fires face their own version of this: more sensors, more target feeds, more potential aim points, and tighter timelines.

Lozano’s framing is the right one: the enemy won’t attack the way we planned. That’s not rhetoric—it’s an operational pattern we’ve watched repeatedly in recent conflicts where mass, deception, and cheap attritable systems are used to force defenders into decision overload.

The real bottleneck: human attention, not compute

Most organizations hear “AI in fire control” and assume it’s a targeting math problem. The math is mature. The bottleneck is the human team trying to:

  • Fuse sensor inputs that arrive late, incomplete, or contradictory
  • Keep track of what’s already been engaged and what’s still leaking through
  • Coordinate between shooters, radars, command posts, adjacent units, and higher HQ
  • Maintain rules of engagement and positive identification while everything is moving

AI’s most immediate value in these formations is attention management. If a system can triage tracks, flag anomalies, propose engagement sequences, and keep a clean operational picture—humans can make fewer decisions, but better ones.

What “AI-enabled fire control” actually means (and what it doesn’t)

AI-enabled fire control means software that helps crews detect, classify, prioritize, and recommend actions faster than humans can do manually—while keeping humans responsible for the final decision.

That definition matters because it separates realistic near-term deployments from sci‑fi autonomy. Today’s push is less “the AI fires the missile” and more:

  • AI helps build and maintain the common operational picture
  • AI recommends who should shoot what, when, given inventories and geometry
  • AI highlights uncertainty and requests additional sensing
  • AI reduces repetitive console work so fewer operators can run the same mission

A practical example: the “engagement operations center” problem

Engagement operations centers (EOCs) are where the fight becomes a workflow. Tracks come in. Correlations are made. Threats are prioritized. Engagement authority is confirmed. Weapons are assigned. Deconfliction happens. Battle damage assessment begins.

In an EOC, AI can support tasks that look mundane but decide outcomes:

  • Track correlation: deciding whether two sensor detections are the same object
  • Threat scoring: ranking based on trajectory, speed, type, target value, and confidence
  • Engagement sequencing: deciding the order of shots to maximize probability of kill under limited interceptors (sketched after this list)
  • Resource management: preventing “gold-plating” (using a high-end interceptor on a low-end drone)
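
To make the sequencing and resource-management steps concrete, here is a minimal Python sketch of a greedy shot-list builder. The threat scores, the Pk floor, and the cost tiers are illustrative assumptions, not a fielded algorithm; a real system would also fold in geometry, timelines, and doctrine.

    # Minimal sketch: greedy weapon-target pairing under limited interceptors.
    # Scores, Pk values, and cost tiers are illustrative, not doctrinal.
    from dataclasses import dataclass

    @dataclass
    class Threat:
        track_id: str
        score: float        # composite of trajectory, speed, type, target value, confidence

    @dataclass
    class Interceptor:
        name: str
        rounds: int
        cost_tier: int      # 1 = cheap effector, 3 = high-end missile
        pk: dict            # assumed probability of kill per track

    def sequence_engagements(threats, interceptors, pk_floor=0.7):
        """Order shots: highest-scored threats first, cheapest viable shooter for each."""
        plan = []
        for threat in sorted(threats, key=lambda t: t.score, reverse=True):
            viable = [i for i in interceptors
                      if i.rounds > 0 and i.pk.get(threat.track_id, 0.0) >= pk_floor]
            if not viable:
                plan.append((threat.track_id, "NO VIABLE SHOOTER -> escalate / request sensing"))
                continue
            shooter = min(viable, key=lambda i: i.cost_tier)   # avoid gold-plating
            shooter.rounds -= 1
            plan.append((threat.track_id, shooter.name))
        return plan

    threats = [Threat("T1", 0.9), Threat("T2", 0.4)]
    interceptors = [Interceptor("gun", 40, 1, {"T2": 0.8}),
                    Interceptor("missile", 4, 3, {"T1": 0.85, "T2": 0.9})]
    print(sequence_engagements(threats, interceptors))   # [('T1', 'missile'), ('T2', 'gun')]

The part worth copying is not the greedy loop but the explicit "no viable shooter" branch: the system should surface gaps rather than force a bad pairing.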

If you’ve ever watched a team handle a surge of alerts, you know why this matters: humans don’t scale linearly under stress. Past a threshold, more alerts produce worse decisions.

The hard part: real-time situational awareness and spatial reasoning

Lozano’s caution about current language models not doing spatial reasoning is well-placed, even if the broader point isn’t limited to LLMs.

Fire control and air defense require grounded, real-time, geometry-heavy reasoning:

  • Where is the target now, and where will it be?
  • What does each sensor actually see (and what are its blind spots)?
  • Which shooter has the right kinematics and timeline? (a minimal geometry check follows this list)
  • What’s the risk to friendly aircraft, civilian air routes, or adjacent fires?
  • How do we handle decoys and emissions control?
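
To show what "geometry-heavy reasoning" means in code, here is a minimal 2D Python sketch of a reachability check: can a given shooter reach a predicted intercept point before the target gets there? The constant-velocity target model, flat geometry, and the numbers are assumptions for illustration only.

    # Minimal 2D reachability check, the kind of geometry a fire-control aid
    # recomputes continuously. Assumes constant-velocity targets and flat earth.
    import math

    def can_engage(battery_pos, missile_speed, target_pos, target_vel,
                   reaction_time_s=8.0, horizon_s=180.0, step_s=0.5):
        """Walk the target's predicted track; return the first intercept point the
        shooter can reach in time (launch delay + fly-out <= target time of flight)."""
        t = step_s
        while t <= horizon_s:
            predicted = (target_pos[0] + target_vel[0] * t,
                         target_pos[1] + target_vel[1] * t)
            fly_out = math.dist(battery_pos, predicted) / missile_speed
            if reaction_time_s + fly_out <= t:
                return True, t, predicted
            t += step_s
        return False, None, None

    # Hypothetical numbers: target 40 km out, inbound at 250 m/s; interceptor at 1,000 m/s.
    ok, t, point = can_engage(battery_pos=(0.0, 0.0), missile_speed=1000.0,
                              target_pos=(40_000.0, 5_000.0), target_vel=(-250.0, 0.0))
    print(ok, round(t, 1), point)

Real fire control replaces every one of these simplifications with maneuvering targets, sensor error, and airspace deconfliction, which is exactly why the spatial-reasoning gap Lozano describes is hard.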

This is why the most credible architectures pair multiple AI techniques:

  • sensor fusion models for track management
  • anomaly detection for spoofing/jamming indicators
  • optimization for weapons-tasking
  • human-machine interface design to keep operators oriented
  • and, yes, language interfaces—but mostly for summarization and workflow assistance, not geometry

“Human in the loop” isn’t a slogan—it’s a systems requirement

The debate about human in/on/out of the loop often turns philosophical. For defense acquisition and operational commanders, it’s concrete:

  • Legal accountability: humans must remain responsible for lethal decisions
  • Operational resilience: crews must fight through degraded AI or comms loss
  • Adversarial reality: the enemy will try to manipulate model behavior

A workable near-term posture is what Lozano hinted at: AI runs continuous surveillance and proposes actions; humans approve, supervise, and can take over quickly.

That last clause—take over quickly—is where many programs stumble.

The hidden risk: “minimal manning” can create a new failure mode

Reducing manpower footprint is attractive. It’s also dangerous if it’s done before the human-machine system is ready.

Here’s the failure mode: AI carries the operational picture for hours, then a human has to re-enter the loop during a fast-changing engagement. If the interface doesn’t provide instant, trustworthy context, you get hesitation, wrong overrides, or blind acceptance.

A strong design principle for AI in air defense and long-range fires is:

If a human must be accountable, the system must be explainable at combat speed.

Not “explainable” in a research sense. Explainable as in: a crew member can look at the screen and immediately understand (one way to structure that payload is sketched after this list):

  • what the AI thinks is happening
  • how confident it is
  • what it recommends
  • what constraints it considered
  • what it might be missing
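
One way to enforce that standard is to make those five items first-class fields in every recommendation the system emits. The sketch below is a hypothetical Python schema and rendering, not any program-of-record format.

    # Hypothetical recommendation payload built for combat-speed reading.
    from dataclasses import dataclass, field

    @dataclass
    class Recommendation:
        assessment: str                                   # what the AI thinks is happening
        confidence: float                                 # 0..1, ideally calibrated
        action: str                                       # what it recommends
        constraints: list = field(default_factory=list)   # ROE, airspace, inventory it considered
        caveats: list = field(default_factory=list)       # what it might be missing
        provenance: list = field(default_factory=list)    # sensors and timestamps behind the call

    def render(rec: Recommendation) -> str:
        """Flatten to a few console lines an operator can absorb in seconds."""
        return "\n".join([
            f"ASSESS : {rec.assessment}  (conf {rec.confidence:.0%})",
            f"ACTION : {rec.action}",
            f"WITHIN : {'; '.join(rec.constraints) or 'none listed'}",
            f"CAVEAT : {'; '.join(rec.caveats) or 'none listed'}",
            f"SOURCE : {'; '.join(rec.provenance) or 'none listed'}",
        ])

    print(render(Recommendation(
        assessment="Track 0417 consistent with one-way attack UAS, inbound",
        confidence=0.82,
        action="Assign gun system; hold missile inventory",
        constraints=["ROE: hostile-act criteria met", "No friendly air within 10 km"],
        caveats=["Single-radar track; no EO/IR confirmation"],
        provenance=["Radar A, 14:02:11Z", "ESM bearing, 14:02:07Z"],
    )))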

What good human-machine teaming looks like

In my experience, the best operational AI doesn’t try to sound smart. It tries to be useful.

That usually means (a tiering-and-rollback sketch follows the list):

  1. Tiered recommendations (high-confidence vs. needs confirmation)
  2. Confidence with provenance (which sensors and timestamps drove the conclusion)
  3. Fast rollback (operators can revert to a simpler mode when things get weird)
  4. Training mode parity (the system behaves the same in exercises as in operations)
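
A minimal sketch of items 1 and 3 follows, assuming illustrative thresholds and mode names; the real values would come out of testing with crews, not a spreadsheet.

    # Illustrative tiering and rollback logic; thresholds are assumptions.
    def tier(confidence: float, sensors_reporting: int) -> str:
        """Route a recommendation by confidence and provenance completeness."""
        if confidence >= 0.9 and sensors_reporting >= 2:
            return "HIGH-CONFIDENCE"        # queued for one-touch approval
        if confidence >= 0.6:
            return "NEEDS CONFIRMATION"     # shown with provenance, review required
        return "ADVISORY ONLY"              # displayed, never queued

    class Console:
        """Operators can drop to a simpler mode the moment things get weird."""
        def __init__(self):
            self.mode = "ASSISTED"          # AI triage plus recommendations

        def rollback(self):
            self.mode = "BASIC"             # raw tracks and manual workflows only

    console = Console()
    print(tier(0.93, 3), tier(0.70, 1))     # HIGH-CONFIDENCE NEEDS CONFIRMATION
    console.rollback()
    print(console.mode)                     # BASIC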

If vendors can’t show this in realistic scenarios, “minimal manning” becomes “minimal margin for error.”

What industry can build now: a near-term roadmap that actually ships

The Army’s ask to industry—close the gap between what’s possible and what exists—should be interpreted as a call for incremental operational wins, not a single monolithic AI brain.

Here’s a roadmap that can deliver value within typical program cycles.

1) Start with decision support, not autonomy

First deployments should focus on triage, prioritization, and summarization.

Concrete deliverables:

  • automatic track de-duplication and alert reduction (see the sketch after this list)
  • recommended engagement sequences with constraints visible
  • “what changed in the last 60 seconds” summaries for shift turnover
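
As a sketch of the first deliverable, here is gate-based de-duplication in Python: merge detections that are close in space and time so one object raises one alert. The fixed gates are assumptions; production trackers use covariance-aware association rather than hard thresholds.

    # Minimal gate-based de-duplication. Gate sizes are illustrative assumptions.
    import math

    def deduplicate(detections, distance_gate_m=500.0, time_gate_s=2.0):
        """detections: dicts with 'x', 'y' in meters and 't' in seconds.
        Returns one representative detection per merged group."""
        groups = []
        for det in sorted(detections, key=lambda d: d["t"]):
            for group in groups:
                last = group[-1]
                near = math.dist((det["x"], det["y"]), (last["x"], last["y"])) <= distance_gate_m
                recent = abs(det["t"] - last["t"]) <= time_gate_s
                if near and recent:
                    group.append(det)
                    break
            else:
                groups.append([det])
        return [group[0] for group in groups]

    raw = [{"x": 10_000, "y": 2_000, "t": 0.0},   # radar A
           {"x": 10_300, "y": 2_100, "t": 0.8},   # radar B, same object
           {"x": 55_000, "y": 9_000, "t": 1.0}]   # different object
    print(len(raw), "detections ->", len(deduplicate(raw)), "alerts")   # 3 detections -> 2 alerts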

These are high-impact and don’t require handing lethal authority to the model.

2) Engineer for contested data from day one

Operational systems won’t get clean data. They’ll get:

  • jamming, spoofing, intermittent sensor dropouts
  • degraded comms and time sync issues
  • conflicting reports across echelons

AI for national security has to be built with degradation behavior specified upfront (one way to make that explicit is sketched after these questions):

  • When confidence drops below X, what does the system do?
  • How does it signal uncertainty?
  • What’s the safe mode?
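
One way to make that explicit is a declarative policy that maps confidence bands to behavior, reviewed and signed off before fielding. The thresholds and mode names below are illustrative assumptions.

    # Illustrative degradation policy: confidence bands mapped to behavior.
    DEGRADATION_POLICY = [
        (0.85, "RECOMMEND: queue engagement recommendation for operator approval"),
        (0.60, "FLAG: show track with an explicit uncertainty banner; request cueing from another sensor"),
        (0.00, "SAFE MODE: no recommendations; pass raw tracks to the operator and log the gap"),
    ]

    def behavior_for(confidence: float) -> str:
        for floor, behavior in DEGRADATION_POLICY:
            if confidence >= floor:
                return behavior
        return DEGRADATION_POLICY[-1][1]

    for c in (0.91, 0.70, 0.20):
        print(c, "->", behavior_for(c))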

3) Treat cybersecurity as part of the model, not an add-on

AI-enabled fire control expands the attack surface. That’s unavoidable. The only question is whether programs bake in protection early.

High-priority security requirements include:

  • model and data supply-chain controls (who touched the training data and updates)
  • hardening against data poisoning and prompt manipulation
  • audit logging for recommendations and operator actions (a hash-chained example follows this list)
  • network segmentation and zero-trust patterns around sensor feeds
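
For the audit-logging requirement, a hash-chained log is one simple pattern that makes silent edits detectable: each record's hash covers the previous record's hash. The field names below are assumptions; a real program would add signing, time sync, and protected storage.

    # Minimal tamper-evident audit log via hash chaining.
    import hashlib, json, time

    class AuditLog:
        def __init__(self):
            self.records = []
            self._last_hash = "GENESIS"

        def append(self, event_type, payload):
            record = {"ts": time.time(), "type": event_type,
                      "payload": payload, "prev_hash": self._last_hash}
            record["hash"] = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            self.records.append(record)
            self._last_hash = record["hash"]

        def verify(self):
            prev = "GENESIS"
            for rec in self.records:
                body = {k: v for k, v in rec.items() if k != "hash"}
                expected = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                if body["prev_hash"] != prev or expected != rec["hash"]:
                    return False
                prev = rec["hash"]
            return True

    log = AuditLog()
    log.append("AI_RECOMMENDATION", {"track": "T-0417", "action": "assign gun system"})
    log.append("OPERATOR_ACTION", {"track": "T-0417", "decision": "approved"})
    print(log.verify())   # True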

For the AI in Defense & National Security audience, this is the bridge point that gets overlooked most: AI modernization and cybersecurity modernization are the same project.

4) Prove it in exercises that stress the human team

If your test event doesn’t overload operators, it won’t predict combat.

Good evaluation looks like:

  • mixed salvos and decoys
  • sudden comms degradation
  • simultaneous higher-HQ tasking and local engagement demands
  • long-duration operations that test fatigue and shift turnover

The metric shouldn’t just be “probability of kill.” It should also include measures that can be computed straight from exercise logs (see the sketch after this list):

  • time-to-decision under load
  • operator error rate
  • alert volume reduction
  • recovery time when the AI is wrong
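
These numbers can be pulled from instrumented exercise logs rather than assessed subjectively. The sketch below assumes a hypothetical event schema; the point is that time-to-decision, error rate, and recovery time fall straight out of the data.

    # Minimal metric extraction from exercise logs (hypothetical event schema).
    events = [
        # (seconds into exercise, event type, decision correct?)
        (12.0, "alert", None), (19.5, "decision", True),
        (31.0, "alert", None), (33.0, "decision", False),
        (40.0, "ai_error_detected", None), (55.0, "recovered", None),
    ]

    decisions = [(t, ok) for t, kind, ok in events if kind == "decision"]

    # Time-to-decision under load: pair each decision with the most recent alert.
    latencies = []
    for t_dec, _ in decisions:
        prior = [t for t, kind, _ in events if kind == "alert" and t < t_dec]
        if prior:
            latencies.append(t_dec - max(prior))

    time_to_decision = sum(latencies) / len(latencies)
    error_rate = sum(1 for _, ok in decisions if ok is False) / len(decisions)
    recovery = (next(t for t, k, _ in events if k == "recovered")
                - next(t for t, k, _ in events if k == "ai_error_detected"))

    print(f"time-to-decision {time_to_decision:.1f}s | "
          f"operator error rate {error_rate:.0%} | recovery {recovery:.0f}s")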

“People also ask”: common questions about AI in air defense

Will AI replace air defense soldiers and artillery crews?

No. The Army’s stated aim is reducing cognitive load and manpower footprint, not eliminating people. Crews still own the mission, and humans remain accountable for lethal force.

Why can’t current AI just do this already?

Because operational fire control requires real-time, spatially grounded reasoning with adversarial interference. Most commercial AI is trained and validated for different conditions.

What’s the fastest way to get value from AI in fire control?

Deploy decision support first: better track management, prioritization, and recommended engagement options, paired with interfaces that keep humans oriented.

What this signals for 2026 defense AI programs

The Army’s message is both ambitious and refreshingly honest: they want AI to help man artillery and air defense units, but they’re not pretending current tech is ready to run the fight unattended.

For program offices, primes, and non-traditional vendors, the opportunity is clear: build systems that make crews faster without making them dependent. That means AI that is robust under attack, transparent under pressure, and designed around how humans actually operate at 2 a.m. on hour twelve of a shift.

If you’re working on AI for defense operations—fires, air and missile defense, ISR fusion, or mission planning—the next step is to pressure-test your approach against three questions: What happens when the data lies? What happens when the network breaks? What happens when a new operator takes over mid-fight?

Those answers are where leads turn into deployments.
