PacFleet’s push for speed is an AI use case. Learn how AI accelerates C2, autonomy, and concept development inside the weapons engagement zone.

AI-Driven Speed: What PacFleet’s Rush Teaches
Speed is now a fleet-level requirement, not a nice-to-have. When Adm. Steve Koehler described Pacific Fleet as “building the airplane while we fly it,” he wasn’t offering a clever metaphor—he was describing a deliberate operating model for the Indo-Pacific, where contested communications, long distances, and fast-moving threats punish slow decision cycles.
For leaders working in AI in defense & national security, PacFleet’s message is unusually clear: capability delivery and concept development have to happen in parallel. That’s exactly where AI belongs—not as a science project bolted onto legacy processes, but as the engine that compresses timelines for analysis, planning, testing, and field reconfiguration.
This post takes PacFleet’s urgency and turns it into a practical blueprint: where AI can accelerate modernization, what “good” looks like in a weapons engagement zone, and what defense organizations (and industry partners) should build next if they want to matter in 2026.
Why PacFleet is optimizing for speed (and why you should care)
PacFleet is optimizing for speed because the Indo-Pacific fight is defined by time-to-sense, time-to-decide, and time-to-act—often under degraded networks and with adversaries actively trying to blind, jam, and confuse.
Koehler’s focus on “operations inside the weapons engagement zone” implies two hard truths:
- You won’t get perfect information. Decision advantage comes from better cycles, not flawless data.
- You won’t have contractor-dependent logistics. Forces need to adapt systems during operations.
That’s why the PacFleet “rush” matters to anyone building AI-enabled defense capabilities. It forces a shift from:
- “Field a platform, then write doctrine” → “Evolve concepts while fielding capability”
- “Centralized authority for changes” → “Edge authority for configuration and repair”
- “AI for dashboards” → “AI for operational tempo and resilience”
If you’re a program office, a prime, a systems integrator, or a startup, the takeaway is blunt: AI that doesn’t survive real operational constraints won’t survive procurement.
Where AI actually accelerates modernization: three high-impact lanes
AI accelerates military modernization when it compresses cycle time in places humans can’t scale—pattern discovery, option generation, and decision support under uncertainty.
Here are three lanes where PacFleet’s direction maps cleanly to practical AI deployments.
1) Command and control: decision advantage beats “more data”
PacFleet is already using AI for data analysis and wants to expand AI to “enhance command and control” and speed the action cycle between maneuver and fires. That should immediately steer teams toward C2 workflows, not just analytics tools.
High-value AI patterns for command and control include:
- Multi-INT correlation: fusing ISR, cyber indicators, space-based cues, and EW telemetry into coherent tracks
- Anomaly detection: spotting “something changed” faster than watchstanders can
- Course of action generation: producing ranked response options with assumptions and risks
- Decision traceability: capturing why a recommendation was made, with provenance and confidence
A practical design rule I’ve found useful: AI should shorten the time from first alert to commander action by minutes, not decorate a common operating picture. If it doesn’t change tempo, it doesn’t change outcomes.
2) Autonomous and unmanned systems: adaptation matters more than autonomy
PacFleet’s emphasis on operating inside the weapons engagement zone pairs naturally with unmanned maritime systems and distributed sensing. But the real differentiator won’t be “how autonomous” the vehicle is—it will be how quickly it can be re-tasked, reconfigured, and trusted.
AI can help unmanned systems in four concrete ways:
- Mission-level autonomy (bounded): execute a route, search pattern, or patrol plan with constraints
- Perception and classification: identify objects, behaviors, and maritime patterns at the edge
- Collaborative behaviors: swarm-like coordination where losing nodes doesn’t collapse the mission
- Deception resilience: detecting spoofing, sensor manipulation, and adversarial patterns
The procurement implication: organizations should prioritize AI-enabled mission reconfiguration and degraded-mode operation over flashy “full autonomy” demos that assume perfect comms.
3) Concept development: AI-enabled rehearsal is the new doctrine engine
PacFleet is pairing experimentation with rehearsal to build “new concepts of operation.” That’s a signal that the old model—publish doctrine, then train to it—can’t keep up.
AI can speed concept development through:
- Scenario modeling and simulation: generating thousands of plausible vignettes instead of a handful
- Wargame augmentation: surfacing tactics that humans miss, then letting humans validate them
- Red-team automation: faster, more consistent “enemy” behaviors across rehearsals
- After-action analytics: extracting what worked, what didn’t, and which variables mattered most
A simple, quotable way to frame it: The most valuable AI for operating concepts doesn’t predict the future—it stress-tests your assumptions at scale.
“Building the airplane while we fly it” only works with guardrails
PacFleet’s approach is right—but it’s also risky if teams confuse speed with recklessness. “Faster” is only an advantage if trust, safety, and cyber resilience keep pace.
Here are the guardrails that make rapid AI adoption viable in operational units.
Guardrail 1: Define decision rights—what AI can do vs. advise
The fastest programs clearly separate:
- Decision support: AI recommends; humans decide
- Decision automation: AI decides within strict constraints (time, geography, rules, confidence)
A workable model for contested environments is automation with abort authority: AI executes bounded actions, and humans can stop or override quickly.
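To make that model concrete, here is a minimal Python sketch of "automation with abort authority." All names (`Constraints`, `BoundedExecutor`, the confidence floor) are hypothetical illustrations, not a real system's API; the point is the shape: explicit bounds, a defer-to-human path, and an operator kill switch.

```python
from dataclasses import dataclass

@dataclass
class Constraints:
    """Explicit bounds inside which automated action is permitted."""
    min_confidence: float  # below this floor, the system defers to a human

class BoundedExecutor:
    """Sketch of automation with abort authority: bounded automated
    execution that an operator can stop at any time."""

    def __init__(self, constraints: Constraints):
        self.constraints = constraints
        self.aborted = False  # operator-controlled kill switch

    def abort(self) -> None:
        """Human override: halt all further automated execution."""
        self.aborted = True

    def execute(self, action: str, confidence: float) -> str:
        if self.aborted:
            return f"HALTED: {action} (operator abort)"
        if confidence < self.constraints.min_confidence:
            return f"DEFERRED: {action} (confidence below floor, human decides)"
        return f"EXECUTED: {action}"

ex = BoundedExecutor(Constraints(min_confidence=0.8))
print(ex.execute("retask sensor", confidence=0.92))  # EXECUTED
ex.abort()
print(ex.execute("retask sensor", confidence=0.92))  # HALTED
```

The design choice worth copying is that the abort path is checked first, before any constraint logic: the human override always wins.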
Guardrail 2: Prove performance in degraded networks
Indo-Pacific operations assume contested comms. So AI has to work when:
- bandwidth is throttled,
- links drop,
- GPS is degraded,
- cloud access disappears.
That forces an architectural stance: edge AI with intermittent synchronization, not cloud-only inference.
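That stance can be sketched in a few lines: score locally on the hot path, queue results, and flush only when a link exists. The `EdgeNode` class and its fields are invented for illustration, assuming a simple speed-based score as a stand-in for a real on-platform model.

```python
from collections import deque

class EdgeNode:
    """Sketch of edge inference with intermittent synchronization:
    infer locally, queue results, sync only when comms permit."""

    def __init__(self):
        self.outbox = deque()  # results awaiting an uplink
        self.link_up = False   # set by the comms layer, not by inference

    def infer(self, track: dict) -> float:
        # Stand-in for an on-platform model: no cloud call on the hot path.
        score = min(1.0, track["speed_kts"] / 50.0)
        self.outbox.append({"id": track["id"], "score": score})
        return score

    def sync(self) -> int:
        """Push queued results when a link exists; otherwise keep them local."""
        if not self.link_up:
            return 0
        sent = len(self.outbox)
        self.outbox.clear()
        return sent

node = EdgeNode()
node.infer({"id": "T1", "speed_kts": 40})
node.infer({"id": "T2", "speed_kts": 10})
print(node.sync())  # 0: link down, nothing leaves the platform
node.link_up = True
print(node.sync())  # 2: link restored, backlog drains
```

Note that `infer` never touches the network: losing the uplink degrades reporting, not sensing.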
Guardrail 3: Treat data as a weapons system
Operational AI fails most often for boring reasons: mislabeled data, broken pipelines, unclear permissions, and inconsistent formats across commands and allies.
If you want AI-enabled command and control to scale, you need:
- governed data products (who owns it, who certifies it, who can use it)
- mission-specific feature stores (not one monolithic “data lake”)
- logging and audit trails for models used in operational contexts
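The last bullet, audit trails for operational models, can be as simple as a structured record per inference. This is a minimal sketch with hypothetical field names; hashing the inputs gives provenance without retaining raw operational data in the log.

```python
import hashlib
import json
import time

def audit_record(model_id: str, model_version: str,
                 inputs: dict, output, operator: str) -> dict:
    """One audit-trail entry for a model used in an operational context:
    who ran which model version, on what data, with what result."""
    payload = json.dumps(inputs, sort_keys=True).encode()
    return {
        "ts": time.time(),
        "model": f"{model_id}:{model_version}",
        # digest, not raw data: provenance without a second copy of the inputs
        "input_digest": hashlib.sha256(payload).hexdigest(),
        "output": output,
        "operator": operator,
    }

rec = audit_record("track-classifier", "2.3.1",
                   {"track_id": "T-104"}, "FRIENDLY", "watch-2")
print(rec["model"])  # track-classifier:2.3.1
```

Keying the digest on a canonical JSON serialization (`sort_keys=True`) means the same inputs always produce the same digest, which is what makes the trail useful for after-action review.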
Guardrail 4: Bake cyber and adversarial ML into the test plan
An AI model that performs well in a lab and fails under deception is worse than useless—it creates false confidence.
Operational test plans should include:
- adversarial inputs (spoofed tracks, manipulated imagery, synthetic signals)
- model inversion and data leakage checks
- fallback modes when confidence drops below threshold
If a vendor can’t explain how their system behaves under active manipulation, they’re not ready for the weapons engagement zone.
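One cheap, concrete example of an adversarial-input check for spoofed tracks is a kinematic plausibility test: reject position updates that imply impossible speed for the platform class. The function and thresholds below are illustrative assumptions, not a fielded filter, and the latitude-to-distance conversion is deliberately rough.

```python
def plausible(track_prev: dict, track_now: dict,
              dt_s: float, max_kts: float = 60.0) -> bool:
    """Kinematic sanity check on a track update: does the implied speed
    stay within what the platform class can physically do?"""
    # ~60 nautical miles per degree of latitude (rough, for illustration)
    dist_nm = abs(track_now["lat"] - track_prev["lat"]) * 60.0
    speed_kts = dist_nm / (dt_s / 3600.0)
    return speed_kts <= max_kts

# A surface track drifting 0.01 deg in a minute (~36 kts) passes;
# one "teleporting" a full degree in a minute (~3600 kts) is flagged.
print(plausible({"lat": 21.30}, {"lat": 21.31}, dt_s=60))  # True
print(plausible({"lat": 21.30}, {"lat": 22.30}, dt_s=60))  # False
```

A real pipeline would layer many such checks; the value of even this crude one is that it fails loudly on manipulation instead of silently ingesting it.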
The “right to repair” is an AI requirement, not a maintenance slogan
Koehler’s call for sailors to have the confidence and authority to install new parts—and to reconfigure unmanned systems “during the fight”—is one of the most important points in his remarks.
Here’s why: AI-enabled systems are never really “finished.” They require updates to:
- models (new tactics, new signatures),
- data (new sensors, new allies),
- workflows (new concepts of operation),
- configurations (new payloads, comms plans, autonomy constraints).
If reconfiguration is contractor-gated, your operational tempo is contractor-limited.
Practical steps that support the right-to-repair in AI systems:
- Modular software components with clear interfaces (swap algorithms without rewriting the stack)
- On-platform test harnesses so crews can validate changes quickly
- Signed update packages for security without locking out operators
- Field-level configuration tools designed for sailors, not software engineers
This is also where acquisition gets real: programs should require operator-configurable autonomy profiles as a deliverable, not an afterthought.
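The "signed update packages" bullet above can be sketched in a few lines. Real systems would use asymmetric code signing; HMAC keeps this illustration small, and the key and payload names are hypothetical. The point is the workflow: the crew can apply the change themselves, but only if the package verifies.

```python
import hashlib
import hmac

def verify_update(package: bytes, signature: str, key: bytes) -> bool:
    """Check a package's signature before a crew applies it.
    HMAC stands in here for a real asymmetric signing scheme."""
    expected = hmac.new(key, package, hashlib.sha256).hexdigest()
    # constant-time comparison avoids leaking signature bytes via timing
    return hmac.compare_digest(expected, signature)

key = b"unit-provisioning-key"        # hypothetical provisioning secret
pkg = b"autonomy-profile: patrol-v7"  # hypothetical config payload
sig = hmac.new(key, pkg, hashlib.sha256).hexdigest()

print(verify_update(pkg, sig, key))        # True: intact package installs
print(verify_update(pkg + b"x", sig, key)) # False: tampered package rejected
```

Security and operator autonomy are not in tension here: signing protects the supply chain while leaving the install decision at the edge.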
What leaders should do next: a 90-day plan for AI-enabled speed
If you’re trying to align to PacFleet’s direction—whether you’re inside government or supporting it—this is a realistic 90-day plan that produces evidence, not PowerPoint.
- Pick one operational bottleneck (e.g., track correlation, target cueing, tasking unmanned sensors).
- Define a measurable tempo metric (minutes saved, handoffs reduced, false alarms reduced).
- Run a degraded-mode trial (no cloud dependency, limited bandwidth, intermittent comms).
- Instrument everything (logs, confidence, overrides, decision latency).
- Red-team the model (spoofing, adversarial examples, operator misuse).
- Deliver a “field kit” (training, configuration guide, rollback plan, support concept).
Most organizations get stuck because they try to start with a grand architecture. Start with one choke point, prove speed, then expand.
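The "define a measurable tempo metric" and "instrument everything" steps can start as small as this sketch: record alert and action timestamps per event and summarize the latency. The class name and API are assumptions for illustration; any structured log that captures the same pairs would do.

```python
import statistics

class TempoMeter:
    """Minimal instrumentation for a tempo metric: seconds from first
    alert to commander action, summarized per trial run."""

    def __init__(self):
        self._alert_ts = {}     # event id -> alert timestamp (seconds)
        self.latencies_s = []   # one entry per completed alert->action pair

    def alert(self, event_id: str, ts: float) -> None:
        self._alert_ts[event_id] = ts

    def action(self, event_id: str, ts: float) -> None:
        start = self._alert_ts.pop(event_id, None)
        if start is not None:  # ignore actions with no recorded alert
            self.latencies_s.append(ts - start)

    def median_latency_s(self) -> float:
        return statistics.median(self.latencies_s)

meter = TempoMeter()
meter.alert("trk-1", ts=0.0);  meter.action("trk-1", ts=180.0)   # 3 min
meter.alert("trk-2", ts=10.0); meter.action("trk-2", ts=310.0)   # 5 min
print(meter.median_latency_s())  # 240.0
```

Medians beat means here because a single stuck event shouldn't mask a real tempo improvement across the rest of the trial.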
The bigger series theme: AI is becoming the operating system for deterrence
Across the AI in Defense & National Security series, one theme keeps showing up: AI isn’t just a tool for intelligence analysis or cybersecurity. It’s becoming the operating system that connects sensing, decision-making, and action across domains.
PacFleet’s posture makes that tangible. When speed, persistence, and weapons engagement zone operations become the baseline, AI stops being optional—and starts being the only scalable way to keep commanders ahead of the problem.
If your organization is building AI for mission planning, autonomous systems, or command and control, the question to ask going into 2026 is simple: Does your AI reduce decision time in the worst conditions, or only in the best ones?
If you want to pressure-test an AI concept against real operational constraints—degraded networks, coalition data sharing, cyber threats, and the need for on-the-spot reconfiguration—I’m happy to compare notes on what an evaluation plan should look like and what to demand from vendors before you commit.