PacFleet is moving fast on AI, unmanned systems, and new operating concepts. Here’s how to field AI quickly without losing trust, safety, or readiness.

PacFleet’s AI Sprint: Faster Concepts, Safer Fielding
Adm. Steve Koehler didn’t sugarcoat what Pacific Fleet is doing: “We’re building the airplane while we fly it.” That line lands because it describes a shift many defense teams still resist—treating capability development as a continuous operational activity, not a slow pre-war project.
For the Indo-Pacific, that pace isn’t a nice-to-have. It’s survival. The operational reality is long distances, contested communications, and a weapons engagement zone where forces can’t assume sanctuary for logistics, maintenance, or decision-making. PacFleet’s message is blunt: speed, persistence, and the ability to operate inside the threat ring are the new baseline.
This post is part of our AI in Defense & National Security series, and PacFleet is a perfect case study. Not because “AI will solve everything,” but because AI is one of the only tools that can compress decision cycles, accelerate experimentation, and reduce the cost of being wrong—as long as you build the right guardrails.
Why PacFleet’s “build while flying” approach is rational
PacFleet is acting like a force that expects contact. That’s the point. When Koehler calls for rapid acquisition and new operating concepts at the same time, he’s recognizing a core truth: capability and concept are inseparable. If you field new unmanned systems, new networks, or AI-enabled tools without updating doctrine and training, you’re just buying complexity.
There are three drivers behind the urgency:
- The Indo-Pacific is a time-distance problem. Reinforcement and resupply take time; you must fight with what you brought.
- The threat is systems-level. It’s not one missile or one platform; it’s sensors + shooters + jammers + cyber.
- Tech cycles now outpace acquisition cycles. Commercial AI models, autonomy stacks, and sensors iterate in months, not programs-of-record timelines.
Koehler’s framing—capabilities “designed and built to enable operations inside the weapons engagement zone”—is essentially a requirements statement for resilient, degraded-mode operations: local autonomy, local repair, and local decision support.
The hidden implication: experimentation has become operational readiness
Most organizations treat experimentation as a side hustle—nice demos, pilot programs, and a slide deck. PacFleet is describing something harder: experimentation paired with rehearsal in exercises so new capabilities and new concepts get stress-tested where it counts.
In practice, that means:
- You don’t just test whether a model works—you test whether sailors will trust it under pressure.
- You don’t just validate a new unmanned system—you validate how it changes command-and-control.
- You don’t just add a data tool—you confirm it survives contested networks and messy inputs.
If you’re a defense leader or industry partner, the lesson is uncomfortable but useful: your “prototype” is now part of the readiness pipeline. Treat it that way.
Where AI actually fits: compressing the decision cycle without breaking trust
PacFleet is already using AI for data analysis and intends to expand AI use for command and control and faster maneuver-to-fires cycles. That ambition is reasonable—if AI is treated as decision support with measurable performance, not as a magic co-commander.
Here’s the clean way to say it:
In maritime operations, AI is most valuable when it reduces time-to-understanding, not when it replaces accountability.
AI for maritime C2: what “good” looks like
For command-and-control in contested environments, AI earns its keep in four places:
- Sensor fusion and track management (correlating radar, EO/IR, SIGINT, AIS, and acoustic cues)
- Anomaly detection (flagging unusual patterns across shipping, emissions, or movement)
- Course-of-action generation (presenting ranked options with assumptions and risks)
- Workflow automation (brief building, reporting, watch turnover, and knowledge retrieval)
The goal isn’t to “automate the commander.” The goal is to reduce cognitive load so humans can make better calls faster.
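To make "anomaly detection" concrete, here's a minimal sketch in Python of the simplest version of that idea: flagging vessel tracks whose speed sits far outside the fleet baseline. The function name, the speed-only feature, and the sigma threshold are all illustrative assumptions; a real system would fuse many features with far better statistics.

```python
import statistics

def flag_anomalies(speeds: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of tracks whose speed deviates more than
    `threshold` standard deviations from the population baseline."""
    mean = statistics.fmean(speeds)
    sd = statistics.pstdev(speeds)
    if sd == 0:
        return []  # no variation, nothing to flag
    return [i for i, s in enumerate(speeds) if abs(s - mean) / sd > threshold]
```

The point of even a toy like this is the output contract: it flags tracks for a watchstander to look at; it does not classify them.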
Speed vs. precision: how to avoid the false choice
Most companies get this wrong by treating speed and precision like a tradeoff you must accept. The smarter approach is to separate “fast” from “final.”
- Fast layer: AI produces an initial assessment quickly (with confidence ranges and missing-data flags).
- Verification layer: humans and secondary tools validate or override.
- Audit layer: decisions and model outputs are logged for learning and accountability.
This layered approach is how you move quickly and avoid reckless automation. It’s also how you build confidence over time—because sailors can see when the model is right, when it’s wrong, and why.
AI-driven operating concepts: model the fight before you rehearse it
PacFleet’s focus on “new concepts of operation” is where AI can be especially practical. Before you put ships, aircraft, and unmanned systems into an exercise, you want to explore thousands of plausible permutations cheaply.
AI-supported concept development usually starts with three tools:
1) Scenario generation at scale
Generative models can produce a large set of structured vignettes:
- Different threat postures and rules of engagement
- Communications degradations
- Deception and decoys
- Logistics failures
This is not about writing sci-fi. It’s about pressure testing assumptions you didn’t realize you had.
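The vignette space above can be sampled mechanically before any generative model gets involved. A minimal sketch — the factor lists and field names are hypothetical placeholders for whatever a planning team actually parameterizes:

```python
import itertools
import random

THREAT_POSTURES = ["passive", "shadowing", "active"]
COMMS = ["full", "intermittent", "denied"]
DECEPTION = [False, True]
LOGISTICS = ["nominal", "fuel_short", "resupply_cut"]

def generate_vignettes(n: int, seed: int = 0) -> list[dict]:
    """Sample n distinct structured vignettes from the full
    permutation space of planning factors."""
    rng = random.Random(seed)  # seeded for repeatable exercises
    space = list(itertools.product(THREAT_POSTURES, COMMS, DECEPTION, LOGISTICS))
    picks = rng.sample(space, min(n, len(space)))
    return [
        {"threat": t, "comms": c, "deception": d, "logistics": l}
        for t, c, d, l in picks
    ]
```

A generative model's job then becomes fleshing each structured vignette into a narrative scenario — the structure keeps coverage honest, so you don't accidentally rehearse the same comfortable assumptions fifty times.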
2) Wargame acceleration
Traditional wargaming is labor-intensive. AI can accelerate it by:
- Acting as red-team “sparring partners” for planners
- Rapidly updating probability estimates as conditions change
- Summarizing outcome drivers (what actually caused mission failure)
A useful output is a short list of operational sensitivities: “If datalinks degrade by X, mission success drops by Y.” Even without exact numbers, identifying the biggest sensitivities gives commanders a real planning advantage.
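One hedged way to produce that short list is to tally success rates per condition across many wargame runs and rank each factor by the spread between its best and worst setting. This sketch assumes each run is logged as a dict of conditions plus a success flag — a simplification of any real adjudication record:

```python
from collections import defaultdict

def outcome_sensitivities(runs: list[dict]) -> list[tuple[str, float]]:
    """Rank factors by the gap in success rate between their
    best-case and worst-case observed setting."""
    by_factor = defaultdict(lambda: defaultdict(list))
    for run in runs:
        for factor, setting in run["conditions"].items():
            by_factor[factor][setting].append(run["success"])
    sensitivities = []
    for factor, settings in by_factor.items():
        rates = [sum(v) / len(v) for v in settings.values()]
        sensitivities.append((factor, max(rates) - min(rates)))
    return sorted(sensitivities, key=lambda kv: kv[1], reverse=True)
```

The top of the ranked list is exactly the "If datalinks degrade by X, mission success drops by Y" insight — crude, but enough to tell a commander where the plan is brittle.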
3) Mission rehearsal in degraded modes
In the weapons engagement zone, you’re not operating in ideal network conditions. AI systems must be exercised in:
- Disconnected operations (edge compute, delayed sync)
- Data-poor environments (limited sensor coverage)
- Deception-rich environments (spoofing, adversarial inputs)
If your AI can’t degrade gracefully, it’s a liability.
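Graceful degradation can be as simple as an explicit fallback chain: use the full model when its inputs are healthy, fall back to a crude rule when they aren't, and return an honest "unknown" rather than crashing. The features and the dark-vessel heuristic below are illustrative only:

```python
def classify_with_fallback(features: dict, model_ok: bool) -> tuple[str, str]:
    """Return (source, label). Degrades from model to heuristic to
    'unknown' as inputs and model availability fall away."""
    # Full path: model is up and the primary sensor feed is present.
    if model_ok and features.get("radar") is not None:
        return ("model", "hostile" if features["radar"] > 0.7 else "neutral")
    # Degraded path: a simple rule (e.g., AIS transponder dark).
    if features.get("ais_on") is False:
        return ("heuristic", "suspect")
    # Floor: admit ignorance explicitly instead of failing.
    return ("fallback", "unknown")
```

The `source` tag matters as much as the label: a watchstander should always know whether an assessment came from the model or from a degraded-mode rule.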
The Forge, additive manufacturing, and the “right to repair” problem
Koehler highlighted The Forge—Indo-Pacific Command’s expeditionary foundry—and then made a point that should make every program manager pay attention: sailors need the authority and confidence to install new parts and reconfigure systems without waiting on contractors.
That’s not a minor maintenance note. It’s a strategic requirement.
Why “right to repair” is a combat capability
In contested maritime operations, contractor access is uncertain, supply lines are stressed, and platforms may need rapid reconfiguration. If your force can’t repair and adapt equipment at the tactical edge, you’ve effectively accepted:
- Longer downtime
- Higher mission risk
- Reduced tempo
And if unmanned systems need reconfiguration mid-fight, the fleet can’t be stuck in a help-desk model.
How AI supports repair and reconfiguration
AI’s contribution here is less glamorous, but highly fieldable:
- Predictive maintenance models that prioritize what to fix first based on mission impact
- Automated troubleshooting using onboard logs and prior failure patterns
- Digital technical manuals with natural-language search and step-by-step guidance
- Configuration management assistants that validate changes and flag unsafe combinations
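As a toy example of mission-impact prioritization (the scoring formula and field names are assumptions, not a real condition-based maintenance model), rank open repair tasks by expected mission impact per repair hour:

```python
def prioritize_repairs(tasks: list[dict]) -> list[dict]:
    """Order repair tasks so the queue front is the fix that buys
    the most expected mission capability per hour of sailor time."""
    def score(t: dict) -> float:
        expected_impact = t["fail_prob"] * t["mission_criticality"]
        return expected_impact / max(t["repair_hours"], 0.1)  # avoid div-by-zero
    return sorted(tasks, key=score, reverse=True)
```

Even a heuristic like this beats a first-in-first-out repair queue at the tactical edge, because it puts mission impact — not ticket age — at the front of the line.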
If you want rapid acquisition to work, you also need rapid sustainment. I’ve found that organizations that ignore sustainment early end up “innovating” themselves into fragile fleets.
What defense teams should copy from PacFleet (and what to avoid)
PacFleet’s posture is a blueprint for any organization trying to field AI-enabled defense capabilities quickly—government or industry. The trick is to copy the parts that create speed without creating chaos.
Adopt: a tight loop between operators, engineers, and acquirers
The fastest programs reduce the distance between the person who feels the problem and the person who can change the system.
Practical methods that work:
- Weekly operator feedback cycles tied to build releases
- Embedded test teams at exercises
- “Decision logs” that connect model outputs to commander actions
Adopt: measurable readiness metrics for AI
If AI is going to enhance command and control, you need metrics that match operations, such as:
- Time-to-detect and time-to-classify (minutes, not vague success rates)
- False alarm rate per watch (operator burden matters)
- Performance under jamming/degradation (define thresholds)
- Explainability usability (can a watchstander understand the why quickly?)
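These metrics are cheap to compute from decision logs. A sketch, assuming each alert is recorded with its detection latency and whether it turned out to be a true positive (field names are illustrative):

```python
import statistics

def readiness_metrics(alerts: list[dict], num_watches: int) -> dict:
    """Summarize logged alerts into watch-level readiness metrics:
    median time-to-detect and false alarms per watch."""
    tp_times = [a["detect_s"] for a in alerts if a["true_positive"]]
    false_alarms = sum(1 for a in alerts if not a["true_positive"])
    return {
        "median_time_to_detect_s": statistics.median(tp_times) if tp_times else None,
        "false_alarms_per_watch": false_alarms / num_watches,
    }
```

Reporting false alarms per watch, rather than as an abstract rate, keeps the metric in the unit that actually burdens operators.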
Avoid: fielding models without governance
Fast deployment without governance produces three predictable failures:
- Model drift (performance degrades as conditions change)
- Data contamination (bad labels, adversarial manipulation, or inconsistent feeds)
- Trust collapse (operators ignore tools after a few high-profile misses)
A simple governance baseline is non-negotiable:
- Version control for models and data
- Human override always available
- Logging for after-action learning
- Clear responsibility for updates and rollback
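That baseline fits in a few lines. Here's a minimal, illustrative registry with versioned deployments, an audit trail, and one-step rollback — a real program would use a proper MLOps registry, but the contract is the same:

```python
class ModelRegistry:
    """Track which (model version, data version) pair is live,
    log every change, and support rollback to the prior pair."""

    def __init__(self):
        self._history = []  # (model_version, data_version), newest last
        self.log = []       # plain-text audit trail of registry actions

    @property
    def active(self):
        return self._history[-1] if self._history else None

    def deploy(self, model_version: str, data_version: str) -> None:
        self._history.append((model_version, data_version))
        self.log.append(f"deploy {model_version}/{data_version}")

    def rollback(self):
        # Never roll back past the last known-good version.
        if len(self._history) > 1:
            retired = self._history.pop()
            self.log.append(f"rollback from {retired[0]} to {self._history[-1][0]}")
        return self.active
```

Versioning models and data together is the point: rolling back a model without rolling back to the data it was validated against reintroduces exactly the drift you were trying to escape.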
People also ask: “Can AI really help the Navy outpace adversaries?”
Yes—when AI is used to shorten the observe–orient–decide–act loop and strengthen sustainment, not when it’s treated as a silver bullet. The advantage comes from compounding effects:
- Faster sensemaking leads to faster maneuver
- Faster maneuver forces adversary mistakes
- Better maintenance and repair increases operational availability
- Better availability increases presence and deterrence
The real contest isn’t “who has the smartest model.” It’s who can safely adapt faster in contact.
The next step: treat AI as part of force design, not an add-on
PacFleet’s urgency is a reminder that AI in defense and national security is no longer limited to back-office analytics. It’s moving into the operational bloodstream: command-and-control, unmanned system employment, maintenance, and concept development.
If you’re building AI for defense, here’s the question that matters most: Can your system still deliver value when networks are degraded, data is messy, and the operator is tired? If the answer is no, it’s a lab demo—not a fleet capability.
For teams supporting Indo-Pacific missions, the opportunity is clear: help commanders and sailors move faster without losing control, and help the fleet repair and reconfigure at the edge. That’s how “building the airplane while flying it” stays airborne.
Where should AI be trusted first in contested maritime operations—sensor fusion, course-of-action generation, or predictive maintenance—and what proof would you need before you’d stake a mission on it?