Pacific Fleet’s push for speed is making AI-enabled C2 and repairable autonomy essential. Learn what to build, measure, and field fast.

AI for Pacific Fleet: Speed, Resilience, Readiness
Adm. Steve Koehler didn’t try to make the moment sound comfortable. Describing Pacific Fleet’s push for new capabilities, he compared it to “building the airplane while we fly it”—and he meant it as a deliberate strategy, not a temporary inconvenience.
For leaders across defense, national security, and the industrial base, that framing is the real story. It signals a shift from platform-era modernization (long timelines, fixed requirements, slow upgrades) to continuous capability delivery—where software, data, and adaptable operating concepts matter as much as hulls, missiles, and aircraft.
This post is part of our AI in Defense & National Security series, and Pacific Fleet is a clean example of why AI isn’t a “nice to have” anymore. If you’re trying to win contracts, plan programs, or field systems in the Indo-Pacific, the question isn’t whether AI belongs in your solution. It’s whether your AI approach can survive real operations: contested networks, degraded sensors, rapid re-tasking, and sailors who need to modify systems without waiting for a contractor.
Pacific Fleet’s urgency is a signal to industry
Pacific Fleet’s message is simple: the bar is speed and persistence inside the weapons engagement zone, and that bar is rising. That means faster acquisition, faster iteration, and faster learning cycles during exercises, not after a program of record completes.
Here’s what I take from Koehler’s remarks: the fleet isn’t just buying tech; it’s trying to change its operating system. New concepts of operation (CONOPS) are being developed alongside experimentation and rehearsal. That’s a major cultural shift, because it treats CONOPS as a living product—tested, refined, and redeployed.
For vendors and integrators, that changes the pitch. PowerPoint roadmaps won’t carry the day. What matters is whether you can:
- Field capability in months, not years
- Update models and software safely and repeatedly
- Provide operational value even when data is messy or incomplete
- Support “operator-owned” configuration in the field
If you can’t do those things, you’re selling into yesterday’s procurement logic.
AI is the speed layer—if it’s built for commanders, not demos
Pacific Fleet is already using AI for data analysis, and Koehler signaled expansion into command and control, faster decision-making, and a tighter action cycle between maneuver and fires.
That’s the right target. The wrong approach is building AI that looks impressive in a lab and collapses when the inputs get weird.
Where AI actually helps in maritime operations
AI’s most reliable contribution to operational speed is reducing “time-to-meaning” from incoming data. In a maritime theater, that often means:
- Multi-INT fusion support: correlating radar tracks, EO/IR detections, acoustic cues, AIS anomalies, and open-source context
- Track management and pattern-of-life: highlighting which contacts behave unlike their historical norms
- Workflow automation in the watchfloor: summarizing updates, generating route risk notes, proposing collection priorities
- Decision support for logistics and readiness: predicting failure modes and surfacing supply chain bottlenecks before ships deploy
Notice what’s missing: “AI makes the decision.” In real command environments, the practical win is AI that compresses the decision timeline while keeping humans confident in the why.
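To make the pattern-of-life item above concrete, here is a minimal sketch of scoring a contact against its own history. The features (speed, AIS reporting gaps) and the threshold are illustrative assumptions, not a fielded algorithm:

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class ContactHistory:
    """Historical observations for one contact (illustrative features only)."""
    speeds_kts: list[float]        # past observed speeds
    ais_gap_minutes: list[float]   # past gaps between AIS reports

def pattern_of_life_score(history: ContactHistory,
                          current_speed_kts: float,
                          current_ais_gap_min: float) -> float:
    """Return a rough anomaly score: how far current behavior sits from
    this contact's own historical norms, in standard deviations."""
    def z(value: float, series: list[float]) -> float:
        if len(series) < 2:
            return 0.0  # not enough history to judge
        sd = stdev(series) or 1e-6
        return abs(value - mean(series)) / sd

    # Combine per-feature deviations; a real system would weight and calibrate these.
    return max(z(current_speed_kts, history.speeds_kts),
               z(current_ais_gap_min, history.ais_gap_minutes))

# A score above roughly 3 would flag the contact for a watchstander, not trigger action.
```

The design point is the last comment: the output is a cue for a human, not a decision.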
The standard Pacific Fleet is setting: faster decisions with fewer assumptions
In contested environments, the fleet can’t assume perfect comms, consistent cloud access, or clean sensor feeds. So AI has to be engineered for:
- Graceful degradation: reduced capability is acceptable; failure is not
- Edge operation: inference at the tactical edge with intermittent reach-back
- Clear provenance: what data contributed to an alert, and what didn’t
- Operator trust cues: confidence estimates, alternative hypotheses, and “what would change my assessment” prompts
If your AI requires pristine data and stable networks, it’s not an operational system—it’s a science project.
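As a sketch of what that looks like in software, here is an illustrative alert structure that carries provenance and trust cues and degrades explicitly rather than silently. Every field name here is an assumption for illustration, not a standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Alert:
    """One AI-generated alert, carrying the context a watchstander needs
    to trust (or discount) it. Field names are illustrative."""
    summary: str
    confidence: float                      # calibrated 0-1, not a raw model score
    contributing_sources: list[str]        # provenance: which feeds drove this alert
    missing_sources: list[str]             # provenance: which feeds were unavailable
    alternative_hypotheses: list[str]      # what else could explain the data
    would_change_assessment: list[str]     # what new data would revise it
    generated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    degraded_mode: bool = False            # set when inputs or reach-back are impaired

def degrade(alert: Alert, lost_feed: str) -> Alert:
    """Graceful degradation: keep alerting on the remaining sources, but say so."""
    alert.missing_sources.append(lost_feed)
    alert.degraded_mode = True
    alert.confidence = min(alert.confidence, 0.6)  # cap confidence; don't fail silently
    return alert
```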
New operating concepts require AI-ready C2, not just more dashboards
Koehler’s emphasis on building new concepts of operation alongside capability development should make program offices and primes a little uneasy—in a good way.
A new CONOPS isn’t just a different playbook. It’s different information flow, different authority boundaries, and different timing.
“Embrace the red” means designing for friction
Navy leaders sometimes call it “embracing the red”: confronting the problems early, adjusting quickly, and fixing what breaks. Applied to AI-enabled C2, that means designing for the hard parts upfront:
- Classification barriers and cross-domain movement
- Coalition interoperability (different networks, different rules, different data standards)
- Model updates under accreditation constraints
- Auditability when an AI suggestion influences a kinetic decision
Most organizations get this wrong by treating governance as paperwork at the end. In operational AI, governance is part of the product.
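One small example of governance-as-product: an append-only, hash-chained audit record that ties an AI suggestion to what the operator did with it. This is a minimal sketch under assumed field names, not any program's standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_record(log_path: str, suggestion_id: str, model_version: str,
                        inputs_digest: str, operator_action: str) -> str:
    """Append an audit entry linking an AI suggestion to the operator's action.
    Hash-chaining each record to the previous one makes tampering evident."""
    prev_hash = ""
    try:
        with open(log_path, "rb") as f:
            lines = f.read().splitlines()
            if lines:
                prev_hash = json.loads(lines[-1])["record_hash"]
    except FileNotFoundError:
        pass  # first record in a new log

    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "suggestion_id": suggestion_id,
        "model_version": model_version,      # which accredited model produced it
        "inputs_digest": inputs_digest,      # what data the suggestion was based on
        "operator_action": operator_action,  # accepted, modified, rejected
        "prev_hash": prev_hash,
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()

    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["record_hash"]
```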
A practical blueprint: AI-enabled decision cycles
If you’re building AI for maritime decision support, aim for a closed loop that commanders recognize:
- Sense: ingest, normalize, label, and timestamp data
- Understand: fuse tracks and detect anomalies
- Decide: propose options tied to commander’s intent and rules of engagement
- Act: generate machine-readable tasking where possible
- Assess: measure outcomes, capture feedback, and update tactics/model behavior
That last step—assess—is where most programs quietly fail. Without structured feedback, systems stagnate, trust erodes, and adoption becomes performative.
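A minimal skeleton of that loop, with the assess step made explicit, might look like the following. The five stage functions are placeholders passed in by the caller; only the loop shape and the feedback capture are the point:

```python
from typing import Any, Callable

def run_decision_cycle(
    raw_feeds: list[dict],
    commanders_intent: dict,
    sense: Callable, understand: Callable, decide: Callable,
    act: Callable, assess: Callable,
    feedback_log: list,
) -> Any:
    """One pass of a sense -> understand -> decide -> act -> assess loop.
    The five callables stand in for real components; this is an illustrative skeleton."""
    observations = sense(raw_feeds)                # ingest, normalize, label, timestamp
    picture = understand(observations)             # fuse tracks, flag anomalies
    options = decide(picture, commanders_intent)   # options tied to intent and ROE
    tasking = act(options)                         # machine-readable tasking where possible
    # The step most programs skip: structured feedback that later changes behavior.
    outcome = assess(tasking, picture)
    feedback_log.append(outcome)                   # drives tactic and model updates
    return tasking
```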
“Right to repair” is a hidden requirement for AI and autonomy
Koehler’s most operationally disruptive point wasn’t about AI at all. It was about ownership:
“We owe our warriors the right to repair and configure their own equipment.”
That’s a direct challenge to traditional contractor-centric sustainment models. And it has big implications for unmanned systems, autonomy stacks, and AI-enabled mission modules.
What “right to repair” means in an AI-enabled fleet
To support operator-led repair and reconfiguration, vendors should expect requirements like:
- Modular, swappable components with published interfaces
- On-platform diagnostics that don’t require proprietary tools
- Role-based access controls so sailors can reconfigure safely without opening security holes
- Field-friendly update packages (including rollback plans)
- Documentation that works under pressure, not just compliance checklists
If you’re selling autonomy or AI into the fleet, assume the operator will need to re-task it during a fight—under time pressure—and you won’t be there to help.
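For illustration, here is one way a field-friendly update package with a built-in rollback path could work. The manifest layout, paths, and steps are invented for the sketch:

```python
import hashlib
import json
import shutil
from pathlib import Path

def apply_update(package_dir: Path, install_dir: Path) -> bool:
    """Apply a field update with a rollback path. Assumed layout:
    package_dir/manifest.json lists files and their expected SHA-256 hashes."""
    manifest = json.loads((package_dir / "manifest.json").read_text())

    # 1. Verify integrity before touching anything.
    for entry in manifest["files"]:
        digest = hashlib.sha256((package_dir / entry["name"]).read_bytes()).hexdigest()
        if digest != entry["sha256"]:
            return False  # refuse to install a corrupted package

    # 2. Snapshot the current install so the crew can roll back without reach-back.
    backup = install_dir.with_suffix(".rollback")
    if backup.exists():
        shutil.rmtree(backup)
    shutil.copytree(install_dir, backup)

    # 3. Install, restoring the snapshot if anything fails.
    try:
        for entry in manifest["files"]:
            shutil.copy2(package_dir / entry["name"], install_dir / entry["name"])
        return True
    except OSError:
        shutil.rmtree(install_dir)
        shutil.copytree(backup, install_dir)
        return False
```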
The Forge and expeditionary production: a clue about the future
Koehler pointed to Indo-Pacific Command’s expeditionary foundry, often discussed in the context of additive manufacturing and forward production. The strategic message is bigger than 3D printing: capability has to move closer to the fight.
AI fits that model when it’s:
- Packaged for deployment at the edge
- Maintainable by uniformed personnel
- Designed with resilient data pipelines
Expeditionary production plus AI-enabled adaptation is how you keep systems relevant when the adversary is changing tactics weekly.
What defense tech teams should build next (and how to sell it)
If you’re trying to align with Pacific Fleet’s direction, here’s what tends to resonate with operational stakeholders right now—because it matches the “build while flying” reality.
Build: mission-adaptable AI, not single-purpose models
Single models that do one thing well are useful, but fleets need mission packages that can be swapped and tuned:
- Detection + prioritization + recommended actions (not just detection)
- Tools for rapid labeling and feedback during exercises
- Model monitoring that flags drift and data quality issues
- Pre-approved “safe modes” for degraded conditions
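As one example of the monitoring piece, a crude per-feature drift check might look like this. A real system would use calibrated statistical tests per feature, but the shape is the same:

```python
from statistics import mean, stdev

def drift_flag(baseline: list[float], recent: list[float],
               threshold_sd: float = 2.0) -> bool:
    """Flag one model input feature when its recent mean has shifted more than
    `threshold_sd` baseline standard deviations. Illustrative, not calibrated."""
    if len(baseline) < 2 or not recent:
        return False
    shift = abs(mean(recent) - mean(baseline))
    return shift > threshold_sd * (stdev(baseline) or 1e-6)

# Example use: compare AIS gap durations seen during an exercise to the training
# baseline; a flagged feature gets queued for review and relabeling.
```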
Prove: time saved, decisions improved, operators satisfied
Operational buyers respond to metrics that map to tempo and readiness. Examples of measurable outcomes:
- Reduced time to produce a watch update or common operating picture refresh
- Increased contact correlation rate across sensors
- Reduced false alarms per watch
- Improved mission-capable rates through predictive maintenance
If you can’t quantify the benefit, you’ll lose to a competitor who can—even if your model is technically better.
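Instrumenting those metrics does not have to be elaborate. Here is a sketch of counting false alarms per watch from an exercise alert log, with record fields assumed for illustration:

```python
from collections import Counter

def false_alarms_per_watch(alert_log: list[dict]) -> dict[str, int]:
    """Count alerts later adjudicated as false, grouped by watch section.
    Each record is assumed to look like:
    {"watch": "00-04", "alert_id": "A123", "adjudication": "false_alarm"}."""
    counts: Counter[str] = Counter()
    for record in alert_log:
        if record.get("adjudication") == "false_alarm":
            counts[record["watch"]] += 1
    return dict(counts)
```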
Sell: a deployment pathway that respects accreditation and reality
Your go-to-market needs a credible path through:
- Authority to Operate (ATO) realities
- Classified/unclassified workflows
- Edge compute constraints
- Model update governance
A strong pitch sounds like: “Here’s how we field v1 in 90 days, learn in exercises, and improve every quarter without breaking compliance.”
What Pacific Fleet’s push means for 2026 planning
As budgets and priorities shift heading into 2026, Pacific Fleet’s posture helps explain what will keep getting funded: capability that increases operational speed, persistence, and resilience under pressure.
AI is central to that—but only when paired with the unglamorous parts: data engineering, human factors, edge deployment, and sustainment that empowers sailors instead of locking them out.
If you’re building AI for defense and national security, take Koehler’s “airplane while flying” line as a design requirement. The fleet doesn’t have the luxury of waiting for perfect systems. It needs systems that improve fast, fail safely, and can be repaired and reconfigured by the people who depend on them.
Where does this go next? My bet: the winners will be teams that treat AI as a continuous operational capability, not a one-time integration—because the Indo-Pacific problem set won’t wait for traditional timelines.