AI-Driven Command and Control: What Ivy Sting Proves

AI in Defense & National Security • By 3L3C

The Army’s Ivy Sting tests show what AI-ready command and control really demands: rapid iteration, strong data governance, and security built into the sprint cycle.


The U.S. Army is running a second field test of its next-generation command and control (C2) prototype just months after awarding a prototype contract worth approximately $99.6M. That pace is the headline, but the bigger story is what the Army is really testing: whether it can ship an AI-ready C2 system on a software cadence that matches modern conflict.

If you work in defense technology, national security, or government acquisition, this matters for a simple reason: C2 is where operational speed is either created or destroyed. It’s the layer that turns intelligence into decisions, decisions into orders, and orders into coordinated action—across fires, maneuver, airspace, logistics, and coalition partners.

This post is part of our “AI in Defense & National Security” series, and Ivy Sting 2 is a clean case study in how AI-enabled decision support moves from slide decks to soldier feedback, with all the messy realities—cyber, governance, integration, and procurement culture—showing up fast.

Ivy Sting 2 is about time: speeding decisions, not dashboards

The clearest takeaway from Ivy Sting 2 is that the Army is trying to compress the time between plan and effects. The test at Fort Carson (run by the 4th Infantry Division) focuses on scenarios like deconflicting airspace before firing weapons—a practical problem that sits at the intersection of fires, aviation, air defense, and risk management.

That scenario isn’t random. Airspace deconfliction is one of those battlefield chores that can quietly dominate timelines:

  • Sensors report activity, but not always in a consistent format.
  • Units submit changes, but approvals lag.
  • Fires windows open and close.
  • Friendly air tracks and UAS operations crowd the same space.

A modern C2 system should reduce friction by making the right constraints visible and by recommending safe, coherent options. That’s exactly where AI belongs in C2: not as an “auto-commander,” but as a high-tempo assistant that narrows choices, flags conflicts, and keeps humans aligned.
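To make that concrete, here is a minimal Python sketch of the kind of conflict check such an assistant runs continuously. Everything in it (the `AirspaceUse` shape, the time-and-altitude overlap test) is an illustrative assumption on my part, not the prototype's actual logic, and real deconfliction also checks geometry, not just time and altitude.

```python
# Illustrative sketch of an automated airspace conflict check.
# All names and shapes are invented, not drawn from the actual prototype.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class AirspaceUse:
    """One claim on a block of airspace: a UAS orbit, an air corridor, a fires trajectory."""
    owner: str
    start: datetime
    end: datetime
    min_alt_ft: int
    max_alt_ft: int


def conflicts(a: AirspaceUse, b: AirspaceUse) -> bool:
    """Two uses conflict if their time windows and altitude blocks both overlap."""
    time_overlap = a.start < b.end and b.start < a.end
    alt_overlap = a.min_alt_ft < b.max_alt_ft and b.min_alt_ft < a.max_alt_ft
    return time_overlap and alt_overlap


def flag_conflicts(planned: AirspaceUse, active: list[AirspaceUse]) -> list[AirspaceUse]:
    """Return every active use the planned mission would violate, for human review."""
    return [use for use in active if conflicts(planned, use)]


# Example: a fires window brushing against an active UAS orbit.
fires = AirspaceUse("fires", datetime(2026, 1, 1, 10, 0), datetime(2026, 1, 1, 10, 5), 0, 20000)
uas = AirspaceUse("uas-orbit", datetime(2026, 1, 1, 9, 55), datetime(2026, 1, 1, 10, 15), 1500, 3000)
print([u.owner for u in flag_conflicts(fires, [uas])])  # ['uas-orbit']
```

The point isn't the ten lines of logic; it's that the machine runs this check against every active claim on the airspace, every time anything changes, while a human still owns the approval.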

AI in C2 works when it’s decision support, not decision replacement

Most organizations get this wrong. They chase “autonomy” before they’ve solved basic coordination. In operational C2, the win is often:

  • better prioritization,
  • earlier detection of conflicts,
  • faster dissemination of updates,
  • fewer manual handoffs,
  • clearer accountability for what changed and why.

If the prototype helps commanders and staffs act faster without losing confidence in the data, it’s doing its job.

The real shift: C2 built like software, tested like operations

The Army isn’t just revamping a mission command UI. It’s pushing a different development model: frequent drops, field feedback, and rapid iteration.

That’s a hard break from the traditional pattern:

  1. Requirements get “finalized.”
  2. Vendors get locked.
  3. A big system gets delivered years later.
  4. The threat, the tech, and the operational concepts have already changed.

The alternative being attempted here looks closer to a software sprint cycle: build, field, learn, rebuild—then repeat.

Why this matters for AI readiness

AI doesn’t succeed in programs that treat models and integrations as static deliverables. AI systems require:

  • continuous updates (models drift; data pipelines change),
  • ongoing evaluation (performance varies by environment),
  • clear governance (who can deploy what, to whom, and when),
  • ongoing security work (patching and hardening never stop).

A sprint-like cadence isn’t just “nice.” For AI in national security, it’s the difference between an operational edge and a frozen artifact.

Prototype events are where acquisition meets reality

Field tests like Ivy Sting create a forcing function. They answer uncomfortable questions that paper requirements avoid:

  • Can units actually use the workflow under time pressure?
  • Does the system degrade gracefully when connectivity is contested?
  • Are data permissions and sharing rules clear, or do they stop the fight?
  • How quickly can a critical fix get shipped back to the field?

Those questions are more important than feature checklists. In my experience, the fastest route to “usable” is repeated exposure to operators who don’t have time to be polite about what breaks.

A composable ecosystem: partners, plug-ins, and the fight for data governance

One of the most telling details in the prototype approach is the emphasis on integrating commercial technologies from multiple partners rather than rebuilding everything from scratch.

The prototype ecosystem described includes capabilities like:

  • network and communications software for resilient data movement,
  • logistics awareness tools to improve sustainment visibility,
  • AI integration layers to connect models and automation into workflows.

The stated goal is to keep the system open enough that new vendors and capabilities can be onboarded as technology improves.
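Here is a hedged sketch of what "open enough" can mean in code: every partner capability implements one small adapter contract, so onboarding or swapping a vendor never touches the core. The interface below is hypothetical, my own illustration rather than the prototype's actual API.

```python
# Illustrative adapter contract for a composable C2 ecosystem.
# The interface and registry are assumptions, not the program's design.
from abc import ABC, abstractmethod
from typing import Any


class CapabilityAdapter(ABC):
    """Contract every onboarded capability (comms, logistics, AI service) must meet."""

    @abstractmethod
    def health(self) -> bool:
        """Report whether the capability is reachable and serving."""

    @abstractmethod
    def ingest(self, record: dict[str, Any]) -> None:
        """Accept a normalized record from the shared data layer."""

    @abstractmethod
    def emit(self) -> list[dict[str, Any]]:
        """Return this capability's outputs in the shared schema."""


REGISTRY: dict[str, CapabilityAdapter] = {}


def onboard(name: str, adapter: CapabilityAdapter) -> None:
    """Register a new vendor capability without modifying core C2 code."""
    if not adapter.health():
        raise RuntimeError(f"{name} failed its health check; not onboarded")
    REGISTRY[name] = adapter
```

The design choice that matters is the narrow surface: if a capability can only talk to the system through this contract, replacing it is a registry change, not a rewrite.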

“Open” C2 is harder than it sounds

Everybody says they want modular, composable architectures. Then reality shows up:

  • Identity and access management: who can see what, and under which conditions?
  • Data labeling and metadata: if feeds aren’t described consistently, AI outputs become unreliable.
  • Auditability: commanders need to know what changed, who changed it, and what the system recommended.
  • Version control in the field: multiple units running different builds is a recipe for confusion.

This is why data governance came up early in the prototype narrative. Governance isn’t bureaucracy; it’s the rulebook that prevents C2 from becoming an argument over whose spreadsheet is correct.
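As an illustration, the bullets above reduce to a few governance primitives. The record shape below is an assumption on my part, not the program's schema, but something like it has to exist for auditability and version control to mean anything in the field.

```python
# Sketch of an append-only audit trail: who changed what, when, on which
# build, and what the system recommended. Field names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class AuditRecord:
    """Immutable trail entry: what changed, who changed it, what was recommended."""
    actor: str                  # identity from access management, not a free-text name
    action: str                 # e.g. "approved airspace change"
    recommendation: str | None  # what the system suggested, if anything
    build_version: str          # which fielded build produced this state
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


AUDIT_LOG: list[AuditRecord] = []


def record(actor: str, action: str, recommendation: str | None, build: str) -> None:
    """Append-only: commanders can reconstruct the decision trail later."""
    AUDIT_LOG.append(AuditRecord(actor, action, recommendation, build))
```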

The best AI feature is often “shared truth”

If you want a snippet-worthy line for your internal briefings, here it is:

In C2, the first step toward AI is agreeing on the data—not the model.

AI decision support is only as good as the underlying data consistency, timeliness, and permissions. If the program creates a repeatable way to ingest, normalize, and distribute data across partners and units, it’s building the foundation for every future AI enhancement.
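A minimal sketch of what "ingest, normalize, distribute" looks like, assuming invented feed formats: each feed gets exactly one translation into the shared schema at onboarding, and nothing downstream (including the AI) ever sees a vendor-specific shape.

```python
# Sketch of a normalization layer. The feed names, field names, and
# schema are all invented for illustration.
from datetime import datetime, timezone
from typing import Any

COMMON_SCHEMA = ("track_id", "source", "observed_at", "payload")


def normalize(feed_name: str, raw: dict[str, Any]) -> dict[str, Any]:
    """Map a vendor-specific record into the shared schema, or fail loudly."""
    mappers = {
        # Each entry is the per-feed translation, written once at onboarding.
        "sensor_a": lambda r: {
            "track_id": r["id"],
            "source": "sensor_a",
            "observed_at": datetime.fromisoformat(r["ts"]),
            "payload": r["data"],
        },
        "sensor_b": lambda r: {
            "track_id": str(r["trackNumber"]),
            "source": "sensor_b",
            "observed_at": datetime.fromtimestamp(r["epoch"], tz=timezone.utc),
            "payload": r["body"],
        },
    }
    if feed_name not in mappers:
        raise ValueError(f"unmapped feed {feed_name!r}: onboard it before ingesting")
    result = mappers[feed_name](raw)
    assert set(result) == set(COMMON_SCHEMA), "mapper drifted from the shared schema"
    return result
```

Note the failure modes: an unknown feed is rejected instead of passed through raw, and a mapper that drifts from the schema breaks immediately instead of quietly corrupting downstream AI outputs.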

The cyber memo controversy is a feature, not a bug

Rapid delivery creates a predictable tension: security teams worry (often correctly) that speed becomes technical debt.

The program recently faced criticism after an internal memo surfaced alleging cybersecurity deficiencies in an early prototype configuration. The response from both government and industry emphasized that issues were addressed—and, more importantly, senior leaders signaled that how concerns get raised needs to change.

What this reveals about modern defense software

For AI-enabled C2, you can’t separate cyber from delivery pace:

  • Shipping fast without security creates operational risk.
  • Over-indexing on paperwork slows fixes and keeps vulnerabilities in place longer.
  • The right answer is a tight loop between operators, developers, and security engineers.

If the Army is serious about fielding AI-enabled command and control, the culture has to shift from “document grievances” to “resolve issues in working sessions.” Written memos aren’t the enemy—silence and slow remediation are.

A practical model that works: ship with guardrails

The most credible approach I’ve seen in mission systems combines:

  • pre-approved secure reference architectures (so teams don’t reinvent controls),
  • continuous scanning and automated compliance evidence,
  • role-based feature flags (so risky functionality can be disabled by policy),
  • red-team style testing during exercises, not after them.

That turns cyber into an operational discipline rather than a last-minute gate.
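To show what role-based feature flags look like at their simplest, here is a sketch with hypothetical roles and flag names: risky functionality ships dark, and policy, not code changes, decides who can exercise it.

```python
# Sketch of policy-driven, role-based feature flags.
# Flag names and roles are invented for illustration.

POLICY = {
    # flag name -> roles allowed to exercise it in the current build
    "auto_deconflict_suggestions": {"fires_cell", "airspace_manager"},
    "bulk_order_dissemination": set(),  # shipped but disabled everywhere pending review
}


def enabled(flag: str, role: str) -> bool:
    """A feature is usable only if policy grants it to the caller's role."""
    return role in POLICY.get(flag, set())


assert enabled("auto_deconflict_suggestions", "fires_cell")
assert not enabled("bulk_order_dissemination", "fires_cell")
```

The operational payoff: when security finds a problem, the response is flipping a policy entry, not pulling a build back from the field.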

What Ivy Sting signals for 2026: procurement, partners, and operational AI

The prototype contract covers roughly 11 months, with a follow-on award expected for the next phase. That structure matters. It creates a recurring opportunity to adjust direction based on field results rather than defending a single big-bang plan.

Here are the strategic signals I’d watch going into 2026 planning cycles:

1) C2 programs will be judged by iteration speed

If the Army can run field events aligned to development sprints and actually deliver meaningful updates each cycle, it sets a new bar. Other programs—especially those tied to joint all-domain command and control discussions—won’t be able to justify multi-year gaps between operator feedback and fixes.

2) Vendor ecosystems will matter more than prime contractors

C2 is turning into a platform problem: who can onboard partners quickly, integrate data safely, and maintain interoperability under stress. That’s less about any one vendor’s product and more about the integration discipline and the contracting model.

3) AI in mission planning will become normal—but only if trust is earned

The Army doesn’t need an AI that “takes command.” It needs AI that reliably:

  • highlights conflicts (airspace, fires, logistics constraints),
  • recommends options with clear assumptions,
  • tracks changes and impacts over time,
  • explains outputs in staff language, not data science language.

Trust comes from repeatable performance in exercises, not promises.
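One way to make "options with clear assumptions" concrete is to force that structure into the output itself. The shape below is my illustration, not the program's: every recommendation carries its assumptions and a staff-language rationale, so nothing arrives as a bare score.

```python
# Sketch of a recommendation shape that can earn trust. The fields and
# example values are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Recommendation:
    option: str              # the course of action, in staff language
    assumptions: list[str]   # what must hold for this option to be valid
    rationale: str           # why the system ranked it here, in plain terms
    confidence: float        # 0..1, grounded in repeatable exercise performance


rec = Recommendation(
    option="Shift fires window to 1010-1015Z",
    assumptions=["UAS orbit ends 1005Z as filed", "No change to ACM 41 altitude block"],
    rationale="Clears the only active airspace conflict with a 5-minute buffer.",
    confidence=0.87,
)
```

If an assumption breaks, the staff can see exactly which recommendation it invalidates, which is what "tracks changes and impacts over time" means in practice.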

What defense leaders should do now (actionable checklist)

If you’re responsible for AI, mission systems, cyber, or acquisition, Ivy Sting points to a few concrete moves that pay off quickly.

  1. Define the decision loops you’re trying to compress. “Faster C2” is vague. “Cut airspace deconfliction from 30 minutes to 10” is measurable.
  2. Treat data governance as an operational enabler. Make permissions, provenance, and update rules part of the product, not a separate policy binder.
  3. Bake cyber into the sprint rhythm. Require automated testing evidence every increment; don’t wait for a quarterly review to discover known issues.
  4. Measure operator workload, not just system performance. If AI adds steps, it will be bypassed under stress.
  5. Plan for churn in “best of breed” tools. If the architecture can’t swap components without a rewrite, it’s not actually composable.

Where AI-enabled command and control goes next

The Army’s next-generation C2 prototype effort is testing more than software. It’s testing whether the institution can build AI-ready command and control in a way that keeps pace with evolving threats, commercial innovation, and the realities of cyber risk.

For the broader AI in Defense & National Security landscape, Ivy Sting is a reminder that operational AI isn’t primarily a model problem. It’s a systems problem: data, workflows, governance, and disciplined iteration with soldiers in the loop.

If you’re building or buying C2 capabilities in 2026, the forward-looking question isn’t “Does it have AI?” It’s this: Can it improve every month without becoming less secure—and can operators feel the difference in the time it takes to act?