The Army's Ivy Sting tests show what AI-ready command and control really demands: rapid iteration, strong data governance, and security built into the sprint cycle.

AI-Driven Command and Control: What Ivy Sting Proves
The U.S. Army is running a second field test of its next-generation command and control (C2) prototype just months after awarding an approximately $99.6M prototype contract. That pace is the headline, but the bigger story is what the Army is really testing: whether it can ship an AI-ready C2 system on a software cadence that matches modern conflict.

If you work in defense technology, national security, or government acquisition, this matters for a simple reason: C2 is where operational speed is either created or destroyed. It's the layer that turns intelligence into decisions, decisions into orders, and orders into coordinated action across fires, maneuver, airspace, logistics, and coalition partners.

This post is part of our "AI in Defense & National Security" series, and Ivy Sting 2 is a clean case study in how AI-enabled decision support moves from slide decks to soldier feedback, with all the messy realities (cyber, governance, integration, and procurement culture) showing up fast.
Ivy Sting 2 is about time: speeding decisions, not dashboards
The clearest takeaway from Ivy Sting 2 is that the Army is trying to compress the time between plan and effects. The test at Fort Carson (run by the 4th Infantry Division) focuses on scenarios like deconflicting airspace before firing weapons, a practical problem that sits at the intersection of fires, aviation, air defense, and risk management.
That scenario isn't random. Airspace deconfliction is one of those battlefield chores that can quietly dominate timelines:
- Sensors report activity, but not always in a consistent format.
- Units submit changes, but approvals lag.
- Fires windows open and close.
- Friendly air tracks and UAS operations crowd the same space.
A modern C2 system should reduce friction by making the right constraints visible and by recommending safe, coherent options. That's exactly where AI belongs in C2: not as an "auto-commander," but as a high-tempo assistant that narrows choices, flags conflicts, and keeps humans aligned.
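To make "flags conflicts" concrete, here is a deliberately simplified sketch of the kind of check such an assistant performs underneath: testing whether a planned fires mission overlaps an active airspace volume in both altitude and time. All names, units, and data structures are invented for illustration; they are not drawn from the actual prototype.

```python
from dataclasses import dataclass

@dataclass
class AirspaceVolume:
    name: str
    floor_ft: int    # altitude floor in feet
    ceiling_ft: int  # altitude ceiling in feet
    start_min: int   # window start, minutes from H-hour
    end_min: int     # window end, minutes from H-hour

def conflicts(planned: AirspaceVolume, active: list[AirspaceVolume]) -> list[str]:
    """Return names of active volumes that overlap the planned mission
    in BOTH altitude and time -- the candidates a staff must deconflict."""
    hits = []
    for v in active:
        alt_overlap = planned.floor_ft < v.ceiling_ft and v.floor_ft < planned.ceiling_ft
        time_overlap = planned.start_min < v.end_min and v.start_min < planned.end_min
        if alt_overlap and time_overlap:
            hits.append(v.name)
    return hits
```

The point of the sketch: the math is trivial, but doing it continuously, across every feed, with consistent units and current data, is exactly what staffs cannot do by hand under time pressure.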
AI in C2 works when itâs decision support, not decision replacement
Most organizations get this wrong. They chase "autonomy" before they've solved basic coordination. In operational C2, the win is often:
- better prioritization,
- earlier detection of conflicts,
- faster dissemination of updates,
- fewer manual handoffs,
- clearer accountability for what changed and why.
If the prototype helps commanders and staffs act faster without losing confidence in the data, it's doing its job.
The real shift: C2 built like software, tested like operations
The Army isn't just revamping a mission command UI. It's pushing a different development model: frequent drops, field feedback, and rapid iteration.
Thatâs a hard break from the traditional pattern:
- Requirements get "finalized."
- Vendors get locked.
- A big system gets delivered years later.
- The threat, the tech, and the operational concepts have already changed.
The alternative being attempted here looks closer to a software sprint cycle: build, field, learn, rebuild, then repeat.
Why this matters for AI readiness
AI doesn't succeed in programs that treat models and integrations as static deliverables. AI systems require:
- continuous updates (models drift; data pipelines change),
- ongoing evaluation (performance varies by environment),
- clear governance (who can deploy what, to whom, and when),
- security changes (patching and hardening never stop).
A sprint-like cadence isn't just "nice." For AI in national security, it's the difference between an operational edge and a frozen artifact.
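What "ongoing evaluation" and "clear governance" look like in practice can be as simple as a promotion gate: a candidate model update only ships if it clears an absolute quality floor and does not regress against what is already fielded. This is a generic pattern, sketched with invented names and thresholds, not anything specific to the Army program.

```python
def promotion_gate(candidate_scores, fielded_scores, min_gain=0.0, floor=0.80):
    """Toy release gate for a model update.

    Promote the candidate only if its mean evaluation score clears an
    absolute floor AND does not regress against the fielded model.
    Returns (decision, reason). Thresholds are illustrative.
    """
    cand = sum(candidate_scores) / len(candidate_scores)
    fielded = sum(fielded_scores) / len(fielded_scores)
    if cand < floor:
        return False, f"candidate below floor ({cand:.2f} < {floor:.2f})"
    if cand < fielded + min_gain:
        return False, f"candidate regresses vs fielded ({cand:.2f} < {fielded:.2f})"
    return True, "promote"
```

A gate like this is cheap to run every sprint, which is the whole argument: evaluation becomes part of the cadence instead of a one-time acceptance event.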
Prototype events are where acquisition meets reality
Field tests like Ivy Sting create a forcing function. They answer uncomfortable questions that paper requirements avoid:
- Can units actually use the workflow under time pressure?
- Does the system degrade gracefully when connectivity is contested?
- Are data permissions and sharing rules clear, or do they stop the fight?
- How quickly can a critical fix get shipped back to the field?
Those questions are more important than feature checklists. In my experience, the fastest route to "usable" is repeated exposure to operators who don't have time to be polite about what breaks.
A composable ecosystem: partners, plug-ins, and the fight for data governance
One of the most telling details in the prototype approach is the emphasis on integrating commercial technologies from multiple partners rather than rebuilding everything from scratch.
The prototype ecosystem described includes capabilities like:
- network and communications software for resilient data movement,
- logistics awareness tools to improve sustainment visibility,
- AI integration layers to connect models and automation into workflows.
The stated goal is to keep the system open enough that new vendors and capabilities can be onboarded as technology improves.
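One common way to keep a system "open enough" for new vendors is an adapter contract: every feed, whatever its native format, must map into one shared track schema before it enters the common picture. The sketch below uses invented field names and a hypothetical vendor to show the shape of the idea; it is not the program's actual interface.

```python
from typing import Protocol

class FeedAdapter(Protocol):
    """Contract a new vendor feed must satisfy to be onboarded.
    The shared schema here is a stand-in for illustration."""
    source_id: str
    def to_common_track(self, raw: dict) -> dict: ...

class SampleVendorAdapter:
    """Hypothetical vendor adapter: maps vendor-specific field names
    onto the shared track schema and stamps the data's source."""
    source_id = "vendor_x"

    def to_common_track(self, raw: dict) -> dict:
        return {
            "track_id": raw["id"],
            "lat": raw["position"]["latitude"],
            "lon": raw["position"]["longitude"],
            "source": self.source_id,
        }
```

The design choice that matters is that the platform owns the schema and the vendor owns the mapping, so swapping a component means writing one adapter, not rewriting the system.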
"Open" C2 is harder than it sounds
Everybody says they want modular, composable architectures. Then reality shows up:
- Identity and access management: who can see what, and under which conditions?
- Data labeling and metadata: if feeds aren't described consistently, AI outputs become unreliable.
- Auditability: commanders need to know what changed, who changed it, and what the system recommended.
- Version control in the field: multiple units running different builds is a recipe for confusion.
This is why data governance came up early in the prototype narrative. Governance isn't bureaucracy; it's the rulebook that prevents C2 from becoming an argument over whose spreadsheet is correct.
The best AI feature is often âshared truthâ
If you want a snippet-worthy line for your internal briefings, here it is:
In C2, the first step toward AI is agreeing on the data, not the model.
AI decision support is only as good as the underlying data consistency, timeliness, and permissions. If the program creates a repeatable way to ingest, normalize, and distribute data across partners and units, it's building the foundation for every future AI enhancement.
The cyber memo controversy is a feature, not a bug
Rapid delivery creates a predictable tension: security teams worry (often correctly) that speed becomes technical debt.
The program recently faced criticism after an internal memo surfaced alleging cybersecurity deficiencies in an early prototype configuration. The response from both government and industry emphasized that the issues were addressed; more importantly, senior leaders signaled that how concerns get raised needs to change.
What this reveals about modern defense software
For AI-enabled C2, you can't separate cyber from delivery pace:
- Shipping fast without security creates operational risk.
- Over-indexing on paperwork slows fixes and keeps vulnerabilities in place longer.
- The right answer is a tight loop between operators, developers, and security engineers.
If the Army is serious about fielding AI-enabled command and control, the culture has to shift from "document grievances" to "resolve issues in working sessions." Written memos aren't the enemy; silence and slow remediation are.
A practical model that works: ship with guardrails
The most credible approach I've seen in mission systems combines:
- pre-approved secure reference architectures (so teams donât reinvent controls),
- continuous scanning and automated compliance evidence,
- role-based feature flags (so risky functionality can be disabled by policy),
- red-team style testing during exercises, not after them.
That turns cyber into an operational discipline rather than a last-minute gate.
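Of the guardrails above, role-based feature flags are the easiest to picture in code: risky functionality can be switched off for some roles, or fleet-wide, by changing policy rather than shipping a new build. The policy shape and names below are illustrative, not a real system's configuration.

```python
def feature_enabled(feature: str, role: str, policy: dict) -> bool:
    """Policy-driven feature flag check.

    A feature is on only if policy explicitly enables it; if the rule
    lists roles, the caller's role must be among them. Missing or
    disabled rules fail closed, which is the safe default for C2.
    """
    rule = policy.get(feature)
    if rule is None or not rule.get("enabled", False):
        return False
    allowed = rule.get("roles")
    return allowed is None or role in allowed
```

The fail-closed default is the design point: an unknown or misconfigured feature stays off, so a policy mistake degrades capability rather than safety.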
What Ivy Sting signals for 2026: procurement, partners, and operational AI
The prototype contract covers roughly 11 months, with a follow-on award expected for the next phase. That structure matters. It creates a recurring opportunity to adjust direction based on field results rather than defending a single big-bang plan.
Here are the strategic signals I'd watch going into 2026 planning cycles:
1) C2 programs will be judged by iteration speed
If the Army can run field events aligned to development sprints and actually deliver meaningful updates each cycle, it sets a new bar. Other programs, especially those tied to joint all-domain command and control discussions, won't be able to justify multi-year gaps between operator feedback and fixes.
2) Vendor ecosystems will matter more than prime contractors
C2 is turning into a platform problem: who can onboard partners quickly, integrate data safely, and maintain interoperability under stress. That's less about any one vendor's product and more about integration discipline and the contracting model.
3) AI in mission planning will become normal, but only if trust is earned
The Army doesn't need an AI that "takes command." It needs AI that reliably:
- highlights conflicts (airspace, fires, logistics constraints),
- recommends options with clear assumptions,
- tracks changes and impacts over time,
- explains outputs in staff language, not data science language.
Trust comes from repeatable performance in exercises, not promises.
What defense leaders should do now (actionable checklist)
If you're responsible for AI, mission systems, cyber, or acquisition, Ivy Sting points to a few concrete moves that pay off quickly.
- Define the decision loops you're trying to compress. "Faster C2" is vague. "Cut airspace deconfliction from 30 minutes to 10" is measurable.
- Treat data governance as an operational enabler. Make permissions, provenance, and update rules part of the product, not a separate policy binder.
- Bake cyber into the sprint rhythm. Require automated testing evidence every increment; don't wait for a quarterly review to discover known issues.
- Measure operator workload, not just system performance. If AI adds steps, it will be bypassed under stress.
- Plan for churn in "best of breed" tools. If the architecture can't swap components without a rewrite, it's not actually composable.
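The first item on the checklist, making a decision loop measurable, needs almost no tooling. If an exercise captures when each deconfliction request opens and closes, a per-sprint report is a few lines of arithmetic. The function below is a minimal sketch with invented names; the durations would come from exercise logs.

```python
def loop_time_report(durations_min: list[float], target_min: float) -> dict:
    """Summarize one decision loop's timing against a target:
    median, 90th percentile, and the share of events beating the target.
    Inputs are per-event durations in minutes captured during an exercise."""
    xs = sorted(durations_min)
    n = len(xs)
    p50 = xs[n // 2] if n % 2 else (xs[n // 2 - 1] + xs[n // 2]) / 2
    p90 = xs[min(n - 1, int(0.9 * n))]
    return {
        "median_min": p50,
        "p90_min": p90,
        "pct_within_target": 100 * sum(d <= target_min for d in xs) / n,
    }
```

Tracking the 90th percentile, not just the median, matters: the slowest deconflictions are the ones that cost fires windows.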
Where AI-enabled command and control goes next
The Army's next-generation C2 prototype effort is testing more than software. It's testing whether the institution can build AI-ready command and control in a way that keeps pace with evolving threats, commercial innovation, and the realities of cyber risk.
For the broader AI in Defense & National Security landscape, Ivy Sting is a reminder that operational AI isn't primarily a model problem. It's a systems problem: data, workflows, governance, and disciplined iteration with soldiers in the loop.
If you're building or buying C2 capabilities in 2026, the forward-looking question isn't "Does it have AI?" It's this: Can it improve every month without becoming less secure, and can operators feel the difference in the time it takes to act?