AI-Ready Information Groups: Prove Value, Not Hype

AI in Defense & National Security • By 3L3C

Measure whether MEF Information Groups improve decision speed and effects. See how AI-enabled workflows make information operations testable and scalable.

Marine Corps • Information Operations • Command and Control • Defense AI • Force Design • Decision Support

The Marine Corps has been running an experiment for roughly a decade: put most information-related capabilities under a single formation at each Marine expeditionary force (MEF) and see if it creates real operational advantage. The formation is the MEF Information Group (MIG). The argument around it has become familiar: one side says the MIG hasn’t delivered and should be closed; the other says it’s valuable but hamstrung by broader headquarters friction.

Here’s the problem with how this debate often plays out: it drifts toward slogans (“information matters”), org charts (“we have the people and tools”), or tech talismans (“we field Maven Smart System”). None of that answers the only question that should decide the MIG’s future: does this headquarters produce outcomes that are measurably better than distributing the same people across the MEF staff and subordinate units?

In this entry in our AI in Defense & National Security series, I’m going to push the discussion one step further: AI is the fastest way to make the MIG testable. Not because AI is magical, but because modern information operations and command and control (C2) are increasingly about speed, prioritization, and decision quality under overload—exactly the areas where AI-enabled workflows can be instrumented, measured, and improved.

The MIG debate is about stewardship and proof

The most useful framing is simple: everyone agrees information advantage matters; the dispute is whether the MIG is the best way to buy it. That distinction changes how you evaluate success.

A MIG can’t be justified by the existence of information warfare doctrine, or by the fact that it contains intelligence, signals intelligence, communications, and other information-related capabilities. Those are inputs. The MIG needs to prove it produces net-new effects that would not happen—at the same quality and speed—without the group headquarters.

Brian Kerg’s argument (and it’s a solid one) is essentially a call for intellectual hygiene:

  • Don’t treat critics of the MIG as critics of information operations.
  • Don’t confuse “we own the capabilities” with “we improved outcomes.”
  • Don’t use a program of record as a proxy for an operational concept.

If you care about readiness, budgets, or force design, that approach is the only responsible one. Scarce resources don’t care about good intentions.

Stop measuring the wrong things (especially “tools fielded”)

The wrong test is whether the MIG can field modern systems. By late 2025, AI-enabled tooling is becoming standard across the joint force. So “we have an AI platform” is table stakes, not a justification for a new echelon.

A better test is whether the MIG changes decisions and outcomes at the MEF level in ways a traditional staff construct can’t.

Three common traps that create heat but not light

  1. Doctrine-as-proof. Doctrine is important, but doctrine doesn’t validate an organization. It describes intent; it doesn’t measure performance.

  2. Force-provider logic. If the MIG mostly “chops” personnel to other units who then employ them in conventional staff lanes, the MIG is acting like an administrative container. That can be necessary, but it’s not an operational advantage.

  3. Tech-as-proof. Pointing to an AI system (or any tool) often smuggles in an unspoken claim: “technology equals effect.” In practice, effects come from authorities, workflows, training, and decision rights—with technology as an accelerator, not a replacement.

This is where AI can actually help the debate: AI-enabled processes can be evaluated with clean metrics. Hand-wavy org arguments can’t.

The only test that matters: is the MIG greater than the sum of its parts?

A MIG is justified only if its headquarters makes subordinate information capabilities more effective together than apart. In practical terms, the MIG should raise the MEF’s ability to:

  • Sense faster (collect, fuse, and interpret signals)
  • Decide faster (prioritize, warn, recommend)
  • Act smarter (synchronize effects across the information environment)
  • Learn faster (iterate based on feedback)

The element most associated with that “greater than the sum” claim is the Information Coordination Center (ICC)—a mechanism to plan, coordinate, and integrate information activities for the MEF commander.

So the evaluation should be blunt:

If you removed the ICC tomorrow, what measurable capability would the MEF lose—and could the MEF staff recreate it without adding personnel?

If the answer is “not much,” critics are right. If the answer is “we’d lose tempo and coherence in the information fight,” advocates need to prove it with data.

What “proof” looks like at MEF scale

To make this real, the MIG needs to define outcomes in operational terms, not staff activity terms.

Bad measures (activity):

  • number of briefs produced
  • number of collection requests submitted
  • number of “effects planned”

Good measures (outcome):

  • time from indicator to warning to the commander
  • percentage of high-priority targets/actors detected within a defined window
  • reduction in friendly signature exposure (communications discipline, emissions control compliance)
  • speed and accuracy of target development and deconfliction
  • adversary decision disruption indicators during exercises (measured via red cell logs)

These are measurable in training, wargames, and command post exercises—especially if you instrument the workflow.
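
To make “instrument the workflow” concrete, here is a minimal sketch of scoring two of the outcome measures above from a time-stamped exercise log. The field names (indicator_time, warning_time, priority, detected) and the 30-minute window are illustrative assumptions, not an existing Marine Corps data standard.

```python
# Minimal sketch: score two outcome measures from an instrumented exercise log.
# All field names and values are notional, for illustration only.
from datetime import datetime, timedelta
from statistics import median

exercise_log = [
    {"event": "EW-001", "priority": "high", "detected": True,
     "indicator_time": datetime(2026, 3, 4, 10, 2), "warning_time": datetime(2026, 3, 4, 10, 19)},
    {"event": "EW-002", "priority": "high", "detected": False,
     "indicator_time": datetime(2026, 3, 4, 11, 40), "warning_time": None},
    {"event": "EW-003", "priority": "low", "detected": True,
     "indicator_time": datetime(2026, 3, 4, 12, 5), "warning_time": datetime(2026, 3, 4, 13, 1)},
]

# Outcome 1: median time from indicator to warning reaching the commander.
lags = [(e["warning_time"] - e["indicator_time"]).total_seconds() / 60
        for e in exercise_log if e["warning_time"] is not None]
print(f"median indicator-to-warning: {median(lags):.0f} min")

# Outcome 2: share of high-priority actors detected inside a defined window.
window = timedelta(minutes=30)
high = [e for e in exercise_log if e["priority"] == "high"]
hits = [e for e in high if e["detected"] and e["warning_time"]
        and e["warning_time"] - e["indicator_time"] <= window]
print(f"high-priority detected within window: {len(hits)}/{len(high)}")
```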

Where AI actually fits: making information operations testable and scalable

AI in defense and national security is at its best when it reduces cognitive load, flags anomalies, and improves prioritization. That maps directly onto the MEF problem: the commander doesn’t need more data; the commander needs better decisions under time pressure.

AI-enabled C2: the MIG should own the workflow, not just the software

The MIG’s opportunity isn’t “having AI.” It’s building a MEF-level decision support pipeline that reliably turns messy inputs into operational recommendations.

A practical AI-enabled ICC workflow looks like this:

  1. Ingest: multi-source data (SIGINT, open sources, tactical reports, cyber indicators, maritime/air tracks)
  2. Triage: machine-assisted clustering, entity resolution, and anomaly detection to highlight what changed
  3. Prioritize: commander’s critical information requirements encoded as machine-readable filters + human review
  4. Recommend: explainable summaries with confidence, gaps, and suggested collection/actions
  5. Synchronize: align information activities (deception, OPSEC, EW coordination, messaging, cyber) to the scheme of maneuver
  6. Feedback: capture outcomes and retrain/adjust playbooks

If this sounds like “analytics,” good. Information operations at MEF level are increasingly an analytics-and-authorities problem.
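
As a sketch of what “analytics-and-authorities” looks like in practice, the snippet below models steps 2 through 4 of the workflow above: reports carrying an anomaly score are filtered against commander’s critical information requirements encoded as machine-readable rules, then queued for human review. The Report and CCIR schemas, field names, and thresholds are notional assumptions, not a fielded system’s API.

```python
# Sketch of steps 2-4 of the ICC workflow: triage output filtered against
# machine-readable CCIRs, then ranked for human review. Schemas are notional.
from dataclasses import dataclass, field

@dataclass
class Report:
    source: str           # e.g., "SIGINT", "OSINT", "tactical"
    entity: str           # resolved actor/platform name from entity resolution
    anomaly_score: float  # output of the triage/anomaly model
    tags: set = field(default_factory=set)

@dataclass
class CCIR:
    name: str
    match_tags: set       # report tags that satisfy this requirement
    min_anomaly: float    # threshold before it is surfaced for review

def prioritize(reports, ccirs):
    """Keep only reports that satisfy an encoded CCIR; rank by anomaly score."""
    queue = []
    for r in reports:
        for c in ccirs:
            if r.tags & c.match_tags and r.anomaly_score >= c.min_anomaly:
                queue.append((c.name, r))
    return sorted(queue, key=lambda pair: pair[1].anomaly_score, reverse=True)

ccirs = [CCIR("adversary EW posture change", {"radar", "emission-change"}, 0.7)]
reports = [Report("SIGINT", "coastal radar site 12", 0.83, {"radar", "emission-change"})]
for ccir_name, report in prioritize(reports, ccirs):
    print(ccir_name, "->", report.entity, report.anomaly_score)
```

The point of encoding CCIRs this way is not automation for its own sake; it makes the prioritization rules explicit, auditable, and testable in exercises.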

What AI improves (and what it doesn’t)

AI can improve:

  • speed of triage and correlation
  • consistency of watchfloor decisions and handoffs
  • early warning through anomaly detection
  • staff bandwidth by automating routine parsing and reporting

AI does not fix:

  • unclear authorities and delegation
  • poor data governance
  • fragmented decision rights across staff sections
  • training gaps and undisciplined battle rhythms

That last point matters. If your workflow is broken, AI will help you break it faster.

Authorities are the hidden constraint—and AI can expose it

One critique of MEF-level information formations is that key authorities often sit higher (combatant command, joint task force). That’s real. But it cuts two ways:

  • If the MIG can’t execute effects, it should still be able to accelerate requests, package options, and reduce friction for the MEF commander.
  • If the MIG can execute some effects, it must prove it’s doing so in ways that materially support maneuver.

AI-enabled workflow instrumentation can make authorities problems obvious:

  • How long do requests sit before approval?
  • How many requests bounce due to formatting, classification, or missing justification?
  • What percent of requested effects arrive too late to matter?

When you measure those timelines, you stop arguing about personalities and start arguing about system design.
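
A minimal sketch of those three questions as computed metrics, using a notional request log; the record fields (drafted, approved, bounced, effect_delivered, decision_deadline) are hypothetical and exist only to show what instrumentation makes visible.

```python
# Sketch: three authority-friction metrics from a notional request log.
from datetime import datetime
from statistics import mean

requests = [
    {"id": "RFX-101", "drafted": datetime(2026, 3, 4, 9, 0), "approved": datetime(2026, 3, 4, 15, 30),
     "bounced": 1, "effect_delivered": datetime(2026, 3, 5, 2, 0), "decision_deadline": datetime(2026, 3, 4, 22, 0)},
    {"id": "RFX-102", "drafted": datetime(2026, 3, 4, 10, 15), "approved": datetime(2026, 3, 4, 12, 0),
     "bounced": 0, "effect_delivered": datetime(2026, 3, 4, 18, 0), "decision_deadline": datetime(2026, 3, 4, 20, 0)},
]

# How long do requests sit before approval?
dwell_hours = [(r["approved"] - r["drafted"]).total_seconds() / 3600 for r in requests]
# How many bounce for rework (formatting, classification, justification)?
bounce_rate = sum(1 for r in requests if r["bounced"]) / len(requests)
# What share of effects arrive too late to matter?
too_late = sum(1 for r in requests if r["effect_delivered"] > r["decision_deadline"]) / len(requests)

print(f"mean approval dwell: {mean(dwell_hours):.1f} h")
print(f"bounce rate: {bounce_rate:.0%}")
print(f"effects arriving too late: {too_late:.0%}")
```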

A concrete way to evaluate “authority friction”

During an exercise, track a set of time-stamped events:

  • T0: indicator detected (e.g., adversary radar activation pattern changes)
  • T1: staff validates and tags
  • T2: request drafted
  • T3: request approved at MEF
  • T4: submitted to higher / joint
  • T5: authority grants / denies
  • T6: effect delivered
  • T7: operational decision executed

If T6 routinely lands after T7, the MIG isn’t helping the commander win the time fight—even if the staff work is “correct.”
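
A short sketch of scoring one indicator thread against those checkpoints; the timestamps are invented for illustration, and the only logic that matters is the interval table and the T6-after-T7 check.

```python
# Sketch: score one indicator thread against the T0-T7 checkpoints.
# Timestamps are notional exercise data.
from datetime import datetime

CHECKPOINTS = ["T0", "T1", "T2", "T3", "T4", "T5", "T6", "T7"]

thread = {
    "T0": datetime(2026, 3, 4, 8, 0),    # indicator detected
    "T1": datetime(2026, 3, 4, 8, 25),   # staff validates and tags
    "T2": datetime(2026, 3, 4, 9, 10),   # request drafted
    "T3": datetime(2026, 3, 4, 11, 0),   # request approved at MEF
    "T4": datetime(2026, 3, 4, 11, 30),  # submitted to higher / joint
    "T5": datetime(2026, 3, 4, 16, 0),   # authority grants
    "T6": datetime(2026, 3, 4, 21, 0),   # effect delivered
    "T7": datetime(2026, 3, 4, 19, 0),   # operational decision executed
}

# Interval between each consecutive checkpoint, in minutes.
for a, b in zip(CHECKPOINTS, CHECKPOINTS[1:]):
    delta = (thread[b] - thread[a]).total_seconds() / 60
    print(f"{a} -> {b}: {delta:.0f} min")

# The blunt test: did the effect arrive after the decision it was meant to support?
if thread["T6"] > thread["T7"]:
    print("LATE: effect delivered after the operational decision executed")
```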

A field guide: how leaders should run the MIG evaluation in 2026

If I were advising a MEF commander heading into the 2026 training cycle, I’d recommend a deliberately skeptical evaluation plan. Not hostile—skeptical.

1) Define the MIG’s “unique product” in one sentence

If any staff section could deliver the product without additional headcount, it isn’t unique.

A strong example:

“The ICC produces prioritized, fused, and actionable information effects packages that shorten the commander’s decide-and-act timeline.”

2) Run an A/B test in exercises

Do this at least twice:

  • A condition: ICC runs the full information integration battle rhythm.
  • B condition: legacy staff construct runs it (G-2 fusion + G-3 + comm + fires) with the same scenario pressure.

Hold scenario difficulty constant. Compare timelines, decision quality, and red cell impact.
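
One way to keep that comparison honest is a shared scorecard computed the same way for both conditions. The sketch below assumes a handful of instrumented runs per condition; the metric names (decisions_supported as a decision-quality proxy, red_cell_disruptions from red cell logs) and all values are placeholders, not real exercise data.

```python
# Sketch: a shared A/B scorecard across exercise runs. Values are placeholders.
from statistics import median

runs = [
    {"condition": "A_ICC",    "indicator_to_warning_min": 18, "decisions_supported": 9,  "red_cell_disruptions": 4},
    {"condition": "A_ICC",    "indicator_to_warning_min": 22, "decisions_supported": 11, "red_cell_disruptions": 5},
    {"condition": "B_legacy", "indicator_to_warning_min": 35, "decisions_supported": 8,  "red_cell_disruptions": 2},
    {"condition": "B_legacy", "indicator_to_warning_min": 41, "decisions_supported": 7,  "red_cell_disruptions": 3},
]

def summarize(condition):
    """Median of each metric across the runs for one condition."""
    subset = [r for r in runs if r["condition"] == condition]
    return {k: median(r[k] for r in subset) for k in
            ("indicator_to_warning_min", "decisions_supported", "red_cell_disruptions")}

for condition in ("A_ICC", "B_legacy"):
    print(condition, summarize(condition))
```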

3) Make AI part of the test, not a procurement story

AI should be introduced as workflow support:

  • automated correlation and triage
  • recommended courses of action
  • predictive alerts on likely adversary moves

Then evaluate whether the ICC plus AI:

  • reduces time to warning
  • reduces staff workload hours per decision
  • improves hit rate on priority targets/actors
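
A minimal before/after summary, assuming the same ICC battle rhythm is run with and without AI workflow support and scored on the three criteria above; the numbers are placeholders, not measured results.

```python
# Sketch: AI-on vs AI-off deltas on the three evaluation criteria. Placeholder values.
baseline = {"time_to_warning_min": 34, "staff_hours_per_decision": 6.5, "priority_hit_rate": 0.58}
with_ai  = {"time_to_warning_min": 21, "staff_hours_per_decision": 4.0, "priority_hit_rate": 0.71}

for metric in baseline:
    delta = with_ai[metric] - baseline[metric]
    print(f"{metric}: {baseline[metric]} -> {with_ai[metric]} (delta {delta:+.2f})")
```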

4) Tie resourcing to results

If the MIG can’t demonstrate outcome advantage, don’t “hope harder.” Rebalance:

  • shrink the headquarters
  • push talent to using units
  • keep only the functions that repeatedly prove value under stress

This is how you defend readiness while staying honest about opportunity cost.

What this means for AI in defense and national security

The MIG argument is a preview of the broader defense AI challenge: organizations want AI to justify structures, but AI is most valuable when it forces clarity about decisions, data, and accountability.

If the Marine Corps wants to “think first, adapt fast,” it should treat information formations like product teams:

  • define outputs
  • instrument performance
  • iterate based on measured outcomes

That’s also how you build credible acquisition and modernization stories. Senior leaders and Congress don’t need more promises; they need evidence.

The next year of debate shouldn’t be about whether information advantage matters. It does. The practical question is sharper: will the Marine Corps build AI-enabled information operations that measurably shorten the MEF’s decision cycle—and if so, which organization is accountable for that performance?

If you’re responsible for information warfare readiness, force design, or AI-enabled command and control, you can’t avoid that question. You can only decide whether you want to answer it with arguments—or with data.