AI, Information Warfare, and the MIG Test

AI in Defense & National Security · By 3L3C

A practical test for Marine Corps information groups—through the lens of AI-enabled information warfare and decision advantage.

Tags: AI in defense, information warfare, Marine Corps, command and control, information operations, defense innovation

A decade after standing up Marine expeditionary force information groups (MIGs), the Marine Corps is arguing about something deeper than an org chart. The real fight is over proof: what evidence should convince a commander that a formation designed to coordinate information effects is worth the people, money, and time it consumes?

That question lands differently in late 2025 than it did in 2015. The reason isn’t buzzwords—it’s operational reality. The information environment is now saturated with sensors, cheap drones, synthetic media, and automated cyber tooling. And the intelligence cycle is increasingly shaped by AI-enabled analytics that can compress hours of triage into minutes—if the unit has the structure and authorities to use it.

This post reframes the MIG debate as a case study for the broader AI in Defense & National Security series: integrating information capabilities (and now AI) isn’t mainly a technology problem. It’s a command-and-control design problem. If you want leads for AI-enabled defense work—strategy, integration, measurement, governance—this is the conversation your prospects are already having in uniform.

The MIG debate is really about organizational “proof of value”

The MIG debate isn’t about whether information matters. Marines across the force agree that information advantage supports maneuver—deception, surprise, tempo, and breaking an adversary’s cohesion. The dispute is narrower and more practical: Does a MIG create outcomes that wouldn’t happen otherwise?

That “otherwise” matters. Many of the same capabilities housed under MIGs existed in previous constructs (like the MEF headquarters group) and could be parceled out (“chopped”) to supported units. If a MIG ends up functioning primarily as a force provider—sending communicators to the S-6, intel Marines to the S-2, and so on—then critics can fairly ask whether the headquarters layer is mostly overhead.

Here’s the lens I’ve found most useful: intermediate headquarters are justified only when they produce coordination effects that exceed the sum of their subordinate parts. That’s true for a regiment. It’s true for an air control group. It’s true for an information group.

What not to use as evidence (and why it fails)

A lot of arguments sound persuasive but don’t actually test a MIG.

  • “Information is a warfighting function.” True, but irrelevant to whether a specific headquarters construct is necessary.
  • “There’s doctrine and literature supporting information operations.” Doctrine asserts intent; it doesn’t measure organizational performance.
  • “Look, the MIG has cool capabilities.” Capabilities can exist without proving the value of the integrating headquarters.
  • “We field AI tools like Maven Smart System.” Tools are increasingly enterprise-wide. Owning a license doesn’t prove the headquarters is producing better operational outcomes.

The hard truth: if you can’t describe what the MIG uniquely improves, you can’t defend it against resource trade-offs.

AI raises the stakes: speed without coordination becomes noise

AI changes the MIG question because it changes the tempo of information work.

AI-enabled intelligence analysis can:

  • triage ISR feeds faster,
  • detect patterns across multi-source data,
  • flag anomalies and likely deception,
  • accelerate target development and tracking,
  • support cyber defense with automated alerting and correlation.

But here’s the catch: faster analysis doesn’t automatically translate into better decisions. It can produce an “insight firehose” that overwhelms the very staff it’s supposed to help.

This is where the MIG concept, and especially its information coordination center, becomes the right kind of battleground. The right test isn’t “do we have AI?” The right test is:

Does the MIG turn information and AI outputs into synchronized actions that measurably improve commander decisions and adversary dilemmas?

If the answer is yes, MIGs become the natural organizational home for scaling AI-enabled information warfare. If the answer is no, AI will simply make the inefficiencies faster.

A practical framing: three loops a MIG must improve

To evaluate an information group in an AI-shaped environment, evaluate whether it improves three loops:

  1. Sensemaking loop (What’s happening? What’s real? What’s deception?)
  2. Decision loop (What matters now? What’s the priority? Who acts?)
  3. Effects loop (Did our information action change adversary behavior or protect our freedom of action?)

AI can help the first loop a lot. The MIG has to prove it improves loops two and three.

What a “real test” of the MIG looks like

If you want a MIG debate that produces decisions instead of heat, you need agreed metrics and an evaluation plan. That plan should compare MIG-enabled integration against a credible alternative (for example, integration done by legacy staff structures).

1) Test whether the MIG is greater than the sum of its parts

This is the foundational test for any headquarters.

A MIG should be able to demonstrate outcomes like:

  • Faster and more accurate prioritization of collection and information activities across the MEF
  • Reduced duplication between intelligence fusion, communications planning, and information operations coordination
  • Higher operational tempo without higher staff burn (fewer “all hands” emergencies)
  • Better cross-domain synchronization (EW, cyber, deception, OPSEC, comms discipline, ISR tasking, influence activities)

A concrete way to test it during exercises:

  • Run two comparable command post iterations.
    • Iteration A: information integration led by the information coordination center.
    • Iteration B: information integration led by traditional G-staff constructs.
  • Use the same scenario injects: adversary deception, comms denial, cyber compromise, misinformation event, contested ISR.
  • Measure decision quality and speed, plus downstream effects; the sketch below shows one way to score the comparison.
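
A minimal scoring harness for that comparison might look like this. The metric names and the numbers are illustrative placeholders only; real values would come from exercise data collectors.

```python
from statistics import mean

# Per-inject observations from exercise data collectors.
# All field names and numbers are illustrative placeholders (units: minutes).
iteration_a = [  # Iteration A: information coordination center leads integration
    {"inject": "comms-denial", "detect_to_decide": 18, "decide_to_execute": 12, "replans": 1},
    {"inject": "deception",    "detect_to_decide": 35, "decide_to_execute": 20, "replans": 0},
]
iteration_b = [  # Iteration B: legacy G-staff construct leads integration
    {"inject": "comms-denial", "detect_to_decide": 41, "decide_to_execute": 25, "replans": 2},
    {"inject": "deception",    "detect_to_decide": 55, "decide_to_execute": 30, "replans": 1},
]

def summarize(runs: list[dict]) -> dict:
    """Collapse one iteration's injects into comparable headline metrics."""
    return {
        "avg_detect_to_decide_min": mean(r["detect_to_decide"] for r in runs),
        "avg_decide_to_execute_min": mean(r["decide_to_execute"] for r in runs),
        "total_replans": sum(r["replans"] for r in runs),
    }

print("ICC-led:   ", summarize(iteration_a))
print("Legacy-led:", summarize(iteration_b))
```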

2) Measure “decision advantage” with observable indicators

“Information advantage” is hard to quantify, but decision advantage can be instrumented.

Look for indicators you can actually count (a minimal computation sketch follows the list):

  • Time from detection to commander decision (minutes/hours)
  • Time from decision to task execution (minutes/hours)
  • Replans per 24 hours caused by preventable information gaps
  • Blue-force signature events (avoidable emissions, compromised positions, repeatable patterns)
  • Adversary ISR success rate in the scenario (how often they find, fix, track)
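
Computing the timing indicators is deliberately unglamorous; the point is that they can be pulled straight from timestamped exercise logs. A minimal sketch, assuming ISO-8601 timestamps in the collection data:

```python
from datetime import datetime

def minutes_between(start: str, end: str) -> float:
    """Elapsed minutes between two ISO-8601 timestamps from exercise logs."""
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60

# One detection event as a data collector might log it (fields are illustrative).
event = {
    "detected": "2026-03-01T06:10",
    "decided":  "2026-03-01T06:42",
    "executed": "2026-03-01T07:05",
}

indicators = {
    "detect_to_decide_min": minutes_between(event["detected"], event["decided"]),
    "decide_to_execute_min": minutes_between(event["decided"], event["executed"]),
}
print(indicators)  # {'detect_to_decide_min': 32.0, 'decide_to_execute_min': 23.0}
```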

Then add a qualitative but disciplined assessment:

  • Were decisions made with clear confidence levels and assumptions?
  • Did the staff correctly identify deception or misinformation?
  • Did information activities support the scheme of maneuver—or run parallel to it?

3) Test the “authorities problem” as an operational constraint, not a complaint

A recurring critique is that MIGs have trained people but not the authorities to employ certain effects, which may sit at higher echelons.

Complaining about authorities is easy. Testing it is better.

A useful evaluation approach (see the sketch after this list):

  • Map which information effects require higher-level approval.
  • Measure the latency from request to approval to execution.
  • Identify what MIG-resident experts do that changes the outcome:
    • better requests,
    • cleaner targeting packages,
    • fewer returns for clarification,
    • better timing and synchronization with maneuver.
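
The same logging discipline works for the authorities pipeline. A sketch of the summary a staff could produce, with hypothetical field names and placeholder values:

```python
from statistics import mean, median

# Each record tracks one request for a higher-echelon-approved information effect.
# Field names and values are illustrative; times are hours from submission.
requests = [
    {"effect": "ew-jamming", "approved_at_hr": 6.0,  "executed_at_hr": 8.5,  "returns": 0},
    {"effect": "influence",  "approved_at_hr": 30.0, "executed_at_hr": 36.0, "returns": 2},
    {"effect": "cyber",      "approved_at_hr": 12.0, "executed_at_hr": 14.0, "returns": 1},
]

def authorities_report(reqs: list[dict]) -> dict:
    """Summarize where the approval pipeline actually spends its time."""
    return {
        "median_request_to_approval_hr": median(r["approved_at_hr"] for r in reqs),
        "mean_approval_to_execution_hr": mean(r["executed_at_hr"] - r["approved_at_hr"] for r in reqs),
        "total_returns_for_clarification": sum(r["returns"] for r in reqs),
    }

print(authorities_report(requests))
```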

If MIG expertise reduces approval latency and increases success rates, that’s value. If it doesn’t, the unit may be positioned at the wrong echelon—or needs a different delegation model.

Where AI fits inside a MIG without becoming a “tool chase”

Most organizations adopt AI the same way they buy gym memberships: optimism up front, very little change afterward.

If the Marine Corps wants MIGs (or their successors) to be AI-ready, the emphasis should be on process and governance, not demos.

Build an “AI-enabled information coordination center,” not an AI side project

An AI capability is operational only when it’s embedded in staff battle rhythm and authorities.

That means (see the checklist sketch below):

  • Data readiness: defined data owners, labeling/quality standards, retention policies, and release rules
  • Human workflows: who triages AI outputs, who validates, who briefs, who tasks
  • Model governance: how the unit handles model drift, adversary manipulation, and false positives
  • Red teaming: routine testing against deception, synthetic media, and spoofed signals
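
One way to make those four practice areas concrete is to turn them into an auditable checklist the unit scores on a schedule. The items below are illustrative, not doctrinal:

```python
# A minimal readiness audit for an AI-enabled information coordination center.
# Each item is a yes/no question a staff can answer today; all wording is illustrative.
READINESS_CHECKS = {
    "data_readiness": [
        "Every data feed has a named owner",
        "Labeling and quality standards are written down",
        "Retention and release rules are defined",
    ],
    "human_workflows": [
        "A named role triages AI outputs every battle rhythm cycle",
        "Validation, briefing, and tasking responsibilities are assigned",
    ],
    "model_governance": [
        "Drift and false-positive rates are tracked per model",
        "There is a documented response to suspected adversary manipulation",
    ],
    "red_teaming": [
        "Deception and synthetic-media injects are exercised on a schedule",
    ],
}

def readiness_score(answers: dict[str, list[bool]]) -> float:
    """Fraction of checks passed across all practice areas (0.0 to 1.0)."""
    flat = [passed for area in answers.values() for passed in area]
    return sum(flat) / len(flat)
```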

The MIG construct is a reasonable place to centralize these practices—if it’s already the place where information activities are integrated.

A simple rule for AI in information warfare

If an AI output doesn’t change a decision, a tasking, or a behavior, it’s just a dashboard.

That one sentence cuts through a lot of procurement theater.
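
If you want to operationalize the rule, it can even be written as a filter. A toy sketch, with hypothetical field names standing in for whatever your C2 system actually records:

```python
def is_just_a_dashboard(output: dict) -> bool:
    """An AI output earns its keep only if it is linked to a decision, a tasking,
    or an observed behavior change. Field names here are illustrative placeholders."""
    downstream = ("linked_decision", "linked_tasking", "behavior_change")
    return not any(output.get(key) for key in downstream)

# Example: an alert that was briefed but never changed anything.
alert = {"id": "isr-041", "briefed": True, "linked_decision": None}
assert is_just_a_dashboard(alert)
```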

Practical recommendations for leaders evaluating MIGs (or designing the next thing)

Whether you think MIGs should be optimized, redesigned, or replaced, the path forward looks similar.

  1. Define the “unique output” of the headquarters. If the answer is “providing people,” you don’t have a headquarters justification.
  2. Compare against a real alternative. Don’t compare MIG performance to a strawman. Compare it to a staffed, resourced legacy construct.
  3. Instrument the decision cycle. Track time, accuracy, replans, and signature events. If you can’t measure it, you can’t defend it.
  4. Treat authorities as a design variable. Either fix delegation pathways or move the capability to the echelon that can execute.
  5. Make AI serve the battle rhythm. AI belongs where decisions are made and synchronized—not where it’s easiest to install software.

These are also the same steps defense organizations should use when evaluating AI-enabled intelligence analysis, cyber fusion cells, and operational C2 modernization programs.

What this means for AI in Defense & National Security in 2026

The Marine Corps’ MIG debate is a preview of what every military and national security organization is dealing with: information capabilities are easy to buy and hard to operationalize. AI accelerates that tension.

If the MIG (through the information coordination center) proves it can translate AI-enabled sensing into coordinated actions and measurable effects, it becomes a model worth copying. If it can’t, then the Marine Corps will still need information advantage—just through a different structure.

If you’re building, buying, or integrating AI for defense organizations, the question to keep in front of you is simple: what organizational unit turns AI outputs into decisions and effects—and how will you prove it?

Want to sanity-check your own AI-in-C2 roadmap against measurable outcomes? The best place to start is your decision cycle—and the org design that owns it.
