AI in Military Education: Train With It, Not Under It

AI in Defense & National Security••By 3L3C

A practical middle path for AI in military education: use AI to accelerate learning, but enforce AI-free mastery so leaders don’t surrender judgment.

Professional Military EducationHuman-Machine TeamingAI GovernanceDefense TrainingDecision-MakingNational Security Education
Share:

Featured image for AI in Military Education: Train With It, Not Under It

AI in Military Education: Train With It, Not Under It

A surprising pattern is showing up in professional military education (PME): students who routinely challenge each other will often treat AI output like a staff judge advocate’s memo—polished, authoritative, and basically “safe to trust.” That deference is the real risk.

AI in defense and national security isn’t a future concept anymore. It’s already embedded in intelligence analysis, cybersecurity triage, targeting support, logistics forecasting, and mission planning. So the military classroom has to adapt. But there’s a trap here: if PME teaches officers to use AI without teaching them how to resist it, we’re not building better decision-makers—we’re building better prompt operators.

Matthew Woessner’s argument lands because it avoids a false choice. The right approach isn’t banning AI, and it isn’t permissive “use it for everything.” The right approach is a middle path: train leaders to collaborate with AI while proving they can operate without it. That’s not academic purity. That’s national security readiness.

The real problem: AI makes weak thinking look strong

AI’s most dangerous trait in the classroom is plausibility. Large language models can produce fluent, confident answers that feel like expertise—even when they’re incomplete, biased, or wrong. In military contexts, that’s a direct analog to how bad intelligence can still be compelling if it’s well-presented.

Woessner describes a core failure mode: AI can project the illusion of objectivity while quietly embedding assumptions. In his example, an AI treated a contested interpretation of the Geneva Conventions as settled law, and only corrected course after sustained, expert pushback. That matters because most students won’t push back. Not because they’re lazy—because they’re busy, under time pressure, and the output looks “staff-ready.”

Here’s the uncomfortable truth I’ve seen across AI deployments in enterprises: people don’t outsource judgment because they’re irresponsible; they outsource it because the workflow rewards speed and confidence. PME is training people for environments where speed matters. That’s why PME must explicitly teach where AI confidence is earned—and where it’s counterfeit.

Why this matters for national security decision-making

PME doesn’t exist to produce great essays. It exists to produce leaders who can:

  • interpret ambiguous information under pressure
  • identify what’s missing, not just what’s present
  • challenge assumptions without derailing execution
  • write and brief with precision

Those are the same skills required to use AI effectively in intelligence analysis and mission planning. AI can help surface options. It can’t own the decision.

A leader who can’t argue against their AI’s recommendation isn’t collaborating with AI—they’re complying with it.

The middle path: teach AI collaboration while protecting fundamentals

The best AI policy for military education is “use AI, then prove you didn’t need it.” That’s the middle way: normalize AI as a tool while protecting the core skills that keep leaders from becoming dependent.

The calculator analogy is helpful, but PME has higher stakes. Calculators don’t shape your worldview. Language models can.

A practical middle-path model has three parts:

  1. Build skepticism through routine critique of AI outputs
  2. Reveal the invisible hand behind AI behavior and constraints
  3. Require AI-free competence checks throughout the program

Each part aligns directly with real-world operational risk: deception, bias, fragility under degraded comms, and adversarial manipulation.

Principle 1: Make AI fallibility a training objective, not a warning slide

Students should leave PME with a reflex: “interrogate the output.” Not occasionally. Every time it matters.

One-off lectures about hallucinations won’t do it. The learning has to be experiential and repeated—like weapons safety or operational risk management.

A classroom method that actually sticks: “Red Team the bot”

If you want students to stop treating AI like an oracle, force them to attack it.

Run a weekly short exercise:

  • Provide an AI-generated summary of a complex security issue (e.g., gray-zone coercion, escalation dynamics, alliance signaling)
  • Assign students roles:
    • Verifier: checks claims against assigned readings
    • Bias-hunter: identifies framing choices and normative assumptions
    • Omissions lead: lists critical missing context
    • Alternate model lead: compares responses across different AI tools (or different prompt strategies)

Grade the critique, not the summary.

This mirrors what good intelligence teams do: corroborate, contextualize, and test hypotheses. It also builds habits that transfer directly into AI-enabled intelligence analysis and cybersecurity workflows.

Bring adversarial pressure into the exercise

In defense contexts, we don’t just worry about random errors—we worry about adversarial influence.

PME should introduce students to the idea that AI systems can be:

  • manipulated through poisoned data
  • steered through system rules and hidden policies
  • exploited via prompt injection in connected tools

The point isn’t to turn every officer into a machine learning engineer. It’s to build instinctive caution: “If the output feels too clean, I should verify it harder.”

Principle 2: Teach the programmer’s invisible hand—because it shapes options

AI outputs aren’t neutral reflections of reality. They’re products of design choices. Those choices include safety rules, content moderation, preference tuning, and implicit values about what “harm” means.

Woessner highlights something instructors should not ignore: AI will sometimes shift from balanced analysis into advocacy, or refuse to engage at all—even when the user is asking for editing help or neutral analysis. In a military education context, that’s not just an annoyance. It can shape what students think is discussable.

Why this is a mission-planning issue, not a culture-war issue

When an AI tool steers analysis, the risk isn’t only political bias. The risk is option suppression.

In mission planning and national security strategy, commanders need:

  • a complete set of plausible courses of action
  • candid discussion of tradeoffs and second-order effects
  • clarity on legal and ethical constraints

If AI tools narrow the option space—whether by corporate policy, training data gaps, or adversary interference—then human planners must recognize it and compensate.

A practical PME exercise: “Same problem, different AI rules”

Give students the same scenario (for example, a maritime crisis escalation problem) and have them query:

  • an AI configured for maximum caution
  • an AI configured for creativity/brainstorming
  • an AI configured for policy compliance and risk avoidance

Then ask:

  • What options disappeared?
  • What assumptions changed?
  • Which version felt most “reasonable,” and why?

This teaches an operationally relevant point: AI is part analyst, part policy artifact.

Principle 3: Require AI-free competence checks—or dependence becomes the curriculum

If you never force independent performance, you won’t get it. That’s true in physical training, and it’s true in intellectual training.

The most useful policy shift PME can make is also the least glamorous: create AI-free checkpoints across the program.

Woessner points to oral comprehensive exams as a strong integrity and mastery safeguard. They work for a reason: they test understanding, not formatting.

What AI-free checkpoints should look like in 2026 PME

A workable model combines short, frequent checks with a few high-signal evaluations:

  • Closed-note concept checks (15–20 minutes): key doctrine, terms, causal relationships
  • Blue-book analytical writing: short argument with explicit evidence standards
  • Oral defenses: “Explain your logic, then respond to critique”
  • In-class wargame debriefs: articulate decisions under time pressure

These aren’t anti-technology. They’re pro-competence.

Pair AI-enabled assignments with accountability

PME should still assign AI-enabled work—because that’s the real world. The trick is pairing it with transparency and verification:

  • Require students to submit:
    • their prompts
    • the AI output
    • a short “confidence assessment” explaining what they verified
    • a list of what they chose not to use and why

This builds the habit we want in operational settings: auditability. When AI supports intelligence analysis or mission planning, leaders should be able to explain what they trusted, what they checked, and what they rejected.

What this means for AI readiness across defense and national security

The military classroom is a preview of the force. If PME gets AI integration right, it produces leaders who can adopt AI across domains without becoming brittle:

  • Intelligence analysis: AI accelerates pattern-finding, but humans own sourcing standards and analytic tradecraft.
  • Cybersecurity: AI helps triage alerts, but humans verify incidents and prevent automation from amplifying false positives.
  • Mission planning: AI can generate courses of action, but humans test feasibility, legality, and strategic risk.
  • Autonomous systems: AI can optimize behaviors, but humans define intent and constraints.

A force trained to question AI will outperform a force trained to obey it.

And there’s a deterrence angle. Adversaries don’t need to “beat” US AI to benefit from it. They just need to shape, spoof, or saturate the inputs so AI-enabled teams move faster in the wrong direction.

A practical implementation checklist for PME leaders

If you’re designing or updating a PME program now, these are the moves that matter most:

  1. Write an AI use policy that’s behavior-based (verification, disclosure, audit trails), not tool-based.
  2. Teach AI failure modes early, then reinforce them weekly with critique drills.
  3. Institutionalize AI-free assessments so independent reasoning remains mandatory.
  4. Grade the reasoning chain, not the polish of the product.
  5. Standardize an “AI collaboration memo” format for major assignments: prompts, outputs, verification steps, confidence level.
  6. Run cross-model comparisons to expose how “invisible hand” constraints shape answers.
  7. Treat adversarial manipulation as a core lesson, not an elective topic.

These steps create officers who can use AI for speed while keeping human judgment in command.

Where this series goes next

This post sits squarely in our AI in Defense & National Security series for a reason: classroom norms become operational norms. The same habits that prevent a student from turning in an AI-written paper will prevent a staff from briefing an AI-generated assessment that hasn’t been stress-tested.

AI will keep improving. Interfaces will get easier. Outputs will sound more confident. That’s exactly why PME must build leaders who can say, “Show me your reasoning, and show me what you might be missing.”

If you’re building AI capabilities for defense training, intelligence analysis, cybersecurity, or mission planning, the next step isn’t just adopting tools—it’s adopting governance, evaluation methods, and human-in-the-loop workflows that hold up under pressure.

What would change in your organization if every AI-assisted recommendation had to survive a five-minute oral cross-examination?

🇺🇸 AI in Military Education: Train With It, Not Under It - United States | 3L3C