Teach AI Without Losing Military Judgment
A practical middle way for AI in professional military education: build AI fluency without surrendering judgment, through skepticism, transparency, and AI-free checkpoints.
A capable staff officer can draft a clean memo in an hour. A large language model can produce something memo-shaped in 30 seconds. The dangerous part is that both outputs can look equally “professional” at first glance.
That’s why the real challenge of AI in defense and national security isn’t access to tools; it’s building leaders who can work with AI without handing over their judgment. Professional military education (PME) is where those habits form. If PME gets AI wrong, we won’t just graduate students who write faster; we’ll graduate decision-makers who are easier to mislead.
Matthew Woessner’s argument about a “middle way” for AI in the military classroom is the right frame: banning AI is fantasy, and permissive anything-goes policies quietly hollow out the skills we’re supposed to be sharpening. The better approach is structured collaboration—training students to use AI for speed and breadth while proving they can still reason, write, and brief without it.
The real risk isn’t cheating—it’s cognitive surrender
The core problem is simpler than most policy memos admit: AI produces plausible authority on demand, and that tempts smart people into intellectual autopilot.
In national security settings, that temptation is amplified. Students and practitioners often operate under time pressure, information overload, and strong incentives to “get to an answer.” AI tools appear to solve the hardest part—synthesis—by summarizing readings, proposing courses of action, and generating articulate justifications. But synthesis is exactly where judgment lives.
Woessner describes a pattern many instructors have now seen firsthand: students will argue with classmates but defer to a chatbot. That’s not because students are naive; it’s because LLMs speak with calm confidence, provide orderly structure, and rarely show their uncertainty in a way that humans naturally recognize.
Here’s a sentence that should be posted in every PME faculty lounge:
The biggest AI failure mode in education isn’t wrong answers; it’s unearned confidence in answers that sound right.
In defense and national security work—intelligence analysis, operational planning, targeting, cyber incident response—“sounds right” is not a quality standard.
A practical “middle way” for AI in professional military education
PME needs an AI policy that does two things at once:
- Build AI fluency (because graduates will use these tools in the real world)
- Protect foundational competence (because AI fluency without fundamentals creates dependence)
Woessner’s middle ground is essentially the same logic we’ve used for decades with calculators, navigation aids, and decision support systems: you can use the tool, but you must first prove you understand what it’s doing and what it can’t do.
The best PME programs will make AI part of the curriculum and build structured moments where AI is unavailable.
What “AI collaboration” should look like in a defense classroom
AI collaboration works when the machine is treated like an aggressively fast staff assistant:
- It can generate options, but it doesn’t choose
- It can summarize, but it doesn’t set relevance
- It can draft, but it doesn’t own the argument
- It can critique, but it doesn’t decide what to keep
In practice, that means assigning AI tasks that amplify learning instead of replacing it. A few examples that fit national security education:
- Red-team prompts: Ask the model to attack a student’s argument, then require the student to rebut with evidence from assigned readings.
- Assumption surfacing: Have AI list implicit assumptions in a proposed course of action; students must validate or reject each assumption.
- Alternative framing drills: Generate three competing problem statements for the same scenario (deterrence failure, logistics risk, alliance politics) and brief which one drives better decisions.
- Structured analytic technique practice: Use AI to produce an initial analysis of competing hypotheses (ACH) matrix or indicators list, then have students correct it and document their changes (a bare-bones example of such a matrix follows below).
This approach aligns with how AI is increasingly used in defense operations: accelerating staff work while keeping humans accountable for the logic.
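To make the last drill concrete, here is a minimal sketch, in Python, of the kind of ACH matrix an AI tool might draft and a student would then correct. The hypotheses, evidence items, and ratings are invented placeholders, not doctrine or real reporting.

```python
# Analysis of competing hypotheses (ACH), bare-bones version: rate each piece of
# evidence against each hypothesis, then rank hypotheses by how much evidence is
# inconsistent with them (ACH weighs inconsistencies, not confirmations).

RATING_WEIGHT = {"C": 0, "N": 0, "I": 1}  # C = consistent, N = neutral, I = inconsistent

hypotheses = ["H1: exercise only", "H2: coercive signaling", "H3: prelude to blockade"]

# Evidence item -> rating against each hypothesis, in the order listed above.
matrix = {
    "Increased naval logistics activity":  ["C", "C", "C"],
    "No reserve mobilization":             ["C", "N", "I"],
    "State media de-escalation language":  ["C", "C", "I"],
}

def rank_hypotheses(matrix, hypotheses):
    """Return (hypothesis, inconsistency count) pairs, strongest hypothesis first."""
    counts = [0] * len(hypotheses)
    for ratings in matrix.values():
        for i, rating in enumerate(ratings):
            counts[i] += RATING_WEIGHT[rating]
    return sorted(zip(hypotheses, counts), key=lambda pair: pair[1])

for hypothesis, count in rank_hypotheses(matrix, hypotheses):
    print(f"{count} inconsistent item(s) -> {hypothesis}")
```

The student's job in the drill is to challenge exactly what the tool filled in: the evidence list, the ratings, and whether the matrix omits a hypothesis entirely.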
Principle 1: Train skepticism as a battlefield skill
Woessner’s first principle—students must understand AI’s fallibility—should be treated as mission-essential.
LLMs don’t “know” facts the way professionals use the term. They generate likely text based on patterns in training data and system instructions. That produces three predictable problems in security-relevant work:
- Hallucinated details (fabricated citations, invented unit names, fake quotes)
- Confident legal/ethical framing (presented as settled when it’s contested)
- Context collapse (missing the operational constraints that change the answer)
If you want future leaders who can use AI for intelligence analysis or operational planning, you need to train verification reflexes.
A simple classroom drill: the “three-point validation rule”
When a student uses AI output to support a claim, require three validations before it can enter discussion or writing:
- Primary source check: Is there an authoritative document, doctrine, or dataset that supports it?
- Second-model check: Does a different model/tool frame the issue the same way?
- Human logic check: Can the student explain the reasoning chain without reading the AI text?
That third check is where the learning happens; a minimal way to log all three is sketched below.
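Here is one sketch of how those three checks could be recorded per claim, for a grading rubric or a student's disclosure appendix. The field names, the example claim, and the admissibility test are illustrative assumptions, not a fielded tool.

```python
# One record per AI-supported claim; the claim is admissible only if all three
# validations in the three-point rule are satisfied.
from dataclasses import dataclass

@dataclass
class ClaimValidation:
    claim: str
    primary_source: str        # authoritative document, doctrine, or dataset ("" if none found)
    second_model_agrees: bool  # does a different model/tool frame the issue the same way?
    own_words_reasoning: str   # the student's chain of logic, written with the AI text closed

    def admissible(self) -> bool:
        """Admit the claim to discussion or writing only if all three checks pass."""
        return bool(self.primary_source) and self.second_model_agrees and bool(self.own_words_reasoning)

entry = ClaimValidation(
    claim="Hypothetical: sustainment shortfalls constrain the proposed COA within ten days",
    primary_source="Assigned logistics reading, ch. 4",
    second_model_agrees=True,
    own_words_reasoning="Port throughput caps daily resupply below the planning factor, so the COA culminates early.",
)
print("Admissible for discussion:", entry.admissible())
```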
Principle 2: Teach the “invisible hand” behind AI behavior
Woessner’s second principle is the one many AI policies avoid because it feels political: AI systems contain rules, guardrails, and built-in preferences—not only from training data, but from design decisions made by developers and deployers.
In national security education, this matters for two reasons:
- Models can steer analysis through framing, not facts. If the system repeatedly emphasizes some risks and downplays others, students absorb that weighting.
- Models can refuse or reshape topics in ways that silently distort a student’s work. If a tool declines to engage, rewrites sensitive context, or “sanitizes” language, the student’s final product may be less accurate.
The fix isn’t paranoia; it’s literacy.
Make bias and guardrails observable, not theoretical
If I were designing a PME module for 2026, I’d require students to run the same prompt through multiple approved systems and compare the outputs (a small comparison harness is sketched below):
- What facts are included or omitted?
- Where does the tone shift from analysis to advocacy?
- Which terms trigger refusals or substitutions?
- Does the model treat similar scenarios differently depending on actor labels?
Then students brief their findings as if they were evaluating an intelligence source: reliability, consistency, and likely distortions.
Treat AI output like a source report, not a reference book.
That single habit travels well from the classroom to the J2 shop.
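Here is a minimal sketch of the comparison harness behind that drill. The `query_model` callables are stand-ins for whatever approved APIs or local gateways your institution actually provides, and the scenario prompt is invented for illustration.

```python
# Send one scenario prompt to several approved models and save the raw outputs
# side by side for student comparison; a refusal or error is itself a data point.
import json
from typing import Callable

ModelFn = Callable[[str], str]

def compare_models(prompt: str, models: dict[str, ModelFn]) -> dict[str, str]:
    """Return each model's raw response keyed by model name."""
    results = {}
    for name, query_model in models.items():
        try:
            results[name] = query_model(prompt)
        except Exception as err:
            results[name] = f"[no response: {err}]"
    return results

if __name__ == "__main__":
    scenario = ("You are supporting a staff exercise. List the top five risks of a "
                "hypothetical maritime blockade scenario and the assumptions behind each.")
    # Offline stubs so the sketch runs anywhere; replace with real approved endpoints.
    approved = {
        "model_a": lambda p: "Stub response A for: " + p[:40],
        "model_b": lambda p: "Stub response B for: " + p[:40],
    }
    print(json.dumps(compare_models(scenario, approved), indent=2))
```

Saving the raw, unedited outputs matters: the comparison only works if students see exactly what each tool produced, refusals included.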
Principle 3: Build “AI-free checkpoints” that actually mean something
This is where many institutions get nervous. They want AI adoption and easy grading. You can’t have both.
If students can outsource reading, summarizing, drafting, and revision to AI for an entire term, the course becomes a participation trophy. The strongest argument for AI-free checkpoints is not punishment—it’s skill assurance.
In mission-critical environments, leaders must perform under degraded conditions:
- no connectivity
- compromised systems
- contested cyber terrain
- adversary manipulation of data feeds
If PME doesn’t validate human performance without AI, the institution is certifying a capability it didn’t measure.
What AI-free checkpoints can look like (without turning PME into 1994)
You don’t need to ban AI across the board. You need targeted evaluations where AI can’t help.
Effective options:
- Oral defenses of written work: Students submit a paper (AI permitted within policy), then face a 10–15 minute oral where they must explain and defend key claims.
- Blue-book scenario writes: Short, timed writes on operational problems using only provided references.
- Reading accountability drills: Five-minute “cold brief” rotations in which students summarize the author’s argument, then identify one flaw.
- In-class analytic reps: Students build an argument map, indicators list, or decision matrix on the board.
Oral exams are especially powerful, and Woessner is right to highlight them. They’re hard to fake, they reward genuine mastery, and they mirror how senior leaders are judged in the real world: can you explain your thinking under pressure?
How this connects to real defense AI adoption in 2025–2026
The timing matters. As of late 2025, defense organizations are pushing AI into:
- intelligence triage and summarization
- open-source intelligence workflows
- cyber alert correlation
- staff drafting and briefing support
- wargaming and course-of-action generation
That’s precisely why PME must train how to collaborate rather than how to comply. If the first time an officer learns to challenge AI is after a bad recommendation reaches a commander, the learning is too expensive.
Adversaries also have a vote here. Manipulated training data, poisoned sources, and prompt-injection techniques aren’t academic. The people least likely to resist those attacks are the ones who never had to operate without a machine and never practiced interrogating machine output.
A PME-ready AI policy you can actually implement
If you’re responsible for curriculum, standards, or faculty development, here’s a workable starting template (one way to encode it in course materials is sketched after the list):
- Declare allowed uses by category (brainstorming, outlining, editing, summarizing) and disallowed uses (unattributed drafting, fabricated citations, bypassing assigned readings).
- Require AI disclosure in an appendix: tool used, purpose, and the exact prompts for any substantive analytic assistance.
- Create two lanes of assessment:
  - AI-enabled deliverables (where process and verification are graded)
  - AI-free checkpoints (where core competence is graded)
- Teach verification as doctrine: three-point validation for any AI-supported claim.
- Measure outcomes: track writing quality, oral defense performance, and student confidence in challenging AI.
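For teams that want to publish this template in a syllabus or learning-management system, here is a minimal sketch of the same policy as structured data. The field names are assumptions chosen for illustration, not an established schema.

```python
# The five-part policy template above, encoded so course materials and grading
# tools can reference one authoritative copy.
PME_AI_POLICY = {
    "allowed_uses": ["brainstorming", "outlining", "editing", "summarizing"],
    "disallowed_uses": ["unattributed drafting", "fabricated citations", "bypassing assigned readings"],
    "disclosure_appendix": {
        "required_fields": ["tool_used", "purpose", "exact_prompts"],
        "applies_to": "any substantive analytic assistance",
    },
    "assessment_lanes": {
        "ai_enabled": {"graded_on": ["process", "verification"]},
        "ai_free": {"graded_on": ["core competence"], "formats": ["oral defense", "timed scenario write"]},
    },
    "verification_doctrine": ["primary_source_check", "second_model_check", "human_logic_check"],
    "outcome_metrics": ["writing quality", "oral defense performance", "confidence challenging AI"],
}
```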
This isn’t about slowing modernization. It’s about ensuring modernization doesn’t produce leaders who can’t function when automation fails.
What leaders should demand from AI-ready graduates
PME graduates heading into an AI-saturated force should be able to do five things reliably:
- Explain an AI-assisted recommendation in plain language
- Audit sources, assumptions, and missing variables
- Challenge the model’s framing and propose alternatives
- Operate when AI tools are unavailable
- Detect when an AI system is steering, not supporting
If a program can’t demonstrate those outcomes, it’s not producing AI-ready leaders—it’s producing AI-dependent ones.
Next steps for teams building AI training in national security
If you’re building AI training for a command, schoolhouse, or defense organization, the fastest win is to pilot one course with:
- an AI collaboration assignment (red-team, assumption check, or alternative framing)
- a required AI disclosure appendix
- one AI-free oral defense
Then compare performance and student behavior. You’ll learn quickly whether the tool is strengthening thinking or replacing it.
The future of AI in defense and national security will belong to organizations that treat judgment as the scarce resource. PME is where that resource is grown, or quietly depleted. When AI gives your students a polished answer in 30 seconds, will they know what to do next?