AI in defense will shape officer identity and civilian control. Build decision provenance and governance so AI strengthens accountability, not politicization.
AI, Ethics, and the Post-Liberal Military Officer
A modern U.S. officer can spend a career “doing everything right” and still watch strategy fail. That’s not just a war-planning problem. It’s a legitimacy problem. When trust in institutions drops, politics gets sharper, and the public narrative about what the military is for fractures, the officer corps ends up carrying more than missions—it carries the weight of a contested civic order.
Peter Mitchell’s argument in The American Military Officer After Liberalism lands at an uncomfortable but necessary point: military professionalism is not timeless. It mirrors the political order that authorizes it. If liberal norms stop anchoring U.S. civil-military relations, the profession won’t stay frozen in the Huntington model of “neutral expert + objective civilian control.” It will adapt.
This matters for our AI in Government & Public Sector series because AI isn’t arriving in a stable institutional environment. It’s arriving in one where “neutrality” is contested, where bureaucratic process often substitutes for moral clarity, and where the military’s legitimacy bargain with society is under visible strain. AI can either accelerate that drift—or help rebuild a sturdier ethic of decision-making.
The real risk isn’t AI taking over—it’s process replacing judgment
The core risk is a military culture that equates professionalism with technical management. When “good order” becomes synonymous with metrics, staffing products, and compliance, you get an officer who is great at process and weaker at moral and strategic judgment.
Mitchell describes a shift from Huntington’s full triad—expertise, responsibility, corporateness—toward something thinner: expertise alone. The modern archetype becomes the systems manager: excellent at complexity, optimized for bureaucratic incentives, and increasingly evaluated by internal process performance rather than outcomes that the public can recognize as success.
AI can intensify this failure mode:
- If AI is adopted primarily as a speed tool (faster targeting cycles, faster staff work, faster intel fusion), it can reward the appearance of competence without improving strategic ends.
- If AI is treated as a neutrality shield (“the model recommended it”), it can launder accountability in exactly the way civil-military trust can’t afford.
- If AI outputs become promotion currency (“who can brief the best dashboard”), it can push officers further into technocracy and away from civic responsibility.
A sentence worth keeping on a sticky note: automation doesn’t remove politics; it changes where politics hides.
A practical definition: “ethical AI for defense” is governance, not vibes
In national security settings, ethical AI isn’t a poster on the wall. It’s the concrete set of controls that answers:
- Who is responsible for a decision when AI contributes?
- What evidence supports the recommendation?
- What constraints limit the system’s use?
- What oversight detects drift, bias, or misuse?
If your program can’t answer those four, it’s not “AI-enabled decision support.” It’s a reputational and operational liability.
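To make that less abstract, here's a minimal sketch of what answering those four questions looks like as a structured artifact rather than a slogan. It's illustrative Python, not any fielded system; every name in it is a hypothetical placeholder.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AIDecisionControl:
    """Minimal control record for one AI-assisted decision.

    Each field answers one of the four governance questions;
    anything left empty is a visible gap, not a hidden one."""
    decision_id: str
    responsible_officer: str           # who is accountable when AI contributes
    supporting_evidence: List[str]     # what evidence backs the recommendation
    use_constraints: List[str]         # what limits bound the system's use
    oversight_mechanisms: List[str]    # what detects drift, bias, or misuse

    def unanswered_questions(self) -> List[str]:
        """Return the governance questions this record leaves unanswered."""
        checks = {
            "responsibility": self.responsible_officer,
            "evidence": self.supporting_evidence,
            "constraints": self.use_constraints,
            "oversight": self.oversight_mechanisms,
        }
        return [name for name, value in checks.items() if not value]
```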
Post-liberal futures change what “professionalism” even means
Mitchell’s strongest move is treating “post-liberalism” as a family of outcomes, not one ideology. He sketches several models—patrimonial, mercenary, Heinleinian, neo-Prussian, chivalric—that could shape an officer corps if liberal norms weaken.
You don’t have to accept the labels to accept the mechanism: as civic consensus fragments, control tends to become more subjective. Not “follow the law and stay apolitical,” but “align with the prevailing cultural narrative, or you’re unprofessional.”
That’s where AI becomes a strategic variable. Different political orders will demand different kinds of AI:
- A loyalty-centered order will want AI for monitoring, vetting, and ideological enforcement.
- A market-outsourcing order will want AI for contract optimization, mission pricing, and privatized ISR.
- A service-citizenship order will want AI for screening, credentialing, and societal sorting.
So the question isn’t “Will AI be used in defense?” It already is. The question is which theory of legitimacy AI will reinforce.
Why the Huntington baseline doesn’t map cleanly to AI-era command
Huntington’s “objective control” assumes clean separations: civilians decide policy, military executes. AI blurs that separation by changing who can shape policy through technical detail.
In practice, the most influential actor often becomes the one who controls:
- the data,
- the model assumptions,
- the definitions (what counts as “threat,” “success,” or “risk”),
- and the briefing narrative.
That can strengthen civilian oversight if civilians can interrogate those inputs. It can also undermine oversight if “the algorithm” becomes an expertise moat that civilians can’t cross.
AI can support legitimacy—if you build it to defend accountability
The right goal for AI in national security isn’t faster decisions. It’s clearer responsibility. Speed matters, but legitimacy collapses when nobody can explain, defend, or audit what happened.
Here are three AI design patterns that actually support civil-military trust.
1) Decision provenance: make recommendations traceable
If your system can’t show why it recommended a course of action, you’re building a black box that invites politicization.
What works:
- Evidence trails: which sources mattered, which signals were weighted, and what uncertainty was present.
- Assumption registers: explicit model assumptions (e.g., expected adversary behavior) captured as artifacts, not implicit tribal knowledge.
- Counterfactual views: “If we assume X is wrong, does the recommendation change?”
This is how you keep AI from becoming a rhetorical weapon in staff fights.
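As a concrete illustration, here's what a provenance record could look like as a data structure: evidence weights, an assumption register, and counterfactual notes in one reviewable artifact. This is a hypothetical sketch, not a reference to any real program; the field names are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Assumption:
    """One explicit model assumption, captured as an artifact, not tribal knowledge."""
    statement: str       # e.g. "adversary maintains current posture"
    source: str          # who asserted it, and on what basis
    confidence: float    # analyst-assigned, 0.0 to 1.0

@dataclass
class ProvenanceRecord:
    """Traceability for a single AI recommendation."""
    recommendation: str
    evidence_weights: Dict[str, float]   # source -> weight it carried in the output
    uncertainty_notes: List[str]         # known gaps or conflicting signals
    assumptions: List[Assumption]        # the assumption register
    counterfactuals: Dict[str, str]      # "if assumption X is wrong" -> how the answer changes

    def dominant_sources(self, threshold: float = 0.2) -> List[str]:
        """Surface the sources that actually drove the recommendation,
        so reviewers can interrogate them instead of arguing with 'the algorithm'."""
        return [src for src, w in self.evidence_weights.items() if w >= threshold]
```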
2) Human-in-command is meaningless without “human-with-reasons”
Many programs claim “human-in-the-loop” as an ethical safeguard. In reality, humans can become rubber stamps when:
- tempo is high,
- the model output looks authoritative,
- and career risk punishes dissent.
A better standard is human-with-reasons:
- The approving commander must record why they accepted or rejected the AI recommendation.
- The record is reviewable.
- The organization learns from patterns of acceptance/rejection.
That’s not bureaucratic bloat. That’s a legitimacy shield.
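Here's a sketch of what that record might look like in practice, again with hypothetical names and values. The point is that acceptance and rejection become reviewable data instead of hallway memory.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

@dataclass
class CommandDecision:
    """One human-with-reasons record: the acceptance or override, plus the rationale."""
    recommendation_id: str
    commander: str
    accepted: bool
    rationale: str          # required free text, not a checkbox
    recorded_at: datetime

def acceptance_rate(log: List[CommandDecision]) -> float:
    """Share of AI recommendations accepted. A rate near 1.0 under high tempo
    is a rubber-stamp warning sign; a rate near 0.0 suggests the tool isn't trusted."""
    return sum(d.accepted for d in log) / len(log) if log else 0.0

# Hypothetical review entry:
log = [CommandDecision("rec-042", "CDR Example", False,
                       "Model assumes static air defenses; latest ISR contradicts that.",
                       datetime.now(timezone.utc))]
print(f"Acceptance rate: {acceptance_rate(log):.0%}")
```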
3) Model governance that matches the chain of command
AI governance fails when it’s bolted on as a compliance checklist. In defense settings, governance has to map to command realities.
Minimum viable governance for AI in defense operations:
- Operational owner (mission impact)
- Technical owner (model performance)
- Legal/ethics owner (policy constraints)
- Independent evaluator (red team / test authority)
If one person “owns” all of these, you don’t have governance—you have a single point of failure.
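The ownership map is simple enough to express as a structured artifact, which also makes the single-point-of-failure check mechanical. The sketch below is illustrative only; the roles come straight from the list above, and the names are placeholders.

```python
from dataclasses import dataclass

@dataclass
class ModelGovernance:
    """Minimum viable ownership map for one deployed model."""
    model_name: str
    operational_owner: str      # accountable for mission impact
    technical_owner: str        # accountable for model performance
    legal_ethics_owner: str     # accountable for policy constraints
    independent_evaluator: str  # red team / test authority

    def is_single_point_of_failure(self) -> bool:
        """True when one person holds every role: not governance, a failure mode."""
        return len({self.operational_owner, self.technical_owner,
                    self.legal_ethics_owner, self.independent_evaluator}) == 1
```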
Snippet-worthy truth: If your AI can’t be audited, it can’t be trusted in a legitimacy crisis.
The officer identity question AI can’t avoid
AI adoption forces an identity decision: are officers becoming better stewards of violence on behalf of a constitutional order—or better managers of institutional machinery?
Mitchell worries that the “postmodern officer” becomes untethered from a clear telos (an end or purpose), and that strategy degrades into technique. I agree with the diagnosis, and I’d add a modern accelerant: AI makes technique look like wisdom.
A dashboard can be perfectly up to date and still reflect a bad theory of the war. A model can be statistically strong and still optimize toward the wrong political objective.
Here’s what I’ve found when advising AI adoption in public sector environments: the programs that succeed don’t start with models. They start with decision rights and accountability.
“People Also Ask” (and the answers leaders actually need)
Does AI reduce politicization in the military? AI reduces politicization only when it increases transparency, auditability, and shared understanding of assumptions. Otherwise, it becomes a new arena for political contestation.
Can ethical AI help civilian control of the military? Yes—when civilians can interrogate AI inputs and when accountability is explicit. Ethical AI frameworks that don’t include decision provenance and independent evaluation are mostly theater.
What should defense leaders prioritize first: better models or better governance? Better governance. A great model inside a weak governance structure just accelerates failure.
A practical playbook for defense AI that reinforces national security values
If you’re building or buying AI for mission planning, intelligence analysis, or operational decision support, use this short checklist. It’s designed for leaders who want capability and legitimacy.
- Define the decision, not the data. Name the decision you’re improving (target nomination, route planning, force posture, collection prioritization).
- Write the “no-go” rules early. Specify prohibited uses, escalation requirements, and when AI must be ignored (a minimal sketch follows this list).
- Build for contested truth. Assume data will be incomplete, adversary-deceptive, and politically contested.
- Instrument accountability. Capture rationale, dissent, overrides, and outcomes as structured artifacts.
- Red-team the incentives. Ask: “Who benefits if the model is believed? Who benefits if it’s discredited?”
- Train narrative competence. Officers must explain AI outputs in plain language to civilians. If you can’t brief it cleanly, you can’t govern it.
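For the “no-go rules” item above, here's a minimal illustration of what writing them as artifacts rather than memos could look like. The types and triggers are hypothetical; the design choice is that a tripped rule forces a block or an escalation by construction.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NoGoRule:
    """One prohibited-use or escalation rule, written before fielding, not after an incident."""
    description: str      # e.g. "no autonomous target nomination"
    trigger: str          # condition that activates the rule
    required_action: str  # "block", "escalate to <authority>", or "set AI output aside"

def tripped_rules(rules: List[NoGoRule], active_conditions: List[str]) -> List[NoGoRule]:
    """Return the rules triggered by current conditions, so the block or escalation
    happens by design rather than by someone remembering the memo."""
    return [r for r in rules if r.trigger in active_conditions]
```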
This is where AI in government becomes more than technology procurement. It becomes institutional design.
Where this goes next for AI in Government & Public Sector
The military profession has always depended on a bargain: society grants it extraordinary authority, and the military demonstrates competence, restraint, and obedience to lawful civilian control. That bargain is harder to sustain when politics fragments and technology obscures accountability.
AI can help stabilize the bargain—but only if defense and national security leaders treat AI systems as governed instruments of state power, not just productivity tools.
If your team is evaluating AI for intelligence, mission planning, or decision support, the fastest way to reduce risk is to get serious about provenance, auditing, and command-aligned governance. Capability is table stakes. Legitimacy is the advantage.
Where do you think the next pressure point will be: AI in targeting, AI in personnel systems, or AI in domestic information environments?