Illiberal Politics Breaks Military Careers—and AI Readiness
A professional military doesn’t fail all at once. It fails in small, deniable steps: a politically timed firing here, a gagged inspector general there, a “temporary” rule-bending workaround that becomes standard practice. By the time the institution looks hollowed out, the damage is already baked into promotion boards, operational planning, and public trust.
Carrie Lee’s argument in “The Soldier in the Illiberal State is a Professional Dead End” lands because it’s not really about theory. It’s about incentives. In an illiberal—or backsliding—system, the safest career move for an officer is often the least ethical one: stay quiet, don’t document, don’t challenge, and don’t create friction for political leadership.
Here’s the twist that matters for the AI in Government & Public Sector conversation: the same conditions that corrode military professionalism also wreck AI readiness. Not because the force can’t buy software, but because AI depends on legitimacy, feedback, and truth-telling—exactly the things illiberal governance squeezes out.
Illiberal governance turns officership into a trap
Illiberal governance makes military careers a professional dead end because it forces officers into an unwinnable triangle: obeying lawful orders, maintaining professional ethics, and upholding constitutional principles. When the political system starts treating oversight, courts, and independent media as enemies, the officer’s “normal” operating environment becomes ethically unstable.
Lee pushes readers away from the comfortable thought experiment—what would a post-liberal military look like?—and toward the real question: at what point does continued service do more harm than good to the constitutional order an officer swore to support?
That question is uncomfortable for a reason. In a healthy system, most officers can spend an entire career without needing to test the boundaries of that oath. In a backsliding system, they’re asked to normalize things that used to be disqualifying.
The professional cost: legitimacy is the oxygen of the force
A profession isn’t just a job with uniforms and standards. It’s a social bargain: society grants the military special authority—especially the authority to use organized violence—because it trusts the institution to restrain itself through ethics, competence, and accountability.
When governance becomes illiberal, that bargain degrades fast:
- Ethics become optional when leadership rewards loyalty over candor.
- Accountability becomes performative when oversight is restricted.
- Public trust drops when the military is seen as a partisan tool.
And once legitimacy goes, recruitment, retention, and operational freedom go with it. You can’t “PR” your way out of that.
The hidden national security risk: a military that can’t tell the truth
A force that can’t speak honestly upward can still execute orders. It just can’t execute strategy.
Strategy needs a feedback loop: commanders report reality; civilians adjust goals; planners revise assumptions. Illiberal systems break that loop by punishing bad news.
This is where the national security implications get sharp. A military that’s incentivized to confirm political narratives rather than surface operational truth will:
- Misestimate adversary capabilities
- Underreport readiness gaps
- Hide logistics fragility
- Inflate success metrics
Those behaviors don’t just harm democracy—they create battlefield surprise. Modern warfare is already an environment of partial information and deception. A government that adds self-deception on top is choosing failure.
“Lawful but wrong” orders are the hard case
Lee is explicit that the dilemma isn’t only unlawful orders (those are clearer). The harder case is apparently lawful orders that still undermine the oath’s purpose—orders that normalize political repression, subvert oversight, or erode the culture of compliance with the law of armed conflict.
A mature military ethic has to prepare for that gray zone.
Preparation beats improvisation because humans are vulnerable to:
- ethical fading (small compromises stop feeling like compromises)
- slippery slopes (yesterday’s exception becomes today’s norm)
- post hoc rationalization (explaining away what you already did)
This is why Lee’s emphasis on pre-identified red lines and scenario planning is practical, not academic.
AI readiness collapses without liberal “plumbing”
AI in defense is usually pitched as speed: faster targeting, faster logistics, faster analysis. That’s real—but incomplete. AI systems are only as reliable as the governance around them.
Illiberal governance attacks the “plumbing” AI needs to work:
- Data integrity: If reporting is politicized, training data gets contaminated.
- Evaluation honesty: If failures can’t be admitted, models can’t be improved.
- Oversight access: If auditors and inspectors are sidelined, risks accumulate.
- Operational learning: If lessons learned are censored, performance plateaus.
The result is a dangerous illusion: lots of AI tools deployed, dashboards everywhere, and declining decision quality.
AI can’t fix a legitimacy problem—but it can expose one
There’s a common mistake I see in public-sector AI programs: leaders treat AI as a substitute for trust, rather than a system that depends on trust.
AI can support professionalism when it’s used to raise the cost of lying and lower the cost of telling the truth. For example:
- Automated readiness and maintenance signals that make it harder to “cook” status reports
- Anomaly detection for inventory and supply chain diversion
- Audit trails for tasking, approvals, and changes to intelligence products
But in an illiberal environment, those same features can be repurposed for surveillance, retaliation, or narrative control. The tool isn’t the safeguard. The governance is.
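To make the anomaly-detection idea above concrete, here is a minimal sketch of one way to flag suspicious inventory drawdowns. The record fields, threshold, and data are illustrative assumptions, not a reference implementation; a fielded system would use richer features and proper time-series baselines.

```python
# Minimal sketch: flag unusually large inventory drawdowns that may indicate
# diversion or misreporting. Field names and thresholds are illustrative.
from statistics import median
from typing import NamedTuple


class DrawdownRecord(NamedTuple):
    unit: str          # hypothetical reporting unit
    item: str          # supply item identifier
    quantity: float    # quantity issued in the reporting period


def robust_z_scores(values: list[float]) -> list[float]:
    """Median/MAD-based z-scores, less sensitive to a few extreme entries."""
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1e-9  # avoid divide-by-zero
    return [(v - med) / (1.4826 * mad) for v in values]


def flag_anomalies(records: list[DrawdownRecord], threshold: float = 3.5):
    """Return records whose drawdown sits far outside the typical pattern."""
    scores = robust_z_scores([r.quantity for r in records])
    return [(r, s) for r, s in zip(records, scores) if abs(s) > threshold]


if __name__ == "__main__":
    history = [
        DrawdownRecord("alpha", "fuel", 100),
        DrawdownRecord("bravo", "fuel", 95),
        DrawdownRecord("charlie", "fuel", 110),
        DrawdownRecord("delta", "fuel", 105),
        DrawdownRecord("echo", "fuel", 480),   # outlier worth a second look
    ]
    for record, score in flag_anomalies(history):
        print(f"Review {record.unit}/{record.item}: z={score:.1f}")
```

The median-based scoring is a deliberate choice: a few extreme entries shouldn’t be able to mask each other by dragging the average.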
Where AI genuinely helps: civil-military coordination under stress
AI can be a stabilizer in politically chaotic conditions—if it’s deployed with clear boundaries and independent oversight.
The most defensible use cases are the ones that:
- Reduce friction in nonpolitical functions (logistics, maintenance, staffing)
- Improve resilience (redundancy, forecasting, early warning)
- Increase transparency (auditable workflows, measurable service levels)
Predictive logistics and resource forecasting
In unstable environments—whether the instability is geopolitical or domestic—logistics becomes the first domain to show stress.
Applied well, AI-enabled logistics can:
- Forecast parts demand based on usage patterns and failure rates
- Optimize transport routes when access is contested
- Pre-position supplies with risk-weighted planning
This matters for civil-military coordination because it reduces the temptation to solve political problems with military visibility. When the military can keep operations steady without crisis improvisation, it’s harder for political actors to justify extraordinary measures.
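As a deliberately simplified illustration of the first bullet above (forecasting parts demand from usage and failure rates), the sketch below assumes a roughly Poisson failure process and made-up fleet parameters; a production system would also model operating tempo, seasonality, and resupply lead times.

```python
# Minimal sketch: risk-weighted parts-demand forecast from usage and failure
# rates. Parameter names and values are illustrative, not doctrinal.
from dataclasses import dataclass
from math import ceil, sqrt


@dataclass
class PartProfile:
    name: str
    failures_per_1000_hours: float  # historical failure rate


def forecast_demand(part: PartProfile,
                    fleet_size: int,
                    hours_per_aircraft_per_day: float,
                    planning_days: int,
                    service_factor: float = 1.65) -> int:
    """Expected failures over the planning window plus a simple safety buffer.

    Treats failures as roughly Poisson, so the buffer scales with the square
    root of the expected count; service_factor ~1.65 targets about 95%
    coverage under that assumption.
    """
    total_hours = fleet_size * hours_per_aircraft_per_day * planning_days
    expected = total_hours * part.failures_per_1000_hours / 1000.0
    return ceil(expected + service_factor * sqrt(expected))


if __name__ == "__main__":
    actuator = PartProfile("hydraulic actuator", failures_per_1000_hours=0.8)
    need = forecast_demand(actuator, fleet_size=24,
                           hours_per_aircraft_per_day=3.5, planning_days=90)
    print(f"Order at least {need} units to cover the 90-day window")
```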
Intelligence workflows that resist politicization
AI in intelligence analysis is a double-edged sword. It can accelerate triage and pattern recognition, but it can also accelerate groupthink if the training signals reward conformity.
Practical safeguards that support a professional force:
- Provenance metadata on key analytic judgments (what sources, what time window)
- Model cards and change logs for analytic tools (what changed, why)
- Red-team automation that searches for disconfirming evidence
When these are institutionalized, it becomes harder to quietly pressure analysts into rewriting reality.
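One way to institutionalize provenance is to attach a small, machine-readable record to each key judgment. The sketch below is an illustration with hypothetical field names, not a claim about any existing analytic-tradecraft standard.

```python
# Minimal sketch: provenance metadata attached to an analytic judgment.
# Field names and example values are illustrative.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class Provenance:
    judgment_id: str
    summary: str
    sources: list[str]                  # source identifiers, not raw content
    collection_window: tuple[str, str]  # ISO-8601 start/end of the evidence
    tool_versions: dict[str, str]       # analytic tools/models and versions
    analysts: list[str]                 # who is responsible for the judgment
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_json(self) -> str:
        """Serialize so the record can be stored alongside the product."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    record = Provenance(
        judgment_id="J-0457",
        summary="Assessed increase in rail traffic near depot X",
        sources=["IMINT-889", "OSINT-routes-Q3"],
        collection_window=("2031-02-01T00:00:00Z", "2031-02-14T23:59:59Z"),
        tool_versions={"triage-model": "1.4.2", "route-analyzer": "0.9.0"},
        analysts=["analyst_a"],
    )
    print(record.to_json())
```

The point is less the format than the habit: if the record is generated as part of the workflow, pressuring an analyst to change a judgment also means explaining away the evidence trail.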
Human capital is the real bottleneck in AI-integrated defense
Most defense AI conversations obsess over platforms. The harder constraint is people: the professional development pipeline that produces leaders who can supervise human-AI collaboration without breaking ethics or law.
Illiberal governance damages that pipeline directly:
- Promotions skew toward political safety over operational excellence
- Training shifts away from independent judgment toward compliance theater
- Officers learn that documenting concerns is career suicide
AI makes this worse if the force treats it as a way to deskill judgment.
What “AI-literate professionalism” looks like
In a healthy defense institution, AI literacy is a professional obligation, not a niche specialty. At minimum, leaders need to understand:
- What an AI system is optimizing for (and what it will ignore)
- How errors show up (false positives/negatives, distribution shift)
- When to slow down decisions rather than speed them up
- How to preserve human responsibility for lethal and coercive actions
A simple standard I like: If you can’t explain why the model could be wrong, you’re not ready to rely on it.
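Those bullets about error modes and distribution shift can be made concrete with two small checks. The sketch below uses illustrative data and thresholds; the point is that leaders can ask for these numbers, not that these defaults are authoritative.

```python
# Minimal sketch: two checks a leader can ask for before trusting a model.
# 1) How does it fail (false positives vs. false negatives)?
# 2) Does today's data still look like the data it was trained on?
from math import log


def error_profile(y_true: list[int], y_pred: list[int]) -> dict[str, float]:
    """False positive and false negative rates for a binary classifier."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0) or 1
    positives = sum(1 for t in y_true if t == 1) or 1
    return {"false_positive_rate": fp / negatives,
            "false_negative_rate": fn / positives}


def population_stability_index(train: list[float], live: list[float],
                               bins: int = 10) -> float:
    """Rough distribution-shift score; > 0.25 is a common 'investigate' level."""
    lo, hi = min(train), max(train)
    width = (hi - lo) / bins or 1e-9

    def fractions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[max(idx, 0)] += 1          # clip values outside train range
        return [(c or 0.5) / len(values) for c in counts]  # smooth empty bins

    return sum((a - b) * log(a / b)
               for a, b in zip(fractions(live), fractions(train)))


if __name__ == "__main__":
    print(error_profile([1, 0, 1, 0, 1], [1, 1, 0, 0, 1]))
    train_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
    live_scores = [0.6, 0.7, 0.7, 0.8, 0.9, 0.9, 0.95, 1.0]
    print(f"PSI: {population_stability_index(train_scores, live_scores):.2f}")
```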
Practical steps defense leaders can take now
The point isn’t to turn officers into constitutional lawyers or data scientists. It’s to build institutional habits that protect professionalism under political stress while improving operational performance.
1) Pre-commit to ethical and legal red lines
Write them down. Discuss them with mentors and peers. Include scenarios that are politically plausible, not just tactically dramatic.
Examples of “hard but realistic” scenario types:
- Domestic deployments with ambiguous authorities
- Orders that restrict lawful transparency to oversight bodies
- Taskings that pressure intelligence outputs toward a preferred narrative
2) Build auditable AI, not mysterious AI
If a tool can’t be audited, it will be distrusted—or weaponized.
Institutional requirements should include:
- Access controls with logs
- Clear responsibility assignment for approvals and overrides
- Post-operation review processes that examine AI influence on decisions
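As one way to make those requirements tangible, here is a minimal sketch of a tamper-evident approval-and-override log; the actors and fields are hypothetical, and a fielded version would sit behind real identity management and access controls.

```python
# Minimal sketch: a tamper-evident log of approvals and overrides, so that
# post-operation reviews can reconstruct who decided what and when.
# This illustrates the idea; it is not a certified audit mechanism.
import hashlib
import json
from datetime import datetime, timezone


class DecisionLog:
    """Append-only log where each entry is chained to the previous entry's hash."""

    def __init__(self):
        self.entries: list[dict] = []

    def record(self, actor: str, action: str, subject: str, rationale: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,          # who approved or overrode
            "action": action,        # e.g. "approve", "override", "escalate"
            "subject": subject,      # what the decision applied to
            "rationale": rationale,  # required: no silent overrides
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute the chain; any edited or deleted entry breaks it."""
        prev = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if expected != entry["hash"]:
                return False
            prev = entry["hash"]
        return True


if __name__ == "__main__":
    log = DecisionLog()
    log.record("maj_ortiz", "approve", "strike-package-12",
               "meets ROE and collateral-damage threshold")
    log.record("col_reed", "override", "strike-package-12",
               "new civilian-presence report; hold pending review")
    print("chain intact:", log.verify())
```

Chaining each entry to the previous one means a quiet edit or deletion breaks verification, which is exactly the property a post-operation review needs.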
3) Treat transparency as readiness, not PR
Reduced transparency isn’t “operational security.” It’s often just operational fragility with a curtain.
A readiness culture that supports both democracy and AI performance depends on:
- Inspectable metrics
- Independent evaluation teams
- Protected reporting channels
4) Modernize civil-military coordination through nonpartisan functions
The safest modernization wins are the ones that are plainly about performance: maintenance, staffing, logistics, compliance workflows. They improve outcomes and reduce political temperature.
That’s where many public-sector AI programs should start.
The real question isn’t whether the military can adapt—it’s what it becomes
Lee’s core warning is blunt: serving in an illiberal state isn’t just morally corrosive; it’s professionally self-defeating. The institution loses legitimacy, officers lose ethical footing, and national security loses the honest feedback loop strategy requires.
AI doesn’t rescue a force from that trajectory. But AI does raise the stakes. An illiberal system paired with powerful AI-enabled intelligence, logistics, and surveillance tools can harden dysfunction into a durable machine.
If you work in defense, government, or the public-sector AI ecosystem, the priority for 2026 isn’t simply “adopt more AI.” It’s build AI programs that strengthen professional ethics, oversight, and truth-telling—because those are operational requirements, not political preferences.
If the next crisis tests civil-military norms, will your AI systems help leaders see reality faster—or help them hide it longer?