Holiday Reading for AI-Era Defense Leaders

AI in Government & Public Sector · By 3L3C

Use the 2025 defense reading list to build smarter AI strategy: better command and control, maritime ops, and intel analysis with real accountability.

AI in defense · National security · Public sector AI · Military innovation · Command and control · Maritime security

A lot of talk about AI in national security still sounds like a science project: impressive demos, thin operating concepts, and a vague promise that “more data” equals “better decisions.” Most organizations get this wrong: they buy the demo and skip the operating concept.

History is the fastest way I know to puncture hype without sliding into cynicism. The 2025 War on the Rocks holiday reading list is packed with exactly the kind of material that helps defense teams, public-sector executives, and security practitioners think clearly about AI-enabled command and control, maritime operations, intelligence analysis, and force design.

This post turns that reading list into a practical map. Not a “read these 20 books” assignment—more like a set of lenses you can use to make better choices about AI in defense, AI in government, and the institutions that will have to run these systems under pressure.

What history teaches AI programs (that slide decks don’t)

History doesn’t tell you which model to buy. It tells you what breaks when real people, real incentives, and real uncertainty meet new tools.

Across the list, one pattern repeats: technology changes war only when organizations change with it. That’s as true for diesel engines and naval gunnery as it is for machine learning, autonomy, and decision-support systems.

Three practical implications follow for leaders building AI programs in the public sector:

  1. AI is an organizational reform project wearing a software costume. Data pipelines, authorities, training, and accountability matter as much as accuracy metrics.
  2. Decision speed is not the same as decision quality. Faster recommendations can amplify flawed assumptions.
  3. Legitimacy is a capability. If operators, oversight bodies, and the public don’t trust the system, the system won’t survive first contact with controversy.

If your AI roadmap doesn’t include doctrine, governance, and human skills, you’re building a very expensive sandbox.

From Annapolis to AI: naval history lessons for maritime operations

Sea power has always been a data problem—navigation, targeting, logistics, weather, adversary intent. The reading list’s naval titles are a quiet gift to anyone working on AI for maritime operations today.

Officer formation matters more than most AI teams admit

Craig Symonds’ Annapolis Goes to War uses a micro-history approach: one Naval Academy class, one global war, countless moments where preparation met chaos. The modern parallel is obvious: you can buy tools, but you can’t buy judgment.

AI-enabled command and control tends to assume the “user” is stable: trained, calm, and operating inside known procedures. War is the opposite. Officer education—habits of skepticism, probabilistic thinking, and the courage to question confident outputs—becomes part of the AI safety stack.

Actionable use:

  • Build “AI literacy” into professional military education the same way you build navigation or fires competence.
  • Teach operators how systems fail: data drift, adversarial manipulation, automation bias, and brittle optimization.
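
To make the first failure mode concrete, here is a minimal sketch of a data drift check using the population stability index (PSI), a standard distribution-comparison statistic. The stand-in data and the 0.25 threshold are illustrative assumptions, not doctrine.

    import numpy as np

    def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
        """Population Stability Index between two samples of one feature."""
        edges = np.histogram_bin_edges(expected, bins=bins)
        e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
        a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
        # Floor empty bins so the log term stays defined.
        e_pct = np.clip(e_pct, 1e-6, None)
        a_pct = np.clip(a_pct, 1e-6, None)
        return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

    # Stand-in data: a sensor feature whose live distribution has shifted
    # away from what the model was trained on.
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, 5_000)
    live = rng.normal(0.6, 1.3, 5_000)
    score = psi(train, live)
    if score > 0.25:  # common rule-of-thumb threshold, not doctrine
        print(f"PSI = {score:.2f}: significant drift, schedule a retraining review")

An operator who has run a check like this once understands viscerally that a model trained on peacetime data is a hypothesis, not a fact.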

The Pacific view is a reminder: context beats templates

Thomas Jamison’s The Pacific’s New Navies reframes U.S. naval development through Pacific conflicts and perspectives. That matters in 2025 because many AI deployments fail by copying a template from one theater, mission, or classification environment into another.

AI readiness is not a generic maturity score; it’s mission- and geography-specific.

Actionable use:

  • Evaluate AI systems against regional realities: comms-denied operations, coalition data sharing limits, and culturally specific patterns of behavior.
  • Treat multilingual, multi-partner data as a first-order requirement, not a “phase two.”

Logistics is the hidden AI killer app—and the hidden risk

Several books on the list (including those focused on industrial capacity and mobilization) reinforce an old truth: armies don’t move on intent; they move on supply.

AI can improve maritime logistics—predictive maintenance, routing, spares positioning—but only if you solve governance problems:

  • Who owns the data when it’s contractor-generated?
  • Who is accountable when an algorithm’s recommendation increases risk?
  • What happens when a model learns the “wrong lesson” from peacetime demand?

A useful rule: if you can’t explain who answers for the recommendation, you’re not ready to operationalize it.
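
As a sketch of what that rule can look like in software (all names here are hypothetical), the recommendation type itself can refuse to release anything that lacks a named owner or auditable provenance:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class LogisticsRecommendation:
        action: str                    # e.g., "pre-position spares at forward hub"
        model_version: str
        data_sources: list[str]        # provenance, including contractor feeds
        accountable_owner: str | None  # the person who answers for the outcome
        issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

        def release(self) -> str:
            """Refuse to operationalize anything nobody answers for."""
            if not self.accountable_owner:
                raise ValueError("No accountable owner: not ready to operationalize")
            if not self.data_sources:
                raise ValueError("No provenance: the recommendation cannot be audited")
            return f"{self.action} (owner: {self.accountable_owner}, model: {self.model_version})"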

Leadership under pressure: why “better decisions” is a human problem

A good chunk of this reading list is really about leadership—how people make choices when information is incomplete, incentives are misaligned, and consequences are irreversible.

Warrior queens, pirates, and the myth of the “standard user”

Antonia Fraser’s The Warrior Queens and Colin Woodard’s The Republic of Pirates are not just colorful history. They’re reminders that power and conflict are shaped by unconventional actors and nonstandard constraints.

AI programs often imagine a clean user story: trained analysts, orderly workflows, consistent labels. Real operations include:

  • ad hoc teams
  • coalition partners
  • shifting authorities
  • political constraints
  • moral injury and fatigue

Design implication: build systems for messy reality.

Practical checklist for AI decision support:

  • Provide confidence ranges and alternative hypotheses, not single answers.
  • Log assumptions and data provenance automatically.
  • Make it easy to challenge the model (and socially acceptable to do so).
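
One way to enforce that checklist is structurally, in the output type. A minimal sketch with hypothetical names: a single unqualified answer is unrepresentable, because ranges, alternatives, provenance, and a challenge channel are required fields.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Hypothesis:
        claim: str
        p_low: float    # lower bound of the confidence range
        p_high: float   # upper bound of the confidence range

    @dataclass(frozen=True)
    class DecisionSupportOutput:
        primary: Hypothesis
        alternatives: tuple[Hypothesis, ...]  # competing explanations, never empty
        assumptions: tuple[str, ...]          # logged automatically upstream
        provenance: tuple[str, ...]           # data source IDs for audit
        challenge_channel: str                # where an operator disputes the call

        def __post_init__(self):
            if not self.alternatives:
                raise ValueError("At least one alternative hypothesis is required")
            if not 0.0 <= self.primary.p_low <= self.primary.p_high <= 1.0:
                raise ValueError("Confidence range must be ordered and within [0, 1]")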

Intelligence failures are usually coordination failures

Steve Coll’s The Achilles Trap focuses on Saddam Hussein’s psyche and the cascade of misperception leading to the Iraq War. The uncomfortable lesson for AI-enabled intelligence analysis is that the biggest failures often happen between organizations and between interpretations.

Modern AI can accelerate collection and triage, but it can also:

  • harden consensus too early (“the model agrees”)
  • overweight what’s measurable
  • underweight context, deception, and political signaling

Actionable use:

  • Use AI to widen the aperture early (more hypotheses), then narrow late (structured debate).
  • Require explicit red-teaming for high-consequence assessments.
  • Separate “collection confidence” from “interpretation confidence.”
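
The last point is easy to encode. A minimal sketch with hypothetical fields: collection confidence and interpretation confidence live in separate slots, and high-consequence assessments cannot go out without red-teaming.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Assessment:
        finding: str
        collection_confidence: float      # coverage and quality of raw reporting
        interpretation_confidence: float  # strength of the analytic judgment
        red_teamed: bool                  # explicit challenge completed?

        def releasable(self, high_consequence: bool) -> bool:
            # High-consequence assessments require red-teaming before release.
            return self.red_teamed or not high_consequence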

Don’t hand over command decisions to systems that can’t own consequences

Anthony King’s AI, Automation, and War argues that AI helps with data load and recommendations but lacks human judgment for policy and command decisions—especially under international law and the law of armed conflict.

I agree with the direction here: autonomy should be constrained by accountability. In public-sector AI, the bar isn’t just technical performance; it’s legitimacy under oversight.

A workable stance for many defense organizations:

  • Automate classification, triage, routing, and anomaly detection aggressively.
  • Use machine recommendations for planning options and resource allocation with strong guardrails.
  • Keep targeting decisions and escalatory choices under human authority with auditable decision records.
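
Here is a minimal sketch of that three-tier stance, with hypothetical decision classes: triage runs unattended, planning outputs are recommendations only, and targeting always stops at a named human, while every path writes an auditable record.

    from enum import Enum, auto

    class DecisionClass(Enum):
        TRIAGE = auto()     # classification, routing, anomaly detection
        PLANNING = auto()   # options and resource allocation, guardrailed
        TARGETING = auto()  # lethal or escalatory choices

    def route(decision_class: DecisionClass, recommendation: str,
              audit_log: list[dict], approver: str | None = None) -> str:
        """Route a machine recommendation; every path writes a decision record."""
        if decision_class is DecisionClass.TARGETING:
            # Hard gate: no named human authority, no action.
            outcome = "approved" if approver else "held for human decision"
        elif decision_class is DecisionClass.PLANNING:
            outcome = "recommended"  # presented as an option, never executed
        else:
            outcome = "automated"    # triage-class work runs unattended
        audit_log.append({"class": decision_class.name,
                          "recommendation": recommendation,
                          "approver": approver, "outcome": outcome})
        return outcome

    log: list[dict] = []
    route(DecisionClass.TRIAGE, "flag track 4471 as anomalous", log)
    route(DecisionClass.PLANNING, "shift ISR coverage north", log)
    route(DecisionClass.TARGETING, "engage contact 09", log)  # held: no approver named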

Innovation isn’t adoption: why institutions stall (and how to fix it)

Andrew Krepinevich’s The Origins of Victory is blunt: disruptive innovation wins only when doctrine, organization, and culture change with it. The book’s recurring obstacles—service rivalry, pride, inertia—describe today’s AI landscape with uncomfortable accuracy.

The procurement trap: buying tools instead of building capability

Public-sector AI often gets stuck in “vendor success theater”: pilots that never scale, dashboards that don’t change decisions, and models that can’t be updated because the contract treats retraining as a modification.

If you want AI in government to produce operational outcomes, treat it like a capability with lifecycle costs:

  • data stewardship
  • model monitoring
  • retraining authority
  • secure MLOps in classified environments
  • operator training and certification
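
One way to keep a program honest is to write that lifecycle down as data and let a script flag the gaps. A minimal sketch, with purely illustrative owners and funding statuses:

    # Each lifecycle item needs a named owner and funding; anything else is a gap.
    AI_CAPABILITY_LIFECYCLE = {
        "data_stewardship":  {"owner": "data office",     "funded": True},
        "model_monitoring":  {"owner": "program office",  "funded": True},
        "retraining":        {"owner": None,              "funded": False},  # the classic contract gap
        "secure_mlops":      {"owner": "accredited team", "funded": True},
        "operator_training": {"owner": "schoolhouse",     "funded": False},
    }

    gaps = [item for item, status in AI_CAPABILITY_LIFECYCLE.items()
            if status["owner"] is None or not status["funded"]]
    if gaps:
        print("Lifecycle gaps (you bought a tool, not a capability):", ", ".join(gaps))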

How to evaluate an AI program like a strategist (not a technologist)

Several books in the list challenge simplistic theories of victory and the seduction of “magic bullet” solutions. Tim Benbow’s analysis of the “revolution in military affairs” is especially useful as an anti-hype tool.

Here’s a practical evaluation frame you can use in program reviews:

  1. Theory of advantage: What operational advantage does this system create (speed, accuracy, deception resistance, endurance)?
  2. Dependency map: What does it rely on (cloud access, comms, labeled data, contractor support, GPS timing)?
  3. Degradation plan: What happens when those dependencies fail?
  4. Human integration: Who is trained to use it, challenge it, and override it?
  5. Accountability chain: Who signs, who audits, who answers under scrutiny?

If the team can’t answer #3 and #5 crisply, the program isn’t ready for real missions.
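
For program reviews, the frame is small enough to encode directly. A minimal sketch with hypothetical fields, where questions 3 and 5 are hard gates per the rule above:

    from dataclasses import dataclass

    @dataclass
    class ProgramReview:
        theory_of_advantage: str | None   # Q1
        dependency_map: list[str]         # Q2
        degradation_plan: str | None      # Q3: hard gate
        human_integration: str | None     # Q4
        accountability_chain: str | None  # Q5: hard gate

        def mission_ready(self) -> tuple[bool, list[str]]:
            gaps = []
            if not self.degradation_plan:
                gaps.append("no degradation plan (Q3)")
            if not self.accountability_chain:
                gaps.append("no accountability chain (Q5)")
            return (not gaps, gaps)

    review = ProgramReview(
        theory_of_advantage="faster maritime anomaly triage",
        dependency_map=["cloud access", "labeled AIS data", "contractor support"],
        degradation_plan=None,
        human_integration="watch officers trained to override",
        accountability_chain=None,
    )
    print(review.mission_ready())  # (False, [...]): not ready for real missions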

People also ask: practical questions leaders have about AI and defense

Can AI replace analysts in intelligence work?

No. AI can replace chunks of workflow—translation, entity extraction, cross-document linking, anomaly detection—but analysis is judgment plus accountability. The goal is higher analyst throughput and better structured reasoning, not analyst removal.

Where does AI help most in maritime and naval operations?

Three high-return areas are maintenance forecasting, route and fuel optimization, and sensor fusion for maritime domain awareness. The constraint is rarely model quality; it’s data access, comms resilience, and coalition sharing rules.

How do we keep AI compliant with law of armed conflict expectations?

Treat compliance as a system requirement, not a policy memo. You need auditable logs, clear human decision points, pre-deployment testing, and defined authorities for override and shutdown.

Build your 2026 reading plan around the problems you actually have

Holiday reading lists are fun, but they’re also diagnostic tools. If you’re leading AI in a defense or public-sector environment, you don’t need more inspiration—you need better questions.

Here’s how I’d translate this list into an actionable reading plan:

  • If your challenge is AI-enabled command and control, prioritize books that stress judgment, friction, and adaptation (Ground Combat, Adaptation Under Fire, AI, Automation, and War).
  • If your challenge is maritime operations and Pacific strategy, lean into naval development and theater context (Annapolis Goes to War, The Pacific’s New Navies, works focused on China’s military history).
  • If your challenge is intelligence analysis and misperception, study failures and the human side of assessment (The Achilles Trap, plus decision-making and bias work like The Undoing Project).

This matters for the broader AI in Government & Public Sector series because defense AI isn’t a special case—it’s the most demanding case. If you can build accountable, resilient AI for national security, you can build it for public safety, emergency management, and critical infrastructure too.

If you’re planning next year’s AI investments, don’t start with a model. Start with a mission, a failure mode, and a chain of accountability. Then use history to pressure-test the story you’re telling yourself.

What would your AI program do on its worst day—when the data is wrong, the network is down, and the stakes are real?