Holiday Reads on AI, Espionage, and Modern War

AI in Defense & National Security • By 3L3C

Curated holiday reads reframed for AI in defense: tradecraft, cyber, autonomy, and governance. Build a smarter 2026 national security AI plan.

AI governance · Intelligence community · Defense technology · Cybersecurity · Autonomous systems · Counterintelligence

A lot of “AI in national security” commentary is either hype or fear. The quieter truth is more useful: AI is changing how intelligence gets collected, processed, and acted on—but the hardest problems are still human. That’s why I like holiday reading lists from national security outlets. They show what practitioners actually value: tradecraft, statecraft, decision-making under pressure, and the messy incentives inside organizations.

The Cipher Brief’s holiday recommendations (fiction and non-fiction) land at exactly that intersection. Even when a title isn’t “about AI,” the best ones teach you how AI will be used—because they explain the missions, constraints, and failure modes that AI systems will inherit.

This post reframes those recommended reads as a practical mini-curriculum for anyone working in defense AI, intelligence analysis, cybersecurity, autonomous systems, or policy—and for leaders who need to fund, govern, and deploy AI without getting surprised.

Why “recommended reads” matter for AI in national security

AI doesn’t replace the intelligence cycle; it accelerates it—and amplifies its weaknesses. If your collection is biased, your model will be biased faster. If your analysts are overwhelmed, automation can help triage—but it can also create a false sense of certainty.

Here’s what I’ve found when advising teams building or buying AI for defense and security: the real differentiator isn’t the model family; it’s whether the organization understands its own work well enough to automate parts of it safely.

Holiday reads are a shortcut to that understanding. The best books in The Cipher Brief’s list reinforce four themes that map directly to AI adoption:

  • HUMINT still matters (and AI won’t fix bad agent handling).
  • Cybersecurity is now an AI-versus-AI contest (detection, deception, and speed).
  • Statecraft and escalation management don’t have “retry buttons.”
  • Clearance, trust, and governance are not side quests—they’re deployment blockers.

Fiction that teaches the realities AI will collide with

Spy fiction is useful when it’s technically and psychologically honest. The Cipher Brief highlighted several fiction favorites that do something rare: they treat motivations, recruitment, and operational constraints as the plot engine. That’s exactly the layer AI teams tend to underestimate.

The Persian (David McCloskey): HUMINT tradecraft under modern pressure

If you want to understand what AI can’t “just automate,” start with HUMINT. The Persian is praised for bringing the recruitment and handling of an agent to life, including the inner conflicts that make sources unpredictable.

Why this matters for AI in intelligence:

  • AI can enrich targeting (network analysis, anomaly spotting), but it can’t create trust.
  • AI can summarize reporting, but it can’t judge a source’s emotional volatility in real time.
  • AI can flag inconsistencies, but it can’t interview a human being in a way that keeps them alive.

A practical takeaway: if you’re building AI for clandestine workflows, measure success by reduced analyst time-to-triage and improved prioritization, not by fantasies of automated agent operations.

An Inside Job (Daniel Silva): corruption, finance, and the “data exhaust” problem

Modern espionage is increasingly financial, bureaucratic, and reputation-based. An Inside Job runs through art theft, European corruption, and institutional power.

AI connection: these stories are basically about entity resolution (who is this really?), provenance (where did this come from?), and influence networks (who benefits?). That’s the same foundation used in real-world AI-driven investigations:

  • linking shell companies
  • detecting illicit finance patterns
  • tracking sanction evasion behaviors

If your organization is investing in AI-enabled investigations, don’t start with “predict the bad guy.” Start with clean identity graphs, transparent link logic, and audit trails.
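
To make that concrete, here is a minimal sketch of what a clean identity graph with transparent link logic and an audit trail can look like. Everything in it is illustrative: the entity IDs, the Link fields, and the IdentityGraph class are hypothetical, not a reference to any real system.

```python
# A minimal sketch of an auditable identity graph. All names and IDs
# are hypothetical, invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Link:
    source: str     # entity ID on one side of the link
    target: str     # entity ID on the other side
    reason: str     # transparent link logic: why these records were joined
    evidence: list  # document or record IDs supporting the link

@dataclass
class IdentityGraph:
    links: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

    def add_link(self, link: Link, analyst: str) -> None:
        """Every merge decision is recorded with who made it and why."""
        self.links.append(link)
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "by": analyst,
            "action": "add_link",
            "reason": link.reason,
            "evidence": link.evidence,
        })

graph = IdentityGraph()
graph.add_link(
    Link("co-784", "co-112",
         reason="shared registered agent",
         evidence=["filing-2021-0042"]),
    analyst="a.jones",
)
```

The design choice that matters: the reason and evidence travel with the link, so an investigator can later defend, or unwind, every merge.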

Behind the Trigger (Yariv Inbar): the psychological layer most models ignore

Models are good at patterns; they’re bad at people. The review emphasizes emotional and psychological realism in espionage.

AI connection: human behavior modeling is where teams get tempted to overclaim. If you’re doing predictive analytics on insider risk or recruitment vulnerability (see the sketch after this list):

  • treat outputs as leads, not verdicts
  • require human adjudication
  • instrument for false positive costs (burned careers, broken trust)
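
Here is a minimal sketch of what “leads, not verdicts” means structurally, assuming a hypothetical insider-risk model. The fields, thresholds, and verdict strings are illustrative; the point is that model output can only create review work, and the human decision (plus its false-positive cost) is recorded explicitly.

```python
# A minimal sketch of "leads, not verdicts." All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Lead:
    subject_id: str
    risk_score: float  # model output in [0, 1]: a lead, never a verdict
    rationale: str     # top signals driving the score, for the reviewer
    verdict: str = ""  # filled in only by a human adjudicator

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)
    false_positives: int = 0  # instrument the cost of bad flags explicitly

    def submit(self, lead: Lead) -> None:
        # The model can only enqueue work; nothing downstream acts on a raw score.
        self.pending.append(lead)

    def adjudicate(self, lead: Lead, reviewer: str, verdict: str) -> None:
        # Required human step; "cleared" outcomes are counted, because burned
        # trust and wasted review time are real costs, not rounding error.
        lead.verdict = f"{verdict} (by {reviewer})"
        if verdict == "cleared":
            self.false_positives += 1
        self.pending.remove(lead)

queue = ReviewQueue()
lead = Lead("emp-204", risk_score=0.71, rationale="badge/VPN pattern shift")
queue.submit(lead)
queue.adjudicate(lead, reviewer="cisec-duty", verdict="cleared")
```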

A hard opinion: in sensitive national security contexts, a “pretty good” model that nobody trusts is worse than no model—because it still changes behavior, only quietly.

The Moldavian Gambit (Brad Meslin): nuclear anxiety and edge-case catastrophes

High-consequence, low-frequency events are where AI needs the strictest guardrails. A missing man-portable nuclear device is the definition of a scenario where you can’t tolerate casual “model drift.”

AI connection: crisis decision support systems must be designed for:

  • incomplete data
  • adversarial deception
  • extreme uncertainty
  • escalation risk

If you’re deploying AI into WMD-related workflows, your model needs more than accuracy. It needs explainability, provenance, and a disciplined human-in-the-loop protocol.

Non-fiction that maps directly to AI-era security problems

The Cipher Brief’s non-fiction picks cover diplomacy, historical power management, counterterrorism, and the future of espionage. Read them as a blueprint for where AI will be used—and where it will fail if governance is weak.

Great Power Diplomacy (A. Wess Mitchell): statecraft is the “alignment problem” for leaders

AI can accelerate options; diplomacy decides which options are legitimate. Great Power Diplomacy is framed as both a history lesson and an instruction manual.

AI connection: national security leaders increasingly face AI-generated courses of action (COAs), forecasts, and “risk scores.” The danger isn’t that AI proposes something; it’s that leaders mistake speed for strategy.

Use this book as a reminder that:

  • coercion is rarely a single move
  • signaling matters
  • misreading the adversary creates self-inflicted crises

If you’re implementing AI in mission planning, require a “why this won’t work” section in every AI-assisted COA brief. It forces teams to confront assumptions.

No More Napoleons (Andrew Lambert): restraint and maritime power in an automated age

Restraint is a capability. The review highlights maritime power, restraint, and influence without overreach.

AI connection: autonomy at sea (and across ISR) tends to push toward persistent presence. That can be stabilizing—or provocative—depending on context.

If you’re working on autonomous systems, this is the strategic frame to keep nearby:

  • presence changes adversary decision cycles
  • surveillance changes bargaining power
  • friction at the tactical level can escalate strategically

The Spy and the Devil / The Traitor’s Circle: counterintelligence lessons for AI-driven orgs

Counterintelligence is the missing chapter in most AI deployment plans. These WWII-era intelligence histories are reminders that penetration, betrayal, and deception are not edge cases—they’re the playbook.

AI connection: any AI system that influences prioritization, targeting, or resource allocation is a CI target. Expect:

  • data poisoning (training or feedback loops)
  • prompt injection (for LLM-based tools)
  • synthetic personas and fabricated reporting
  • confidence laundering (bad intel packaged as “model output”)

Actionable step: create a lightweight CI checklist for your AI pipeline (see the sketch after this list):

  1. Who can inject data?
  2. What’s the audit trail for changes?
  3. How are anomalies investigated?
  4. What’s the rollback plan?
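
Here is a minimal sketch of that checklist expressed as pipeline code, assuming a hypothetical ingestion service. The service names, threshold, and log fields are invented for illustration; the shape is what matters: an allowlist of writers, an append-only audit trail with content hashes, an anomaly check, and a rollback target recorded with every change.

```python
# A minimal sketch of the CI checklist as code. Service names, fields,
# and the drift threshold are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

AUTHORIZED_WRITERS = {"ingest-svc", "analyst-feedback-svc"}  # 1. who can inject data?
AUDIT_LOG = []                                               # 2. audit trail for changes

def ingest(record: dict, writer: str, dataset_version: str) -> bool:
    if writer not in AUTHORIZED_WRITERS:
        return False  # unauthorized writes are rejected, not quietly accepted
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    AUDIT_LOG.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "writer": writer,
        "sha256": digest,                # proves exactly what entered the pipeline
        "rollback_to": dataset_version,  # 4. the version to restore if poisoning is found
    })
    return True

def label_rate_anomalous(expected_rate: float, observed_rate: float) -> bool:
    # 3. anomalies get investigated: a crude drift check standing in for
    # whatever detection your pipeline actually runs.
    return abs(observed_rate - expected_rate) > 0.10
```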

Rain of Ruin (Richard Overy): remote killing and the moral distance problem

When attacks get more remote, accountability has to get more explicit. The review underscores dehumanization, depersonalized killing, and glamorized combat.

AI connection: autonomy and algorithmic targeting increase psychological distance. That increases the importance of:

  • rules of engagement clarity
  • proportionality review mechanisms
  • post-strike assessment integrity
  • oversight that can withstand political pressure

If you’re building defense AI, don’t treat ethics as a slide deck. Build it into workflows: approvals, logging, and after-action review.

Race Against Terror (Jake Tapper): the legal layer AI teams forget

Counterterrorism is as much legal process as it is operations. The book follows prosecutors pursuing justice over the course of years.

AI connection: when AI supports investigations, discovery and evidentiary standards matter. If your tool can’t explain why it flagged a person, it’s not just an ML problem—it’s a case integrity problem.

A practical procurement question: “Can this system produce court-defensible artifacts?” If the vendor can’t answer, keep shopping.
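
As a concrete illustration, a “court-defensible artifact” can be as simple as a self-contained record emitted with every flag. This is a minimal sketch with hypothetical field names, not any vendor’s format; the test is whether someone could later reconstruct what the system saw and why it flagged.

```python
# A minimal sketch of a court-defensible flag artifact. Field names
# and IDs are hypothetical.
import json
from datetime import datetime, timezone

def build_flag_artifact(subject_id, model_version, inputs, rationale):
    """Emit a self-contained record a prosecutor could later reconstruct."""
    return json.dumps({
        "flagged_at": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,
        "model_version": model_version,  # exact model, so the flag is reproducible
        "input_records": inputs,         # the evidence the model actually saw
        "rationale": rationale,          # human-readable reason, not just a score
    }, indent=2)

print(build_flag_artifact(
    "subj-0091", "risk-model-2.3.1",
    inputs=["wire-2024-118", "filing-2023-774"],
    rationale="transaction pattern matches known structuring typology",
))
```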

The Fourth Intelligence Revolution (Anthony Vinci): the book that argues for dominance

The thesis is blunt: intelligence agencies must dominate technological change, not merely adapt. Whether you agree or not, it’s the right pressure test.

AI connection: dominance doesn’t mean “buy more tools.” It means:

  • building internal talent pipelines
  • integrating AI into the intelligence cycle end-to-end
  • hardening against adversarial AI
  • governing models like critical infrastructure

My stance: agencies that treat AI as an IT refresh will be outpaced by adversaries who treat AI as operational doctrine.

Trust Me (Lindy Kyzer): clearance, trust, and why deployment fails in practice

Security clearance and trusted handling are deployment gates. If your AI program relies on contractors who can’t get cleared, or on data that can’t be accessed in the right environment, timelines blow up.

AI connection: the clearance process isn’t just HR overhead. It shapes:

  • who can label data
  • who can evaluate model outputs
  • who can operate systems in classified settings
  • who can respond to incidents

If you’re planning an AI program for national security, budget time for the human pipeline: clearances, training, and retention.

A practical holiday reading plan for defense AI leaders

You don’t need 12 books to get value. You need the right sequence. Here’s a simple plan mapped to real roles.

If you lead AI strategy or procurement

Read for governance and decision-making:

  • Great Power Diplomacy (strategic discipline)
  • Trust Me (deployment reality)
  • The Fourth Intelligence Revolution (organizational ambition)

If you build or secure AI systems (cyber + MLOps)

Read for adversaries and deception:

  • The Spy and the Devil or The Traitor’s Circle (CI mindset)
  • Rain of Ruin (accountability for remote effects)

Then pressure test your own stack (see the sketch after this list):

  • Do we have logging good enough for post-incident analysis?
  • Can we detect manipulation in feedback loops?
  • Is our human-in-the-loop real, or ceremonial?
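
On the second question, here is a minimal sketch of one feedback-loop check, assuming analyst label corrections feed retraining. The threshold and account names are illustrative; the idea is that one compromised or careless account flipping an outsized share of labels should surface before the next training run.

```python
# A minimal sketch of a feedback-loop manipulation check. Account names
# and the 20% threshold are hypothetical.
def suspicious_labelers(label_counts: dict, max_share: float = 0.2) -> list:
    """Flag accounts contributing an outsized share of recent label changes.

    A single account flipping many labels is a classic poisoning path;
    this crude check just surfaces it for human investigation.
    """
    total = sum(label_counts.values())
    return [acct for acct, n in label_counts.items() if n / total > max_share]

recent = {"analyst-17": 180, "analyst-04": 22, "svc-batch": 9}
print(suspicious_labelers(recent))  # ['analyst-17'] -> investigate before retraining
```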

If you’re an analyst or operator adopting AI tools

Read for tradecraft and human factors:

  • The Persian (HUMINT reality)
  • Race Against Terror (process, persistence, and constraints)

Then set your personal standard: you’re not competing with AI. You’re competing with people who know how to use AI without being misled by it.

What to do next if your team is “AI-curious” but stuck

If you’re stuck, it’s usually one of three things: data access, governance, or trust. Books won’t fix that—but they can help you name the problem quickly.

Here’s a simple next step I recommend before Q1 planning locks:

  • Pick one workflow (intel triage, cyber alerting, targeting support, mission planning).
  • Define a single metric that matters (time-to-triage, false positive rate, analyst throughput, incident containment time); see the sketch after this list.
  • Identify the human decision point AI will support (not replace).
  • Decide what evidence you’ll require before scaling (audit logs, error analysis, red-team results).
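
To make the metric step concrete, here is a minimal sketch of baselining time-to-triage from event timestamps. The workflow, timestamps, and function name are illustrative assumptions; the point is to measure the baseline before the pilot, then hold the AI to beating it.

```python
# A minimal sketch of baselining time-to-triage. Timestamps are invented.
from datetime import datetime
from statistics import median

def time_to_triage_minutes(events: list) -> float:
    """Median minutes from report arrival to first analyst action."""
    deltas = [
        (triaged - arrived).total_seconds() / 60
        for arrived, triaged in events
    ]
    return median(deltas)

events = [
    (datetime(2025, 12, 1, 9, 0), datetime(2025, 12, 1, 9, 42)),
    (datetime(2025, 12, 1, 9, 5), datetime(2025, 12, 1, 11, 20)),
    (datetime(2025, 12, 1, 9, 30), datetime(2025, 12, 1, 9, 55)),
]
print(f"baseline time-to-triage: {time_to_triage_minutes(events):.0f} min")
```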

Holiday reading is great. Holiday clarity is better.

The AI in Defense & National Security series is ultimately about one question: How do you field AI that improves security without creating new strategic liabilities? These books—especially the ones focused on counterintelligence, diplomacy, and trust—push you toward that answer.

If you’re building an AI roadmap for 2026, what’s the one national security workflow you’d automate first—and what would you refuse to automate no matter how good the model looks?
