Spy fiction and national security non-fiction offer a practical lens on AI in intelligence. Use these holiday reads to spot where AI helps—and where it backfires.

Holiday Spy Reads That Explain AI in Intelligence
A lot of AI programs in national security fail for a boring reason: the teams building them misunderstand the work. Not the tech—the work. They can describe models, metrics, and compute. But they can’t explain how an intelligence officer actually decides to trust a source, prioritize a lead, or brief an ambiguity to a policymaker without overstating it.
That’s why I like holiday reading lists from serious national security communities. The Cipher Brief’s annual recommendations aren’t just entertainment; they’re a cross-section of how practitioners think about HUMINT, diplomacy, counterterrorism, proliferation, and the psychology of decision-making under pressure. If you’re building, buying, or governing AI in defense & national security, these books are a practical shortcut: they show you the workflows AI is supposed to support—and the human judgment AI can’t replace.
Below is a curated, opinionated guide that uses The Cipher Brief’s holiday picks as a lens for AI in intelligence analysis, HUMINT tradecraft, and the real constraints that shape modern operations.
Spy fiction is a training aid for AI teams (seriously)
Spy novels get dismissed as “fun.” The good ones are closer to scenario training. They force you to think in sequences: spotting, assessing, recruiting, handling, tasking, reporting, and then surviving the second- and third-order effects. That’s the same end-to-end chain an AI system touches when it helps triage leads, summarize reporting, flag anomalies, or support mission planning.
The Cipher Brief highlights several fiction titles that are unusually useful for people working on AI-driven intelligence operations—because they depict the process instead of just the explosions.
HUMINT in fiction: why “data” is never just data
David McCloskey’s The Persian: A Novel is praised for bringing HUMINT discipline to life—how sophisticated services identify and recruit an agent, and how motivation and inner conflict shape everything after. That’s your reminder that HUMINT isn’t a dataset. It’s a relationship.
If you’re applying AI to support HUMINT, the most valuable outputs usually aren’t “predictions.” They’re:
- Decision support: what changed since last report, and why it matters
- Consistency checks: contradictions across reporting streams, timelines, and claimed access (a minimal sketch follows this list)
- Risk signals: indicators of compromise, manipulation, or source distress
- Narrative compression: turning 30 pages of reporting into a briefable paragraph without losing caveats
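To make the consistency-check idea concrete, here is a minimal sketch in Python. It assumes a hypothetical report record with a source handle, a date, and a claimed location, and it does nothing more than surface pairs a human should look at; the field names and the conflict rule are illustrative, not any particular system's schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Report:
    source_id: str        # hypothetical handle for the reporting source
    report_date: date     # date of the claimed activity
    claimed_location: str

def flag_location_conflicts(reports: list[Report]) -> list[tuple[Report, Report]]:
    """Return pairs of reports where the same source claims two different
    locations on the same date -- a cue for human review, not a verdict."""
    conflicts = []
    for i, a in enumerate(reports):
        for b in reports[i + 1:]:
            if (a.source_id == b.source_id
                    and a.report_date == b.report_date
                    and a.claimed_location != b.claimed_location):
                conflicts.append((a, b))
    return conflicts
```

The point is the output shape: a reviewable pair of records with the discrepancy attached, not an automated judgment about the source.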
Here’s the stance I’ll defend: If your AI product doesn’t make the case officer’s day measurably easier without pressuring them to overshare or over-collect, it’s not ready.
Mossad thrillers and the AI limits problem
Daniel Silva’s An Inside Job (Gabriel Allon returns, art theft, European corruption, and a newly elected Pope) is fast-paced fiction, but it’s also a lesson in multi-domain complexity. Real ops don’t come labeled “crime” or “politics.” They’re messy mixes of finance, influence, clandestine logistics, and human incentives.
AI can help connect dots across that mess—especially via entity resolution, document understanding, and network analysis. But there’s a catch: the more complex the setting, the easier it is for AI to hallucinate coherence.
Practical takeaway for AI governance teams: demand provenance. Every claim an AI system makes should be traceable to underlying sources and confidence levels that a human can interrogate.
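One way to make that concrete is to treat provenance as a data requirement rather than a policy aspiration. The sketch below is illustrative Python, not a real product schema; the field names and the "briefable" rule are assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class SourcedClaim:
    text: str                     # the assertion the system is making
    source_ids: list[str]         # documents or reports the claim rests on
    confidence: float             # 0.0-1.0, how strongly the sources support it
    caveats: list[str] = field(default_factory=list)  # known gaps or conflicts

    def is_briefable(self) -> bool:
        # Illustrative rule: no claim goes forward without at least one
        # source a human can pull and read.
        return bool(self.source_ids)
```

If a claim cannot be expressed in that shape, that is worth knowing before it reaches a briefing, not after.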
Psychological realism matters more than plot
Yariv Inbar’s Behind the Trigger gets attention for its emotional and psychological realism. That’s a reminder for anyone deploying AI in national security: the threat isn’t only technical failure. It’s cognitive failure—automation bias, complacency, and decision fatigue.
A model that’s “right” 92% of the time can still be dangerous if it trains users to stop thinking. For human-centered design, I like a simple heuristic:
Build AI that argues like a good analyst: clear, sourced, and comfortable with uncertainty.
Proliferation thrillers as a red-team checklist
Brad Meslin’s debut The Moldavian Gambit revolves around a missing man-portable nuclear device in the chaos of the early 1990s. Fiction, yes—but it doubles as a red-team prompt: what would your organization do if fragmented reporting hinted at WMD movement through criminal routes?
AI can support counterproliferation by:
- Detecting weak signals across multilingual reporting
- Flagging suspicious procurement patterns
- Linking front companies, shipping routes, and intermediaries
- Prioritizing limited investigative resources
But the “missing nuke” plot also points to a hard truth: rare events break models. You can’t count on abundant training data. You need hybrid approaches: expert rules, simulation, anomaly detection, and human review.
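A toy illustration of that hybrid approach, in Python: one expert rule and one simple statistical anomaly score, where either path escalates to a human and neither decides anything on its own. The watchlist items, the threshold, and the field names are all assumptions for the example.

```python
import statistics

# Illustrative expert rule: dual-use items a reviewer has flagged as sensitive.
WATCHLIST_ITEMS = {"maraging steel", "krytron", "ring magnet"}

def anomaly_score(value: float, history: list[float]) -> float:
    """Simple z-score of a declared shipment value against a partner's history."""
    if len(history) < 2:
        return 0.0
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return 0.0 if stdev == 0 else abs(value - mean) / stdev

def needs_human_review(item: str, declared_value: float,
                       past_values: list[float]) -> bool:
    # Either path escalates; neither path decides.
    rule_hit = item.lower() in WATCHLIST_ITEMS
    statistical_hit = anomaly_score(declared_value, past_values) > 3.0
    return rule_hit or statistical_hit
```

The design choice that matters is the last line of the lead-in: rules and statistics route attention, and people make the call on rare, high-consequence events.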
The best non-fiction lesson: statecraft still sets the boundary conditions
A recurring theme in The Cipher Brief’s non-fiction picks is diplomacy—how negotiations, alliances, and state decisions shape what intelligence and military power can accomplish. AI doesn’t change that. It just changes the tempo.
Counternarcotics and the “diplomacy as force multiplier” reality
After Escobar (Feistl, Mitchell, Balboni) is framed as a story of taking down the Cali cartel, but the review emphasizes diplomacy as a force multiplier for law enforcement and the military.
Translate that into AI program language: the model isn’t the mission. Interagency authorities, partner access, legal frameworks, and trust determine what’s possible.
If you’re leading an AI initiative in defense or intelligence, ask early:
- Who owns the data, and who can share it?
- Which partners need visibility, and what must be compartmented?
- What’s the legal basis for collection, retention, and model training?
- What’s the escalation path when AI flags something sensitive?
Getting those answers wrong doesn’t cause “slower deployment.” It causes operational and political failure.
Great power diplomacy and AI-supported decision advantage
A. Wess Mitchell’s Great Power Diplomacy is described as part history lesson, part instruction manual—from Attila to Kissinger. The relevance to AI is straightforward: in great power competition, decision advantage comes from seeing reality clearly and acting coherently.
AI can improve clarity by compressing information overload. But it can also degrade clarity by:
- Overfitting to yesterday’s patterns
- Reinforcing institutional narratives
- Masking uncertainty behind fluent language
A strong practice for analytic organizations is to pair AI summarization with a required “uncertainty box,” including:
- what the system is confident about
- what is ambiguous
- what would change the assessment
That’s not paperwork. It’s how you keep speed from turning into self-deception.
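If you want to enforce that rather than request it, the uncertainty box can be a required part of the output schema instead of optional prose. A minimal sketch, assuming a structured summary object; the field names mirror the three bullets above and are otherwise invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AssessmentWithUncertainty:
    summary: str                         # the compressed judgment
    confident_in: list[str]              # what the system is confident about
    ambiguous: list[str]                 # what remains ambiguous
    would_change_assessment: list[str]   # indicators that would flip the call

def validate(a: AssessmentWithUncertainty) -> None:
    # Refuse to publish a summary that arrives without its uncertainty box.
    if not (a.confident_in and a.ambiguous and a.would_change_assessment):
        raise ValueError("Assessment rejected: uncertainty box is incomplete.")
```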
History picks that map cleanly to modern AI risks
The Cipher Brief list includes multiple WWII-era and pre-WWI titles. They’re not nostalgia. They’re a warning label.
Intelligence successes still fail if leaders misread them
Tim Willasey-Wilsey’s The Spy and the Devil tells a story of deep access—penetrating Nazi circles before the war, even meeting Hitler. Jonathan Freedland’s The Traitor’s Circle shows how resistance networks can be betrayed despite courage and competence.
For AI in intelligence analysis, the parallel is uncomfortable: good collection doesn’t guarantee good outcomes. You can have accurate reporting and still fail through:
- misinterpretation
- politicization
- slow decision cycles
- distrust between organizations
AI can reduce friction in processing and dissemination, but it can’t force leaders to accept inconvenient truths. That’s an organizational design issue, not a model issue.
Remote violence, dehumanization, and the autonomy debate
Richard Overy’s Rain of Ruin is praised for explaining why conflicts persist: dehumanizing the enemy, depersonalizing killing by making attacks remote, and glamorizing combat.
That argument lands directly in 2025’s autonomous systems and AI-enabled targeting debates. Remote capability is tactically attractive. Strategically, it risks:
- lower political cost thresholds for using force
- moral distancing in kill chains
- feedback loops where escalation feels “manageable” until it isn’t
If you work on AI for ISR, targeting support, or mission planning, you need governance that treats psychological distance as a risk factor—right alongside model drift and cyber hardening.
The “future intelligence” books point to what’s next in 2026
The Cipher Brief also features forward-leaning titles that connect directly to today’s procurement and program decisions.
The AI reality: dominance requires boring excellence
Anthony Vinci’s The Fourth Intelligence Revolution argues that U.S. agencies must adapt aggressively to tech-driven competition. I agree with the urgency, but I’ll add a constraint: dominance isn’t about buying the flashiest model. It’s about the unglamorous system around it.
Teams that succeed in AI for defense tend to get four things right:
- Data readiness: labeling, lineage, access controls, and retention policies
- Operational integration: tools fit existing workflows and security boundaries
- Evaluation: mission-based tests, not only benchmark scores (sketched after this list)
- Resilience: red-teaming, adversarial robustness, and fallback procedures
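On the evaluation point, a mission-based test can be as simple as scenario cases with pass criteria written alongside operators, run against whatever system is under evaluation. The sketch below is a skeleton under those assumptions, not a framework; `run_model`, the case fields, and the example criterion are stand-ins.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MissionCase:
    name: str
    inputs: str                    # reporting excerpt, tasking, etc.
    passes: Callable[[str], bool]  # pass criterion written with operators

def evaluate(run_model: Callable[[str], str],
             cases: list[MissionCase]) -> dict[str, bool]:
    """Score the system under test case by case; report per-mission results,
    not a single benchmark number."""
    return {case.name: case.passes(run_model(case.inputs)) for case in cases}

# Illustrative case: a summary must keep the caveat, not just the headline claim.
example = MissionCase(
    name="caveat survives compression",
    inputs="Source reports facility activity; access is secondhand and dated.",
    passes=lambda output: "secondhand" in output.lower(),
)
```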
Security clearance and the human bottleneck
Lindy Kyzer’s Trust Me: A Guide to Secrets is about the security clearance ecosystem. That’s a smart inclusion because it highlights the constraint every AI leader eventually hits: people with clearances and context are scarce.
A practical move I’ve seen work: treat cleared subject-matter experts as “precision resources” and use AI to remove low-value work—first drafts, sorting, cross-referencing—so humans spend time on judgment, not clerical load.
If your AI tool increases the cognitive burden on cleared staff (more alerts, more dashboards, more tuning), it’s a net loss.
A holiday reading plan that doubles as an AI strategy exercise
If you want a simple way to turn this reading list into real program value, try this with your team over the break:
- Pick one fiction and one non-fiction title from the list.
- For each, write a one-page memo answering:
  - What are the main decision points in the story?
  - What information is missing at each decision?
  - Where would AI help, and where would it harm?
  - What’s the failure mode: bias, deception, leakage, overconfidence?
- Compare memos across roles (analysts, operators, engineers, legal, acquisition).
You’ll quickly surface mismatched assumptions—exactly the stuff that derails AI deployments in national security.
Where this fits in the “AI in Defense & National Security” series
This series often focuses on the technology: ISR analytics, cyber defense, autonomous platforms, and mission planning. The holiday reading angle flips the lens. It starts with the human work—HUMINT tradecraft, diplomacy, resistance networks, prosecution, and statecraft—then asks how AI should support it.
If you’re evaluating AI-enabled intelligence tools in 2026 budgets, I’d use one standard: Does this system increase decision quality under real constraints—time, ambiguity, adversaries, authorities—and can we prove it without guesswork?
If you’re building or buying in this space and want a practical way to pressure-test an AI approach (especially around HUMINT support, analytic tradecraft, or mission workflows), I’m happy to share a checklist I use for evaluation and adoption.
Where are you seeing the biggest gap right now: AI model performance, data access, or the human workflow it’s supposed to fit into?