Learn how Russian leak-driven info ops spread fast—and how AI-powered intelligence analysis helps detect, map, and disrupt disinformation early.

AI vs. Russian Info Ops: Detect Leaks Before They Spread
A single “accidental” sighting at a Georgetown restaurant helped spark days of headlines, speculation, and political accusations in the U.S. The story wasn’t that Russian intelligence officials visited Washington—those kinds of contact channels have existed on and off for decades. The story was how fast a small, plausibly deniable leak turned into a larger narrative that undermined trust.
That’s the part many teams still underestimate: modern information operations don’t need a big lie to work. They need timing, amplification, and an audience primed to interpret fragments as proof.
This case—shared by former CIA Senior Executive Glenn Corn—reads like a field manual for how Russia’s services exploit domestic polarization. For leaders working in defense, national security, cyber defense, or public-sector communications, it also points to a practical requirement: AI-powered intelligence analysis has to be part of your operational stack, because the volume, velocity, and cross-platform spread of influence activity have outpaced purely human monitoring.
The “leak” is the operation: what happened in 2018
The core point is straightforward: Russian intelligence leadership (SVR first, then FSB) conducted tightly controlled visits to Washington in January 2018 for counterterrorism discussions, with both sides agreeing to minimal publicity. According to Corn, SVR chief Sergey Naryshkin later indicated that a Russian journalist “happened” to be at the same restaurant and would “probably write a story.” Soon after, media reports circulated.
The reporting did what influence operations often rely on:
- It introduced uncertainty (who met whom, and where?)
- It added unverifiable insinuations (possible White House/NSC contacts)
- It included at least one false claim (that the GRU chief was present)
- It magnified political suspicion in an already volatile environment
Corn’s argument is worth repeating in plainer language: even if the initial leak contained no explicit falsehood, it can still be an information operation if the intent is to inflame tensions, weaken confidence in institutions, and exploit pre-existing bias.
Why this matters in late 2025
As 2025 closes, the environment is even more favorable to adversarial influence than it was in 2018:
- Cheap synthetic media lowers the cost of plausible “evidence.”
- Recommendation algorithms reward outrage and “breaking” speculation.
- Fragmented audiences make it harder to issue a single corrective narrative.
- Holiday news cycles (late November through early January) often mean thinner staffing—exactly when an adversary can seed confusion and let it run.
This is why the “AI in Defense & National Security” conversation can’t stay abstract. Influence defense is operational work: monitoring, triage, attribution signals, response playbooks, and leadership decision-making under time pressure.
Why Russian information operations work so well (and why teams misread them)
Russian-style information operations succeed because they’re designed around human cognition, not “facts.” They target emotion first, verification last.
The real payload is distrust
In this case, the payload wasn’t “the U.S. secretly colluded.” The payload was:
- Your institutions are hiding something.
- Your leaders can’t be trusted.
- The system is rigged, corrupt, or incompetent.
That message travels even when specific details are later corrected.
Here’s the thing I’ve found working with security teams: organizations spend too much time arguing about whether a claim is disinformation and not enough time assessing whether the narrative is being shaped to force a damaging interpretation.
“Operational combinations” and narrative scaffolding
Corn references a classic Russian concept often described as an “operational combination”—a coordinated set of moves where:
- A real event provides legitimacy (a sanctioned visit)
- A controlled leak provides ignition (restaurant “sighting”)
- Rumors/speculation provide expansion (who else was involved?)
- Polarized actors provide amplification (domestic factions do the work)
Those expansion and amplification stages, the narrative scaffolding, are where AI-enabled detection can help the most. Humans can spot a lie. Machines can spot patterned amplification across platforms at scale.
Where AI-powered intelligence analysis changes the math
AI doesn’t replace analysts in information warfare. It gives them time back and improves signal-to-noise. The goal is earlier detection, faster prioritization, and more disciplined responses.
1) Detecting abnormal narrative growth before it becomes “the story”
A practical, measurable objective: identify emerging narratives within the first 30–120 minutes of cross-platform pickup.
AI systems can support this by:
- Clustering near-duplicate claims and paraphrases (even when wording changes)
- Flagging sudden increases in repost velocity (“burst” behavior)
- Identifying coordinated posting patterns (timing, repetition, account creation spikes)
- Detecting cross-language propagation (Russian → English → local outlets)
This is the difference between responding when a rumor is niche versus when it’s already been mainstreamed.
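To make the “burst” flagging concrete, here is a minimal sketch of velocity monitoring in Python. It assumes posts have already been collected and matched to a single narrative cluster; the 15-minute window and 3x threshold are illustrative values, not tuned ones.

```python
# Minimal sketch: flag "burst" behavior in repost velocity for one narrative cluster.
# Assumes `timestamps` come from posts already matched to the narrative; the
# 15-minute window and 3x threshold are illustrative, not tuned values.
from collections import Counter
from datetime import datetime, timedelta

def burst_windows(timestamps, window_minutes=15, baseline_windows=8, factor=3.0):
    """Return start times of windows whose post count exceeds `factor` times
    the average of the preceding `baseline_windows` windows."""
    if not timestamps:
        return []
    timestamps = sorted(timestamps)
    window = timedelta(minutes=window_minutes)
    start = timestamps[0]
    counts = Counter((ts - start) // window for ts in timestamps)
    series = [counts.get(i, 0) for i in range(max(counts) + 1)]

    bursts = []
    for i in range(1, len(series)):
        history = series[max(0, i - baseline_windows):i]
        baseline = sum(history) / len(history)
        if series[i] >= factor * max(baseline, 1):
            bursts.append(start + i * window)
    return bursts

# Example: three quiet hours of sparse posts, then 40 reposts in under 15 minutes.
t0 = datetime(2025, 11, 28, 9, 0)
quiet = [t0 + timedelta(minutes=10 * i) for i in range(18)]
spike = [t0 + timedelta(hours=3, seconds=20 * i) for i in range(40)]
print(burst_windows(quiet + spike))  # flags the window starting at 12:00
```

In production this logic would sit behind the clustering step, so each narrative cluster carries its own velocity series rather than one firehose-wide count.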
2) Mapping the influence supply chain
The biggest operational advantage AI can provide is provenance and pathway mapping—not perfect attribution, but a useful answer to: How did this narrative travel?
An AI-assisted workflow can produce a map of:
- First observed mentions
- Top amplifiers (accounts, communities, outlets)
- Bridge nodes (who carried it from one audience to another)
- “Narrative mutations” (how the claim changed as it spread)
That map helps decision-makers avoid a common mistake: responding to the loudest node instead of the most central node.
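As an illustration of what that map can look like in practice, here is a minimal sketch using networkx. It assumes repost or citation relationships are already extracted from collection; the account names are made up, and betweenness centrality is one reasonable proxy for “bridge” behavior, not the only one.

```python
# Minimal sketch: build a spread graph and surface amplifiers and bridge nodes.
# Assumes repost/citation pairs are already extracted; account names are made up,
# and betweenness centrality is one reasonable proxy for "bridge" behavior.
import networkx as nx

# Each pair is (origin_account, account_that_picked_it_up).
reposts = [
    ("ru_outlet_a", "aggregator_1"), ("ru_outlet_a", "aggregator_2"),
    ("aggregator_1", "us_politics_blog"), ("aggregator_2", "us_politics_blog"),
    ("us_politics_blog", "mainstream_reporter"), ("us_politics_blog", "partisan_forum"),
]

G = nx.DiGraph(reposts)

# Top amplifiers: accounts the narrative spread outward from the most (out-degree).
amplifiers = sorted(G.out_degree(), key=lambda kv: kv[1], reverse=True)

# Bridge nodes: accounts sitting on the paths between otherwise separate audiences.
bridges = sorted(nx.betweenness_centrality(G).items(), key=lambda kv: kv[1], reverse=True)

print("top amplifiers:", amplifiers[:3])
print("bridge candidates:", bridges[:3])
```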
3) Rapid credibility scoring for claims, not sources
Influence defense often fails when teams think in binary: credible vs not credible. A stronger posture uses graded confidence and focuses on claims.
AI can assist by scoring a claim against:
- Known timelines (does it conflict with verified travel/meeting windows?)
- Consistency across independent reporting (do details converge or diverge?)
- Linguistic markers of speculation (modal verbs, insinuation patterns)
- Reuse of known narrative templates (e.g., “secret meeting,” “backchannel,” “sources say”)
You’re not automating truth. You’re automating triage.
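Here is a minimal sketch of what graded, claim-level triage can look like. The signals, weights, and example values are illustrative assumptions; in practice each signal would come from its own model or an analyst-maintained reference set.

```python
# Minimal sketch: graded triage score for a single claim, not a truth verdict.
# Signal names, weights, and the example values are illustrative assumptions; in
# practice each signal would come from its own model or analyst-maintained list.
from dataclasses import dataclass

@dataclass
class ClaimSignals:
    conflicts_with_verified_timeline: bool  # contradicts a known travel/meeting window
    independent_sources_converging: int     # independent reports whose details match
    speculation_marker_ratio: float         # share of hedging phrases ("sources say", "may have")
    matches_known_template: bool            # reuses a recognized template ("secret meeting")

def triage_score(s: ClaimSignals) -> float:
    """Return 0.0 (monitor) to 1.0 (escalate now)."""
    score = 0.0
    if s.conflicts_with_verified_timeline:
        score += 0.35
    if s.matches_known_template:
        score += 0.25
    score += 0.25 * min(s.speculation_marker_ratio, 1.0)
    # Convergent, independent confirmation argues for fact-checking over rapid rebuttal.
    score -= 0.05 * min(s.independent_sources_converging, 3)
    return max(0.0, min(1.0, score))

claim = ClaimSignals(True, 1, 0.6, True)
print(round(triage_score(claim), 2))  # 0.7, high enough to queue for analyst review
```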
Snippet-worthy rule: The faster a claim spreads, the more your response must prioritize clarity over completeness.
A practical playbook: AI-enabled defense against “leak-driven” info ops
If you’re responsible for national security communications, cyber threat intelligence, protective security, or mission planning, this is the operational question: What do we do Monday morning?
Step 1: Build an “influence early warning” lane next to cyber alerts
Most orgs have SIEM alerts and incident queues. Few have an equivalent for information operations.
Create a lightweight lane with:
- A monitored set of keywords/entities (leaders, units, facilities, operations)
- Narrative clustering dashboards (what themes are emerging?)
- A severity model (reach velocity + operational sensitivity; sketched after this list)
- An on-call rotation during known high-risk periods (elections, crises, holidays)
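The severity model does not need to be elaborate to be useful. A minimal sketch, assuming you already count mentions per entity per hour and maintain sensitivity tiers (both hypothetical here):

```python
# Minimal sketch: a severity model combining reach velocity with operational
# sensitivity. The sensitivity tiers, thresholds, and actions are illustrative.
SENSITIVITY = {"facility": 3, "named_leader": 2, "routine_event": 1}

def severity(mentions_last_hour: int, mentions_prev_hour: int, entity_type: str) -> str:
    """Hour-over-hour growth, weighted by how sensitive the mentioned entity is."""
    velocity = mentions_last_hour / max(mentions_prev_hour, 1)
    score = velocity * SENSITIVITY.get(entity_type, 1)
    if score >= 9:
        return "page the on-call lead"
    if score >= 4:
        return "queue for analyst review"
    return "monitor"

# 8x growth in mentions of a named leader -> page the on-call lead.
print(severity(mentions_last_hour=120, mentions_prev_hour=15, entity_type="named_leader"))
```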
Step 2: Pre-author “truth scaffolds” for predictable scenarios
In Corn’s case, the core correction was procedural: the visit was coordinated, interagency-cleared, and not unusual in concept.
You can pre-author modular statements that cover:
- What the event was (and wasn’t)
- What oversight existed (interagency coordination, approvals)
- What is unknown (and what’s being reviewed)
- Where updates will be posted
This avoids the scramble where every word becomes a political football.
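One way to keep these statements genuinely modular is to store them as pre-cleared blocks rather than a single monolithic document. A minimal sketch, with placeholder wording:

```python
# Minimal sketch: a "truth scaffold" stored as modular, pre-cleared blocks.
# The keys and wording are placeholders; the point is that each block can be
# updated independently without re-clearing the whole statement.
SCAFFOLD = {
    "what_it_was": "A pre-coordinated, interagency-cleared counterterrorism liaison visit.",
    "what_it_was_not": "Not a negotiation, a policy change, or an undisclosed channel.",
    "oversight": "Participants and approvals are documented under standard procedures.",
    "unknowns": "How details of the visit reached the press is under review.",
    "updates": "Further updates will be posted on the official press page.",
}

ORDER = ("what_it_was", "what_it_was_not", "oversight", "unknowns", "updates")

def assemble(blocks: dict, order=ORDER) -> str:
    """Join whichever blocks are cleared for release, in a fixed order."""
    return " ".join(blocks[key] for key in order if key in blocks)

print(assemble(SCAFFOLD))
```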
Step 3: Use AI to recommend response options, not write the response
AI can propose:
- The top 3 misunderstandings driving the narrative
- The audiences most exposed (by geography, community, language)
- The best channel mix (press, social, internal memo, partner brief)
- The likely next mutation of the claim
But keep the final message human-owned. Influence operations are about trust; outsourcing your voice to automation is a self-own.
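That division of labor can be enforced in the data model itself: the system fills the proposal fields, and the final message stays empty until a named person writes and approves it. A minimal sketch, with illustrative field names:

```python
# Minimal sketch: a recommendation record that keeps the final message human-owned.
# Field names are illustrative; the model fills the proposed/predicted fields,
# and `approved_message` stays empty until a named person writes it.
from dataclasses import dataclass

@dataclass
class ResponseRecommendation:
    narrative_id: str
    proposed_misunderstandings: list   # top misunderstandings driving the narrative
    proposed_audiences: list           # most-exposed audiences (geography, community, language)
    proposed_channels: list            # suggested channel mix (press, social, partner brief)
    predicted_next_mutation: str       # likely next form of the claim
    approved_message: str = ""         # written by a person, never auto-filled
    approver: str = ""                 # accountable human owner

rec = ResponseRecommendation(
    narrative_id="restaurant-leak-2018",
    proposed_misunderstandings=["the visit was secret", "the GRU chief attended"],
    proposed_audiences=["national press corps", "Russian-language diaspora outlets"],
    proposed_channels=["press statement", "partner brief"],
    predicted_next_mutation="claims of additional undisclosed meetings",
)
```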
Step 4: Treat “amplification” as a hostile act—even when content is mostly true
This case is a reminder that adversaries can win with real facts arranged maliciously.
Operationally, that means your team should track:
- Coordinated timing patterns
- Repeated insinuations that survive corrections
- Persistent “open loops” (questions designed to linger)
If you only label disinformation when it’s demonstrably false, you’ll miss half the battlefield.
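Coordinated timing is one of the easier signals on that list to operationalize. A minimal sketch, assuming posts are already normalized to comparable text; the window and account threshold are illustrative:

```python
# Minimal sketch: flag coordinated timing, i.e. many distinct accounts pushing
# near-identical text inside a narrow window. The input shape, 10-minute window,
# and 5-account threshold are illustrative assumptions.
from collections import defaultdict
from datetime import timedelta

def coordinated_texts(posts, window=timedelta(minutes=10), min_accounts=5):
    """posts: iterable of (account, timestamp, normalized_text) tuples."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text].append((ts, account))

    flagged = []
    for text, items in by_text.items():
        items.sort(key=lambda pair: pair[0])
        # Slide across the posts for this text; flag it if any window
        # contains posts from enough distinct accounts.
        for i, (window_start, _) in enumerate(items):
            accounts = {acct for ts, acct in items[i:] if ts <= window_start + window}
            if len(accounts) >= min_accounts:
                flagged.append(text)
                break
    return flagged
```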
People also ask: quick answers for decision-makers
Is information operations work primarily a cyber problem or a comms problem?
It’s both. Cybersecurity handles platform abuse, bot networks, and compromised accounts. Strategic communications handles narrative response and trust preservation. AI helps unify them with shared detection and common operating pictures.
Can AI reliably attribute an operation to a nation-state?
Not on its own. AI is best at pattern detection and anomaly spotting. Attribution still requires intelligence tradecraft, contextual knowledge, and often classified sources and methods.
What’s the biggest mistake leaders make during an information operation?
They wait for certainty. In fast-moving influence events, delay is interpreted as guilt and fills the vacuum with speculation.
What this case study should change in your 2026 plans
Corn’s story is a clean illustration of a hard truth: an adversary can spend almost nothing and still trigger a high-impact domestic fight if they understand your fault lines. Russia didn’t need to forge documents here. A small leak plus predictable outrage did the job.
If you’re building an “AI in Defense & National Security” roadmap for 2026, put influence defense on the same tier as endpoint security and threat hunting. Fund it. Staff it. Integrate it with cyber and intel workflows. And measure it with the same seriousness.
A useful next step is an internal exercise: pick one sensitive but routine event (a foreign delegation visit, a procurement decision, a base incident). Then simulate a leak-driven narrative surge and test whether your team can detect it in under two hours, brief leadership in under four, and publish a stabilizing statement in under eight.
The question worth ending on: if a low-effort “restaurant leak” could bend headlines for days, what happens when the next operation arrives with synthetic media, coordinated amplification, and a holiday-week staffing gap?
This post is part of our “AI in Defense & National Security” series, focused on how AI strengthens intelligence analysis, cyber defense, and mission resilience under real-world pressure.