AI Counterintelligence Against Kremlin “Wet Work”

AI in Defense & National Security · By 3L3C

AI counterintelligence can detect Russian “wet work” patterns earlier. Learn practical workflows for AI-driven intelligence analysis and protective security.

Tags: AI in national security · Counterintelligence · Russia · Threat detection · Protective security · Intelligence analysis

A convicted assassin steps off a plane and gets a public embrace from the head of state. That single image—broadcast, replayed, and memed—does more than celebrate a man. It signals doctrine.

The 2024 spy swap that returned FSB killer Vadim Krasikov to Russia (after his German conviction for the 2019 Berlin murder of Zelimkhan Khangoshvili) wasn’t just a diplomatic trade. It was a recruitment poster for state violence: kill for the regime, and the regime will take care of you.

For national security teams, this matters for a practical reason: Russia’s pattern of targeted killings, poisonings, and coercive hostage diplomacy creates a persistent operational threat across Europe and beyond. And it’s exactly the kind of threat that modern AI-driven intelligence analysis should be built to detect—early, at scale, and with enough confidence to trigger real-world protective action.

The real signal: state-sponsored murder as policy

Russia’s targeted killings aren’t “outliers” or “rogue actors.” They’re part of a long-running culture inside Russian intelligence services—where assassination is treated as a legitimate tool of statecraft.

Historically, the practice reaches back through Czarist security services, the Cheka, and the KGB. In the Soviet era, the system professionalized covert murder with specialized labs, tradecraft, and a rewards structure that celebrated loyalty and punished defection with finality. Under Vladimir Putin, that tradition has become more visible, more assertive, and—crucially—more performative.

Two features define the modern pattern:

  • Deliberate signaling. Using rare, state-linked methods (for example, nerve agents associated with Russian state capacity) trades deniability for intimidation.
  • Institutional validation. Public praise, promotions, and protection communicate that these acts are not merely tolerated—they’re valued.

If you’re running counterintelligence, force protection, executive security, or insider-threat programs, the takeaway is simple: Russia’s “wet work” isn’t only about removing individuals. It’s about shaping behavior across entire communities—defectors, dissidents, journalists, diaspora networks, and anyone considering cooperation with the West.

Why attribution keeps lagging—and why AI can change that

Attribution in hostile-state operations often fails for an uncomfortable reason: many organizations still look for “courtroom proof” before they act.

That’s a losing standard in intelligence and protective security.

Targeted killings and attempted assassinations typically unfold across jurisdictions, use intermediaries, and exploit the seams between law enforcement, intelligence, and private-sector security. By the time a case is fully proven, the adversary has already achieved the main objective: deterrence by fear.

The AI advantage: pattern recognition under uncertainty

AI doesn’t replace investigators. It changes the timeline.

Well-designed AI programs in defense and national security excel at three things that traditional workflows struggle with:

  1. Fusing weak signals across many sources (travel anomalies, communications metadata, financial friction, surveillance indicators, incident reports).
  2. Detecting repeatable tradecraft (the “signature” of an actor or unit) even when each incident looks different on the surface.
  3. Prioritizing action by estimating risk fast enough to deploy protective measures.
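
As a concrete illustration of points 1 and 3, here is a minimal sketch of weak-signal fusion under simplified assumptions: each source contributes a normalized score, a weighted sum produces a per-entity priority, and anything over a threshold lands in an analyst queue. The source names, weights, and threshold are illustrative, not a reference implementation.

```python
# Minimal sketch: fuse weak signals from several sources into one
# prioritization score per entity. All source names, weights, and
# thresholds are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Signal:
    source: str      # e.g. "travel", "comms_metadata", "incident_report"
    strength: float  # normalized to 0.0-1.0 by the upstream pipeline

@dataclass
class EntityRisk:
    entity_id: str
    signals: list[Signal] = field(default_factory=list)

# Hypothetical per-source weights; in practice these would be tuned
# against historical cases and red-team exercises.
SOURCE_WEIGHTS = {
    "travel": 0.30,
    "comms_metadata": 0.25,
    "financial": 0.20,
    "surveillance_report": 0.25,
}

def fused_score(entity: EntityRisk) -> float:
    """Weighted sum of the strongest signal per source, capped at 1.0."""
    best_per_source: dict[str, float] = {}
    for sig in entity.signals:
        best_per_source[sig.source] = max(best_per_source.get(sig.source, 0.0), sig.strength)
    score = sum(SOURCE_WEIGHTS.get(src, 0.1) * s for src, s in best_per_source.items())
    return min(score, 1.0)

def triage_queue(entities: list[EntityRisk], threshold: float = 0.5) -> list[tuple[str, float]]:
    """Entities above threshold, highest risk first, for analyst review."""
    scored = [(e.entity_id, round(fused_score(e), 2)) for e in entities]
    return sorted((s for s in scored if s[1] >= threshold), key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    example = EntityRisk("case-0042", [
        Signal("travel", 0.9),
        Signal("comms_metadata", 0.8),
        Signal("surveillance_report", 0.6),
    ])
    print(triage_queue([example]))  # [('case-0042', 0.62)]
```

The point is not the arithmetic; it is that prioritization runs continuously and leaves an auditable trail of which sources drove each score.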

This is where modern intelligence systems should be heading: continuous, machine-assisted threat hunting for state-sponsored threats, not one-off analysis after the damage is done.

What “good” looks like in practice

A credible AI-enabled counterintelligence pipeline doesn’t claim magical certainty. It produces ranked hypotheses and explainable indicators that analysts and operators can act on.

For example, an AI system supporting protective security might flag:

  • A cluster of short-notice arrivals from specific transit routes
  • Device behavior consistent with operational security (burners, limited contact graphs)
  • Repeated proximity to a protected person’s known routines
  • Purchases, rentals, or accommodation patterns matching prior operations

None of those proves intent. Together, they can justify tightening access controls, altering routes, increasing surveillance, or initiating interagency coordination.
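
To make “ranked hypotheses and explainable indicators” concrete, here is a minimal sketch in which each indicator from the list above becomes a small named rule, and an alert is nothing more than the reasons that fired. The field names and thresholds are assumptions for illustration.

```python
# Sketch: explainable indicator rules for protective security.
# Each rule either fires with a human-readable reason or stays silent,
# so every alert carries its own explanation. Fields and thresholds
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CaseFacts:
    short_notice_arrivals_7d: int      # arrivals booked on short notice, last 7 days
    low_contact_devices: int           # devices with near-empty contact graphs
    proximity_hits_to_principal: int   # co-locations with the protected person's routines
    matched_procurement_patterns: int  # overlaps with purchases from prior operations

def evaluate(case: CaseFacts) -> list[str]:
    """Return the reasons that fired; an empty list means no alert."""
    reasons = []
    if case.short_notice_arrivals_7d >= 3:
        reasons.append(f"{case.short_notice_arrivals_7d} short-notice arrivals in 7 days")
    if case.low_contact_devices >= 2:
        reasons.append("multiple devices with minimal contact graphs (possible burners)")
    if case.proximity_hits_to_principal >= 2:
        reasons.append("repeated proximity to the principal's known routines")
    if case.matched_procurement_patterns >= 1:
        reasons.append("procurement pattern overlaps with prior operations")
    return reasons

if __name__ == "__main__":
    for reason in evaluate(CaseFacts(3, 2, 2, 1)):
        print("-", reason)
```

Because the output is a list of reasons rather than a bare score, the same record can drive an access-control change, a route alteration, or an interagency referral without further translation.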

The modern “kill chain” is informational—and that’s where AI fits

Assassination plots don’t begin with a weapon. They begin with information.

The operational sequence typically includes:

  1. Target selection (who matters, who’s vulnerable, who sends a message)
  2. Access planning (travel, cover, surveillance, procurement)
  3. Shaping the environment (disinformation, intimidation, coercion of contacts)
  4. Action (attack, poisoning, “accident,” disappearance)
  5. Narrative control (denial, confusion, alternate explanations)

AI can disrupt multiple stages—especially the parts that depend on repeated, scalable reconnaissance.

Stage disruption: where AI creates real friction

Detection (Stages 2–3):

  • Computer vision can identify suspicious route surveillance patterns around facilities.
  • Anomaly models can surface “too-clean” digital behavior (the absence of normal signals).
  • Entity resolution can connect near-matches in names, documents, and identities across datasets.
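
As a toy illustration of the entity-resolution point, the sketch below uses simple string similarity from Python's standard library (difflib) to surface near-matching names across two record sets. Production entity resolution adds transliteration, phonetic matching, dates of birth, and graph context; the threshold and example records here are assumptions.

```python
# Toy entity-resolution sketch: surface near-matching names across two
# record sets with standard-library string similarity. Real systems add
# transliteration, phonetic matching, and biographic or graph context.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def near_matches(records_a, records_b, threshold=0.85):
    """Yield (record_a, record_b, score) pairs whose names look like variants."""
    for ra in records_a:
        for rb in records_b:
            score = similarity(ra["name"], rb["name"])
            if score >= threshold:
                yield ra, rb, round(score, 2)

if __name__ == "__main__":
    visa_applications = [{"name": "Vadim Sokolov", "doc": "A123"}]  # illustrative record
    watchlist = [{"name": "Vadim N. Sokolov", "doc": "B987"}]       # illustrative record
    for a, b, score in near_matches(visa_applications, watchlist):
        print(f"possible match ({score}): {a['name']} <-> {b['name']}")
```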

Protection (Stages 2–4):

  • AI-informed scheduling and routing reduce predictability for high-risk individuals.
  • Real-time risk scoring helps allocate limited protective teams where risk is highest.
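
One way AI-informed routing can reduce predictability, sketched under assumed inputs: pick the next movement route at random, with weights that penalize both recently used routes and routes carrying an elevated risk score. Route names and risk values are hypothetical.

```python
# Sketch: unpredictable route selection for a protected movement.
# Weights penalize recently used routes and routes with elevated risk;
# no route is ever fully excluded, so the pattern stays hard to predict.
# Route names and risk scores are hypothetical upstream inputs.
import random

def pick_route(routes, recent, risk, seed=None):
    """routes: list of names; recent: names used lately; risk: name -> 0..1."""
    rng = random.Random(seed)
    weights = []
    for r in routes:
        w = 1.0
        if r in recent:
            w *= 0.3                  # discourage repetition
        w *= 1.0 - risk.get(r, 0.0)   # discourage routes flagged as risky
        weights.append(max(w, 0.01))  # keep a floor so choices stay unpredictable
    return rng.choices(routes, weights=weights, k=1)[0]

if __name__ == "__main__":
    routes = ["riverside", "ring_road", "tunnel", "back_streets"]
    print(pick_route(routes, recent=["riverside"], risk={"tunnel": 0.6}))
```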

Attribution and deterrence (Stage 5):

  • Fast, defensible analytic narratives limit the effectiveness of denial and confusion.
  • Rapid exposure increases political and operational costs for the adversary.

Here’s the thing about deterrence: it’s not only about punishment after the fact. It’s also about making operations harder to run and easier to expose. AI can contribute directly to both.

What organizations should build now (and what to avoid)

Most organizations get this wrong by buying “AI tools” before they’ve built the operating model that makes AI useful.

A strong AI counterintelligence posture is less about models and more about repeatable workflows.

A practical blueprint for AI-enabled counterintelligence

  1. Define the mission outcomes (not the features).

    • Examples: “reduce time to detect hostile surveillance,” “increase protective posture lead time,” “improve cross-case linkage.”
  2. Create a minimum viable data fusion layer.

    • Normalize incident logs, badge/access data, travel feeds (where lawful), security reports, open-source reporting, and internal case notes (steps 2 and 3 are sketched after this list).
  3. Build an analytic triage loop.

    • The model flags, analysts validate, operators act, and outcomes feed back into the system.
  4. Prioritize explainability over novelty.

    • If an analyst can’t explain why a flag was raised, it won’t drive action.
  5. Institutionalize red teaming and adversarial testing.

    • Assume a capable adversary will probe your thresholds and exploit blind spots.
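
To make steps 2 and 3 of this blueprint less abstract, here is a minimal, hypothetical sketch of a normalized fusion-layer record and the triage loop around it (model flags, analyst validates, outcome feeds back). Field names, sources, and statuses are assumptions; a real deployment would run on governed, access-controlled data stores.

```python
# Minimal sketch of a fusion-layer record and the analytic triage loop.
# Field names, sources, and statuses are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Status(Enum):
    FLAGGED = "flagged"      # raised by a model or rule
    VALIDATED = "validated"  # confirmed by an analyst
    ACTIONED = "actioned"    # protective measure taken
    DISMISSED = "dismissed"  # closed as a false positive; feeds back as training signal

@dataclass
class FusionEvent:
    event_id: str
    source: str              # "incident_log", "badge_access", "travel_feed", "osint", ...
    observed_at: datetime
    entity_id: str
    indicator: str           # normalized indicator name
    score: float             # model or rule confidence, 0..1
    status: Status = Status.FLAGGED
    notes: list[str] = field(default_factory=list)

def analyst_validate(event: FusionEvent, confirmed: bool, note: str) -> FusionEvent:
    """The analyst decision closes the loop; dismissals become feedback for tuning."""
    event.status = Status.VALIDATED if confirmed else Status.DISMISSED
    event.notes.append(note)
    return event

if __name__ == "__main__":
    ev = FusionEvent("evt-001", "badge_access", datetime.now(timezone.utc),
                     "case-0042", "after_hours_access_cluster", 0.72)
    ev = analyst_validate(ev, confirmed=True, note="matches prior surveillance report")
    print(ev.status, ev.notes)
```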

Common failure modes (and why they’re dangerous)

  • Over-automation. If alerts go straight to action without human review, you’ll burn trust fast.
  • Under-governance. AI systems in national security contexts must be auditable, access-controlled, and legally compliant.
  • Single-source dependency. Reliance on one data stream is brittle; fusion is the point.
  • “Perfect attribution” thinking. Waiting for certainty is how you lose initiative.

Case-study lens: what the Kremlin wants you to learn

Publicly celebrating an assassin is not subtle. It’s meant to teach three lessons:

  1. To Russian officers: defection isn’t survivable.
  2. To exiles and dissidents: distance won’t protect you.
  3. To Western governments: escalation will be tested in the gray zone.

The operational implication for the West—especially in Europe during winter travel surges, high-profile diplomatic events, and predictable holiday routines—is that threat windows widen. More movement, more crowds, more noise. That helps an adversary hide.

AI systems thrive in exactly this environment, but only if they’re tuned for operations, not demos.

A useful rule: if your AI can’t help an analyst or protective team make a decision within hours—not weeks—it’s not countering “wet work.” It’s filing paperwork.

What to do next: turning analysis into readiness

If your organization supports defense, intelligence, diplomatic security, or critical infrastructure protection, treat state-sponsored assassination as a repeatable threat type, not a series of shocking headlines.

Three concrete next steps that work in the real world:

  1. Build a “tradecraft library” your AI can learn from. Codify indicators from historic cases: surveillance behaviors, travel patterns, cover discipline, procurement habits, narrative tactics. Turn lessons into machine-readable features (an illustrative sketch follows these steps).

  2. Run a 90-day pilot focused on one operational question. Pick a measurable outcome like “time from report to prioritized risk assessment” or “cross-case linkage rate.” Don’t boil the ocean.

  3. Align your AI program with action owners. The best model in the world is useless if it doesn’t map to who changes routes, who increases patrols, who contacts partners, and who documents decisions.
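
As an illustration of what “machine-readable features” can mean for step 1, the sketch below encodes a handful of tradecraft indicators as structured records a detection pipeline could match against. The categories, indicator text, and weights are illustrative placeholders, not an operational library.

```python
# Sketch: a tradecraft library as structured, machine-readable records.
# Categories, indicator text, and weights are illustrative placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class TradecraftIndicator:
    category: str  # "surveillance", "travel", "cover", "procurement", "narrative"
    pattern: str   # what a detector should look for
    weight: float  # relative contribution to a composite risk score

TRADECRAFT_LIBRARY = [
    TradecraftIndicator("surveillance", "repeated slow passes near a protected residence", 0.6),
    TradecraftIndicator("travel", "short-notice arrival via indirect transit routes", 0.5),
    TradecraftIndicator("cover", "identity documents with near-duplicate biographic data", 0.7),
    TradecraftIndicator("procurement", "cash rental of vehicles near a planned movement", 0.4),
    TradecraftIndicator("narrative", "pre-positioned alternative explanations in friendly media", 0.3),
]

def indicators_for(category: str) -> list[TradecraftIndicator]:
    """Let each detector pull only the slice of the library it can act on."""
    return [i for i in TRADECRAFT_LIBRARY if i.category == category]

if __name__ == "__main__":
    for ind in indicators_for("travel"):
        print(ind.pattern, ind.weight)
```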

This post sits in our AI in Defense & National Security series for a reason: the future of counterintelligence won’t be won by having more data. It’ll be won by turning data into decisions fast enough to protect people and deter adversaries.

Russia’s message is that “wet work” is part of the toolkit. The West’s answer should be equally clear: hostile tradecraft can be detected, exposed, and made costly—before it succeeds.

If you’re building or modernizing an AI-driven intelligence analysis capability, where are you strongest today: data fusion, analytic workflow, or operational response?