AI Threat Modeling: The Red Flags We Missed in 2022

AI in Defense & National Security • By 3L3C

AI threat modeling can reduce strategic surprise by treating leader entrenchment as a risk multiplier, not background context. Build warning systems that flag broken assumptions early.

AI in defense · threat modeling · intelligence analysis · strategic warning · Russia-Ukraine war · risk assessment

A lot of smart people misread the biggest European war in generations.

In the weeks before Russia’s February 2022 full-scale invasion of Ukraine, public commentary across governments, think tanks, and major media leaned heavily toward “probably not.” The logic seemed straightforward: sanctions would bite, casualties would mount, and Moscow would be stuck occupying a hostile country. A rational leader wouldn’t pay that price.

That logic wasn’t foolish. It was incomplete.

The missing variable was leader entrenchment—the degree to which a long-tenured leader has reshaped elites and institutions so that dissent becomes expensive and coordination against him becomes difficult. When entrenchment is high, the normal “domestic constraints” model breaks. And that’s where this becomes highly relevant to the AI in Defense & National Security conversation: if our threat assessments rely on assumptions that don’t hold in edge-case regimes, we need analytic systems that surface those broken assumptions early—consistently, audibly, and with evidence.

Why traditional risk models underweighted invasion probability

The most common analytic error wasn’t ignoring Russian troop movements or rhetoric. It was over-trusting a generic cost–benefit lens that assumes leaders are meaningfully constrained by insiders.

The “checks and balances” assumption sneaks into a lot of analysis

Many forecasts implicitly treated Russia like a typical authoritarian system: elites, oligarchs, security bosses, and bureaucratic operators constrain the leader because they control resources, legitimacy, and implementation.

That’s often true—especially early in a leader’s tenure.

But by 2022, Vladimir Putin had spent over two decades doing the things entrenched leaders do:

  • Recomposing the winning coalition (promoting loyalists; sidelining rivals)
  • Raising the price of dissent (legal, financial, and coercive tools)
  • Reducing elite coordination (fragmenting power centers; keeping insiders dependent)
  • Centralizing narrative control (state-aligned media and messaging discipline)

When those conditions are in place, the leader’s political “budget” for risky moves expands. The state may still pay enormous costs, but the leader’s personal cost of failure is delayed, diluted, or deflected.

A clean way to say it:

High entrenchment turns “irrational” state actions into survivable leader choices.

Why analysts kept returning to “he won’t do it”

Even when warning signs stacked up, many assessments defaulted to: “It’s too self-defeating.” That’s a rational response if you assume elite constraints, accurate feedback, and internal accountability.

Entrenchment erodes all three.

  • Elite constraints weaken because careers and safety depend on compliance.
  • Feedback degrades because subordinates filter bad news.
  • Accountability becomes performative—public loyalty displays matter more than internal debate.

The result is a system that can execute extreme options—war, rapid escalation, abrupt strategic realignment—without the usual internal braking mechanisms.

Entrenchment is a risk multiplier—and AI should treat it that way

Leader entrenchment isn’t a vibe. It’s an analytic variable that can be operationalized.

The practical lesson from the 2022 miss is that tenure and elite management behaviors should raise baseline risk—even when the “objective” economic and military costs look prohibitive.

A simple model shift: from “state rationality” to “leader survivability”

A lot of forecasting still anchors on the question: Does this action serve national interest given likely costs?

A better first diagnostic in entrenched regimes is:

  1. Can the leader survive the costs politically?
  2. Can the leader suppress or delay blame?
  3. Is the information environment distorted enough to enable overconfidence?

If the answers are “yes,” the action becomes more plausible—even if it’s strategically terrible.
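
To make that concrete, here is a minimal sketch of a survivability-first check in Python. Everything in it is illustrative: the three boolean fields simply mirror the questions above, and the bump_per_yes increment is an arbitrary placeholder, not a calibrated adjustment.

```python
# Minimal sketch of a "leader survivability" first pass.
# Field names and the bump_per_yes increment are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class SurvivabilityDiagnostic:
    survives_costs_politically: bool  # Can the leader survive the costs politically?
    can_deflect_blame: bool           # Can the leader suppress or delay blame?
    distorted_information: bool       # Is the information environment distorted enough for overconfidence?


def adjust_plausibility(base_probability: float,
                        diagnostic: SurvivabilityDiagnostic,
                        bump_per_yes: float = 0.10) -> float:
    """Raise the plausibility of an extreme option for each 'yes' answer.

    base_probability is whatever the usual state-level cost-benefit estimate
    produced; bump_per_yes is an arbitrary illustrative increment.
    """
    yes_count = sum([diagnostic.survives_costs_politically,
                     diagnostic.can_deflect_blame,
                     diagnostic.distorted_information])
    return min(1.0, base_probability + bump_per_yes * yes_count)


# A cost-benefit model might say 15%; with all three answers "yes",
# the adjusted figure is 45%.
print(round(adjust_plausibility(0.15, SurvivabilityDiagnostic(True, True, True)), 2))
```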

This isn’t just about Russia. It generalizes across regions and regime types. Long-tenured leaders can exist in democracies and authoritarian systems alike; what matters is whether tenure correlates with:

  • frequent or high-profile elite purges
  • increasing security-service dominance in top posts
  • tightening media control and censorship enforcement
  • rule changes that reduce genuine competition
  • “loyalty theater” events where officials must publicly align

What AI can do better than humans in this specific problem

Humans are good at narratives. We’re worse at consistently weighting structural indicators when the day-to-day news cycle is noisy.

AI systems—used responsibly—can help because they’re strong at:

  • feature consistency (the model doesn’t “forget” entrenchment because a new headline feels salient)
  • pattern detection across time (slow consolidation trends become visible)
  • cross-source triangulation (state media signals + personnel changes + coercive events)
  • scenario stress-testing (what changes if the leader’s domestic constraints are near-zero?)

The point isn’t that AI “predicts invasions.” The point is that it can flag when our default assumptions are likely wrong, which is often the real failure mode.

Building AI-driven warning systems that don’t miss the structural signals

If you’re deploying AI for intelligence analysis, early warning, or strategic planning, you need more than a model that ingests troop movements and economic indicators. You need a model that treats political structure as data.

1) Turn entrenchment into a measurable index

An “entrenchment index” doesn’t need to be perfect to be useful. It needs to be consistent and explainable.

A practical index can combine:

  • Tenure (years in power; continuity through proxies)
  • Elite churn (rate of removals/reassignments among top officials)
  • Security capture (share of key roles held by security-service-linked figures)
  • Repression signals (new laws, enforcement spikes, high-profile prosecutions)
  • Narrative control (censorship events, outlet closures, message uniformity)

Each component can be scored quarterly. The output isn’t a prophecy; it’s a risk posture input.

If entrenchment rises while coercive capacity rises, the probability of extreme options rises.
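
As a sketch of how such an index might be computed, the snippet below weights the five components listed above. The weights, the 0-to-1 scoring convention, and the quarterly example values are assumptions made for illustration, not a validated scheme.

```python
# Illustrative entrenchment index. Component names mirror the list above;
# the weights and the 0-1 scoring convention are assumptions for this sketch.
ENTRENCHMENT_WEIGHTS = {
    "tenure": 0.25,             # years in power; continuity through proxies
    "elite_churn": 0.20,        # removals/reassignments among top officials
    "security_capture": 0.20,   # key roles held by security-service-linked figures
    "repression": 0.20,         # new laws, enforcement spikes, prosecutions
    "narrative_control": 0.15,  # censorship events, outlet closures, uniformity
}


def entrenchment_index(component_scores: dict[str, float]) -> float:
    """Weighted average of component scores, each already scaled to [0, 1].

    Missing components count as 0 so the index stays conservative and
    explainable ("no evidence collected for X this quarter").
    """
    return sum(weight * component_scores.get(name, 0.0)
               for name, weight in ENTRENCHMENT_WEIGHTS.items())


# Quarterly scoring example with invented values.
q_scores = {"tenure": 0.9, "elite_churn": 0.6, "security_capture": 0.7,
            "repression": 0.8, "narrative_control": 0.85}
print(round(entrenchment_index(q_scores), 2))  # -> 0.77
```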

2) Fuse “capability” signals with “permission structure” signals

Classic warning frameworks heavily weight capability: logistics, readiness, deployments, munitions, exercises.

But capability is only half the story. The other half is permission structure—whether the leader can choose and sustain high-risk actions.

AI-enabled fusion can explicitly pair:

  • Troop massing + elite purges
  • Escalatory rhetoric + censorship enforcement
  • War-prep logistics + loyalty theater events

This creates alerts that read like: “Capability is increasing, and internal constraints are decreasing.” That’s a different—and sharper—warning than “troops are moving again.”
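
A minimal pairing rule could look like the following. The signal names and the require-one-from-each-family rule are assumptions for illustration; a fielded system would score and decay signals rather than treat them as booleans.

```python
# Sketch of fusing "capability" signals with "permission structure" signals.
# Signal names and the one-from-each-family rule are illustrative assumptions.
CAPABILITY_SIGNALS = {"troop_massing", "war_prep_logistics", "escalatory_rhetoric"}
PERMISSION_SIGNALS = {"elite_purges", "censorship_enforcement", "loyalty_theater"}


def fused_alerts(active_signals: set[str]) -> list[str]:
    """Alert only when both families are active, so the warning reads
    'capability is increasing AND internal constraints are decreasing'."""
    capability = CAPABILITY_SIGNALS & active_signals
    permission = PERMISSION_SIGNALS & active_signals
    if capability and permission:
        return [f"ALERT: {c} observed while {p} signals weakening internal constraints"
                for c in sorted(capability) for p in sorted(permission)]
    return []


print(fused_alerts({"troop_massing", "loyalty_theater"}))
print(fused_alerts({"troop_massing"}))  # capability alone -> no fused alert
```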

3) Detect echo-chamber risk inside decision systems

Entrenchment often produces an information pathology: subordinates report what the leader wants to hear.

AI can’t see into a leader’s mind, but it can detect external markers of internal distortion, such as:

  • abrupt message discipline shifts across state-aligned channels
  • synchronized narratives that deny obvious realities
  • shrinking diversity of elite statements (fewer dissenting frames)
  • institutional changes that centralize intelligence vetting

In practical terms, you’re trying to estimate the probability that decision-makers are operating with filtered inputs—which raises the chance of miscalculation.
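
One crude but consistent proxy is the diversity of frames in elite and state-media statements: if it collapses, messaging has synchronized. The sketch below uses Shannon entropy over frame labels; the labels are assumed to come from an upstream classifier, and the example data is invented.

```python
# Rough proxy for shrinking narrative diversity: Shannon entropy over the
# distribution of "frames" found in elite statements. Frame labels are assumed
# to come from an upstream classifier; the example data is invented.
import math
from collections import Counter


def frame_entropy(frame_labels: list[str]) -> float:
    """Lower entropy = more synchronized messaging = higher echo-chamber risk."""
    counts = Counter(frame_labels)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())


last_quarter = ["security_threat", "sovereignty", "economic_strength",
                "history", "security_threat"]
this_quarter = ["security_threat"] * 9 + ["history"]
print(round(frame_entropy(last_quarter), 2))  # ~1.92: several competing frames
print(round(frame_entropy(this_quarter), 2))  # ~0.47: messaging has converged
```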

4) Make outputs usable for commanders and policymakers

If the tool can’t be briefed in five minutes, it won’t matter.

The best AI-assisted threat modeling outputs I’ve seen share three traits:

  • Transparent drivers: “Risk increased because A, B, and C shifted.”
  • Comparable baselines: “This month resembles prior pre-escalation patterns more than normal posture.”
  • Decision hooks: “If you’re deciding X, here’s what changes under high entrenchment assumptions.”

That last part is essential. Decision-makers don’t need a black-box score. They need actionable deltas.
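
One way to enforce those traits is to make the output a small, typed structure that cannot be produced without drivers, a baseline comparison, and decision hooks. The field names below are assumptions for the sketch, not any fielded system's schema.

```python
# Sketch of a briefable warning output: every alert must carry its drivers,
# a baseline comparison, and decision hooks. Field names are assumptions.
from dataclasses import dataclass, field


@dataclass
class WarningBrief:
    headline: str
    drivers: list[str]         # "Risk increased because A, B, and C shifted."
    baseline_comparison: str   # "Resembles prior pre-escalation patterns."
    decision_hooks: list[str] = field(default_factory=list)

    def render(self) -> str:
        lines = [self.headline,
                 "Drivers: " + "; ".join(self.drivers),
                 "Baseline: " + self.baseline_comparison]
        lines += ["Decision hook: " + hook for hook in self.decision_hooks]
        return "\n".join(lines)


brief = WarningBrief(
    headline="Escalation risk: elevated",
    drivers=["entrenchment index up 0.12", "troop massing near border",
             "loyalty theater event held"],
    baseline_comparison="closer to prior pre-escalation quarters than to normal posture",
    decision_hooks=["If deciding Q1 force posture, assume near-zero internal constraint on the leader"],
)
print(brief.render())
```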

What defense teams should do differently in 2026 planning cycles

December 2025 is a good moment for recalibration because the lesson from 2022 has aged well: structural political variables don’t belong in the footnotes. They belong in the model.

Here are changes worth baking into 2026 force planning, intelligence requirements, and analytic tradecraft.

Update analytic tradecraft: “constraint” is not the default

Analysts should stop assuming domestic constraint is the baseline for authoritarian leaders with long tenure. Treat constraint as a hypothesis to be tested, not a given.

A practical checklist for high-entrenchment regimes:

  1. Who can credibly say “no” to the leader?
  2. How costly is elite coordination?
  3. How quickly can the leader punish defection?
  4. What’s the track record of elite removals?
  5. How controlled is the information environment?

If those answers point toward high discretion, raise the probability of extreme options.
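
A lightweight way to keep that discipline auditable is to record the checklist as explicit hypotheses with evidence for and against each one, rather than as unstated assumptions. The structure and the simple verdict rule below are illustrative only.

```python
# Sketch of treating "domestic constraint" as a testable hypothesis:
# record evidence for and against each checklist question.
# The example evidence and the verdict rule are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ConstraintHypothesis:
    question: str
    evidence_for: list[str] = field(default_factory=list)
    evidence_against: list[str] = field(default_factory=list)

    def verdict(self) -> str:
        if len(self.evidence_against) > len(self.evidence_for):
            return "constraint weak"
        if self.evidence_for:
            return "constraint plausible"
        return "untested"


checklist = [
    ConstraintHypothesis("Who can credibly say 'no' to the leader?",
                         evidence_against=["public humiliation of dissenting officials"]),
    ConstraintHypothesis("How controlled is the information environment?",
                         evidence_against=["independent outlets closed",
                                           "wartime censorship law enforced"]),
]
for item in checklist:
    print(item.question, "->", item.verdict())
```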

Use AI to pressure-test the “obvious” interpretation

When consensus forms around “they won’t,” that’s exactly when AI-driven red-teaming can help. Not because the machine is smarter, but because it can be unpopular without career risk.

Operationally, that means running:

  • scenario simulations with “low constraint leader” parameters
  • alternative utility functions (leader survival vs national welfare)
  • sensitivity analysis showing which assumptions dominate the forecast

If one assumption (like elite constraint) is doing most of the work, that’s a warning sign by itself.
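
Sensitivity analysis in particular can stay very simple and still be revealing. In the toy sketch below, each assumption is swung across a plausible range while the others stay fixed; the forecast function is a made-up stand-in and the numbers are illustrative.

```python
# Toy one-at-a-time sensitivity analysis: vary each assumption across a
# plausible range while holding the others fixed, and see which one moves
# the forecast the most. The forecast function is a made-up stand-in.
def forecast(elite_constraint: float, economic_cost: float,
             military_readiness: float) -> float:
    """Stand-in probability-of-extreme-action model (illustrative only)."""
    p = (0.5 * (1 - elite_constraint)
         + 0.2 * (1 - economic_cost)
         + 0.3 * military_readiness)
    return max(0.0, min(1.0, p))


baseline = {"elite_constraint": 0.7, "economic_cost": 0.8, "military_readiness": 0.6}
ranges = {"elite_constraint": (0.1, 0.9),
          "economic_cost": (0.5, 0.95),
          "military_readiness": (0.4, 0.8)}

for name, (low_value, high_value) in ranges.items():
    low = forecast(**{**baseline, name: low_value})
    high = forecast(**{**baseline, name: high_value})
    print(f"{name}: forecast swings by {abs(high - low):.2f}")
# If elite_constraint produces the largest swing, that single assumption is
# doing most of the work in the forecast, which is a warning sign by itself.
```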

Build an “extreme option watchlist” for entrenched leaders

War is one extreme option. Others matter just as much in national security:

  • rapid mobilization or partial mobilization decisions
  • sudden treaty withdrawals or basing realignments
  • escalatory cyber operations against civilian infrastructure
  • nuclear signaling shifts
  • internal crackdowns that change regional stability dynamics

An AI-enabled watchlist can connect structural signals to specific extreme-option playbooks, which supports faster, more disciplined response planning.
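
A first pass at such a watchlist can be as simple as a mapping from structural signals to the playbooks they should put under review. The signal names and mappings below are invented examples, not a vetted indicator set.

```python
# Sketch of an "extreme option watchlist": map structural signals to the
# response playbooks they should put under review. Mappings are invented examples.
WATCHLIST = {
    "mobilization_order_drafted": ["rapid mobilization", "partial mobilization"],
    "treaty_review_commission_formed": ["treaty withdrawal", "basing realignment"],
    "cyber_units_retasked": ["escalatory cyber operations against civilian infrastructure"],
    "nuclear_exercise_announced": ["nuclear signaling shift"],
    "mass_arrests_of_opposition": ["internal crackdown affecting regional stability"],
}


def triggered_playbooks(observed_signals: list[str]) -> dict[str, list[str]]:
    """Return the playbooks whose review should be prioritized this cycle."""
    return {signal: WATCHLIST[signal]
            for signal in observed_signals if signal in WATCHLIST}


print(triggered_playbooks(["cyber_units_retasked", "nuclear_exercise_announced"]))
```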

The real lesson from 2022: interests aren’t the whole story

The most useful sentence to carry forward is simple: the question isn’t only what a state’s interests are; it’s who gets to define them.

When a leader becomes entrenched, state interests can be reframed around personal worldview, legacy, and perceived historical destiny—then executed through institutions designed to comply.

For teams working on AI in defense and national security, this is a concrete assignment: build systems that don’t just count tanks and track exercises, but that also model political permission structures—tenure, elite control, repression signals, and information distortion.

If you’re responsible for early warning, collection priorities, or operational planning, a good next step is a focused review: Where do our models assume constraint, and what happens if we remove it? That one exercise often changes the risk picture more than another month of indicators.

The forward-looking question worth sitting with as we enter 2026: Which current threat assessments would flip if we treated leader entrenchment as a primary driver, not background context?