Iran’s “dormant” Prince of Persia APT is active again. Learn how AI-powered threat detection spots stealthy C2, DGAs, and quiet espionage.

AI Detection vs Iran’s “Dormant” APTs in 2025
Most security teams still treat “quiet” threat actors as “gone.” Prince of Persia (aka Infy), one of the oldest known Iranian APTs, proves why that assumption keeps failing. Researchers now say the group has operated in some form since 2004—and despite years of low public visibility, it’s been actively spying on dissidents and targets across multiple countries.
This matters in defense, national security, and any organization that supports vulnerable communities (media, NGOs, universities, government contractors). The lesson isn’t just “Iran has capable hackers.” It’s that long-lived espionage programs optimize for patience, stealth, and infrastructure resilience—exactly the conditions where traditional alerting and signature-based controls struggle.
Here’s the practical angle for this installment of our AI in Defense & National Security series: when an APT invests in stealthy command-and-control (C2), domain generation, and selective tooling, AI-powered threat detection becomes less of a nice-to-have and more of the only scalable way to spot weak signals early.
“Dormant” doesn’t mean inactive—it means hard to see
A realistic definition of an advanced persistent threat isn’t “they’re advanced.” It’s “they’re persistent.” APT operators can go quiet for long stretches, rotate infrastructure, and reduce noisy operations—without stopping.
Recent reporting on Prince of Persia highlights a pattern I’ve seen repeatedly: the most dangerous activity often looks like “nothing” from the perspective of conventional SOC metrics. Few malware hits. Minimal detections. Low-volume beaconing. A victim set that’s intentionally narrow.
Why state-backed operators can afford to play the long game
State-aligned teams don’t need immediate monetization. They don’t need to hit 10,000 endpoints a day. They can:
- Spend months refining phishing lures and delivery methods
- Run triage-style first-stage implants and abandon most infections
- Build C2 designs that survive takedowns and research pressure
- Prioritize intelligence collection over disruption
That operating model breaks common “success metrics” like counting malware blocks or tallying prevented exploits. You need detection approaches that treat small anomalies as first-class signals.
What makes Prince of Persia a modern problem (even with old roots)
The most instructive part of the recent research isn’t nostalgia about a long-running group. It’s the way their tooling and infrastructure choices reduce defender leverage.
Two custom malware families are central to the reporting: Foudre (lightweight first stage) and Tonnerre (heavier espionage tooling). Think of this as a disciplined pipeline: identify promising victims cheaply, then escalate only where it’s worth the risk.
First-stage triage malware is built to waste your time
Foudre’s job is simple: collect basic host information, check if the victim is valuable, and either escalate or self-delete.
That “self-destruct on non-targets” behavior is a quiet but important point: it reduces forensic artifacts and shrinks your sample set, which makes detection engineering harder. It also means many victims may never realize they were touched.
From a defender’s perspective, this creates a mismatch:
- You’re trying to find indicators across the whole enterprise
- The attacker is willing to leave almost no trace on 90% of endpoints
AI-based endpoint and network analytics helps here by learning what “normal” looks like per user/device, then flagging low-frequency behaviors that signatures ignore.
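To make that concrete, here is a minimal Python sketch of per-host baselining: count which processes each host normally runs, then flag anything effectively unseen on that host. The event shape, hostnames, and threshold are illustrative assumptions, not details from the reporting.

```python
# Minimal sketch: flag process behaviors that are rare for a given host.
# Assumes a flat list of (host, process_name) observations pulled from EDR
# telemetry; field names and thresholds are illustrative, not from the report.
from collections import Counter, defaultdict

def build_baseline(events):
    """Count how often each process has been seen per host."""
    baseline = defaultdict(Counter)
    for host, process in events:
        baseline[host][process] += 1
    return baseline

def flag_rare(baseline, new_events, max_prior_sightings=0):
    """Return new events whose process was effectively unseen on that host."""
    alerts = []
    for host, process in new_events:
        if baseline[host][process] <= max_prior_sightings:
            alerts.append((host, process))
    return alerts

if __name__ == "__main__":
    history = [("hr-laptop-7", "outlook.exe")] * 500 + [("hr-laptop-7", "excel.exe")] * 120
    baseline = build_baseline(history)
    today = [("hr-laptop-7", "excel.exe"), ("hr-laptop-7", "rundll32.exe")]
    print(flag_rare(baseline, today))  # [('hr-laptop-7', 'rundll32.exe')]
```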
Resilient C2: Telegram and selective key delivery
Tonnerre’s newer behavior reportedly includes use of the Telegram API for command-and-control via private groups. Using consumer platforms for C2 isn’t new; the implementation detail is.
Instead of embedding an API key inside the malware (a common mistake that lets defenders extract and hunt), the actor reportedly retrieves the key only for specific victims. That reduces exposed secrets, reduces reuse, and limits what reverse engineers can pivot on.
For defenders, that pushes you toward behavioral questions:
- Why is this host initiating Telegram-related traffic when it never has before?
- Why is a finance workstation performing periodic encrypted uploads at odd hours?
- Why did a user who never runs portable executables suddenly spawn one from a document?
Those are exactly the questions modern AI in cybersecurity is good at answering—if you feed it the right telemetry.
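As one hedged illustration of the first question, the sketch below surfaces “first contact” between a host and a destination it has never talked to before. The connection-log format, hostnames, and the api.telegram.org example are assumptions for illustration only, not indicators from the reporting.

```python
# Minimal sketch: surface "first contact" with destinations a host has never
# talked to before (e.g., sudden Telegram API traffic from a workstation).
from collections import defaultdict

def first_seen_destinations(history, window):
    """history/window: iterables of (host, dest_domain) connection records."""
    known = defaultdict(set)
    for host, dest in history:
        known[host].add(dest)
    hits = []
    for host, dest in window:
        if dest not in known[host]:
            hits.append((host, dest))
            known[host].add(dest)  # avoid duplicate alerts within the window
    return hits

if __name__ == "__main__":
    history = [("finance-ws-03", "sharepoint.example.com"),
               ("finance-ws-03", "update.example.com")]
    window = [("finance-ws-03", "api.telegram.org"),
              ("finance-ws-03", "update.example.com")]
    print(first_seen_destinations(history, window))
    # [('finance-ws-03', 'api.telegram.org')]
```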
Cryptographic C2 verification: when sinkholing stops working
One of the more unusual techniques described is RSA signature verification tied to a domain generation algorithm (DGA) list. The concept is straightforward: even if a defender predicts the next C2 domains and registers them (sinkholing), the malware won’t trust them unless the server proves authenticity with a valid signature.
That has two consequences:
- Infrastructure disruption becomes harder. Defensive domain grabs don’t automatically translate into visibility.
- Victim traffic becomes less useful for defenders. You may see DNS lookups, but you can’t easily coerce the implant into talking to you.
This is where AI-driven detection has to shift from “catch the payload” to “catch the shape of the operation.” DGA behavior, periodic failed connections, anomalous DNS entropy, and unusual process trees become the hunt surface.
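Here is a small sketch of one slice of that hunt surface: scoring how regular a host’s outbound connection timing is, since low-jitter intervals are what quiet beaconing tends to look like. The timestamps, thresholds, and minimum event count are placeholder assumptions.

```python
# Minimal sketch: flag connection timing that is suspiciously regular for a
# (host, destination) pair, which is what low-and-slow beaconing tends to
# look like. Timestamps are epoch seconds; the jitter threshold is an assumption.
import statistics

def looks_like_beacon(timestamps, min_events=6, max_jitter_ratio=0.1):
    """True if inter-arrival times are nearly constant (low jitter)."""
    if len(timestamps) < min_events:
        return False
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap == 0:
        return False
    jitter = statistics.pstdev(gaps) / mean_gap
    return jitter <= max_jitter_ratio

if __name__ == "__main__":
    # Outbound attempts every ~1800 seconds with a little jitter.
    regular = [0, 1801, 3599, 5402, 7200, 9001, 10799]
    bursty = [0, 40, 40, 2000, 2010, 9000, 9005]
    print(looks_like_beacon(regular))  # True
    print(looks_like_beacon(bursty))   # False
```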
The bigger national security lesson: resilience often comes from ecosystem support
The reporting also underscores a hard reality in national security cyber defense: some actors benefit from local ecosystem advantages that private-sector defenders can’t replicate.
One historic example tied to this group: when researchers sinkholed its infrastructure, telecom-level intervention in the actor’s home country reportedly redirected victim traffic away from the sinkhole and back toward the operator. Whether you’re defending critical infrastructure or a defense supply chain, the implication is blunt:
If an adversary can influence routing, hosting, or telecom controls in their region, you should assume they can outlast simple takedowns.
That’s not an argument for despair. It’s an argument for building detection programs that don’t depend on the attacker making obvious mistakes.
Where AI-powered threat detection actually helps (and where it doesn’t)
AI isn’t a magic detector that “finds APTs.” It’s a force multiplier for teams that instrument their environments and know what they’re looking for. The win is speed and scale: correlating weak signals across endpoints, identities, email, and network activity.
1) AI for anomaly detection across identities and endpoints
APT campaigns targeting dissidents and high-risk individuals often involve account takeover attempts, credential theft, and long-term mailbox monitoring. An AI-driven user and entity behavior analytics (UEBA) program can flag:
- New geographies or impossible travel patterns
- Sudden changes in MFA behavior (push fatigue, repeated denies, new device enrollments)
- Abnormal mailbox rules (auto-forwarding, hidden rules, mass search/export patterns)
- Rare admin actions performed by non-admin identities
This is especially relevant in December: travel, holiday schedules, and staffing gaps create “cover” for anomalies. AI models that baseline per-user behavior (rather than global averages) are far better at spotting abuse during seasonal noise.
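A minimal sketch of that idea, assuming simple sign-in records with user, country, and hour fields (the field names and scoring weights are illustrative assumptions, not a vendor’s UEBA model):

```python
# Minimal sketch: score sign-ins against a per-user baseline instead of a
# global average. Field names and weights are illustrative assumptions.
from collections import defaultdict

class LoginBaseline:
    def __init__(self):
        self.countries = defaultdict(set)
        self.hours = defaultdict(set)

    def learn(self, user, country, hour):
        self.countries[user].add(country)
        self.hours[user].add(hour)

    def score(self, user, country, hour):
        """Higher score = more unusual for this specific user."""
        score = 0
        if country not in self.countries[user]:
            score += 2  # never seen this geography for this user
        if hour not in self.hours[user]:
            score += 1  # outside the user's normal working hours
        return score

if __name__ == "__main__":
    b = LoginBaseline()
    for h in range(8, 18):
        b.learn("j.reporter", "NL", h)
    print(b.score("j.reporter", "NL", 10))  # 0: normal for this user
    print(b.score("j.reporter", "IR", 3))   # 3: new country, unusual hour
```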
2) AI to detect malicious document delivery and execution chains
The report describes a tactic that should make every defender flinch: malware delivered as an executable inside an Excel file and not flagged by common scanning.
You don’t beat that with “one more signature.” You beat it by detecting execution relationships:
- Office application spawning a suspicious child process
- Unusual use of rundll32, regsvr32, PowerShell, or WMI from document contexts
- Newly dropped binaries executed from user-writable paths
Modern EDRs already collect much of this; AI helps prioritize what matters when you’re drowning in process events.
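For example, a stripped-down filter over EDR process events might look like the sketch below. The field names, path conventions, and LOLBin list are assumptions; adapt them to whatever your tooling actually exports.

```python
# Minimal sketch: filter EDR process events for Office applications spawning
# suspicious children. Event fields and path hints are illustrative assumptions.
OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe", "outlook.exe"}
SUSPICIOUS_CHILDREN = {"rundll32.exe", "regsvr32.exe", "powershell.exe",
                       "wscript.exe", "cmd.exe", "mshta.exe"}
USER_WRITABLE_HINTS = ("\\appdata\\", "\\temp\\", "\\downloads\\")

def is_suspicious(event):
    parent = event["parent_image"].lower().rsplit("\\", 1)[-1]
    child_path = event["image"].lower()
    child = child_path.rsplit("\\", 1)[-1]
    if parent not in OFFICE_PARENTS:
        return False
    if child in SUSPICIOUS_CHILDREN:
        return True
    # Executables launched from user-writable locations are also worth a look.
    return child.endswith(".exe") and any(h in child_path for h in USER_WRITABLE_HINTS)

if __name__ == "__main__":
    event = {"parent_image": "C:\\Program Files\\Microsoft Office\\EXCEL.EXE",
             "image": "C:\\Users\\a\\AppData\\Local\\Temp\\update.exe"}
    print(is_suspicious(event))  # True
```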
3) AI for DGA and covert C2 pattern recognition
DGA-based infrastructure and signature-verified C2 push defenders toward network-level and DNS analytics. AI models can flag:
- High-entropy domain names and unusual NXDOMAIN bursts
- Periodic beacon timing that doesn’t match installed software
- Rare destinations for a given host group (e.g., HR laptops suddenly behaving like dev workstations)
- Encrypted exfil patterns (size, timing, and frequency anomalies)
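Two of those features are cheap to compute yourself. The sketch below shows domain label entropy and NXDOMAIN burst counting; the thresholds are illustrative assumptions and would normally be learned from your own traffic rather than hard-coded.

```python
# Minimal sketch: two cheap DNS features that feed DGA hunting, label entropy
# and NXDOMAIN burst counts per host. Thresholds are illustrative assumptions.
import math
from collections import Counter

def label_entropy(domain):
    """Shannon entropy of the leftmost label (rough DGA-ness proxy)."""
    label = domain.lower().split(".")[0]
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def nxdomain_bursts(dns_events, threshold=25):
    """Hosts with an unusual number of failed lookups in the window."""
    failures = Counter(host for host, rcode in dns_events if rcode == "NXDOMAIN")
    return {host: n for host, n in failures.items() if n >= threshold}

if __name__ == "__main__":
    print(round(label_entropy("mail.example.com"), 2))           # low-ish
    print(round(label_entropy("q7x2kf9vz1hw3.example.com"), 2))  # higher
    events = [("ws-1", "NXDOMAIN")] * 30 + [("ws-2", "NOERROR")] * 10
    print(nxdomain_bursts(events))  # {'ws-1': 30}
```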
Where AI won’t help: if you have no DNS logging, no endpoint telemetry, and no identity audit trail, there’s nothing for models to learn from. AI needs data, governance, and a feedback loop.
Practical defense playbook: what to do this quarter
If you’re responsible for security in government-adjacent environments or organizations at risk of political targeting, treat this as a checklist moment.
Baseline controls (do these even without AI)
- Harden Office execution paths: block or heavily restrict child process spawning from Office apps where feasible
- Enforce phishing-resistant MFA for high-risk users (admins, executives, journalists, researchers)
- Centralize DNS logs and retain them long enough to support slow-burn investigations
- Isolate high-risk users (separate devices or browser profiles for activism/research vs corporate access)
AI-enabled upgrades that pay off fast
- Risk-based identity monitoring: dynamic risk scoring for logins, device enrollments, and mailbox changes
- Automated correlation rules: stitch together “weak signals” (odd DNS + Office spawn + new persistence) into one incident
- Behavioral allowlists: learn what is normal for specific teams (finance, HR, engineering) and alert on cross-role drift
- Triage automation: use AI copilots to summarize incident timelines and recommend containment steps so analysts move faster
A simple, high-signal detection hypothesis to start with
If you want one hunt that aligns with the tactics described, start here:
- Identify endpoints where Office applications spawn executables from user directories
- Correlate those endpoints with new outbound DNS patterns (high-entropy domains, repeated failures)
- Add identity context: did the user recently change MFA methods, reset passwords, or receive unusual email attachments?
This is the “AI advantage” in practice: not replacing analysts, but helping them connect events that are individually unremarkable.
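A hedged sketch of that correlation step, assuming you already emit the three hunt signals above as simple (host, signal) pairs; the signal names and weights are invented for illustration:

```python
# Minimal sketch: stitch weak per-host signals into one score so a single
# indicator stays quiet but a cluster gets surfaced for triage.
from collections import defaultdict

WEIGHTS = {
    "office_spawned_exe": 3,    # Office app launched a binary from a user path
    "new_high_entropy_dns": 2,  # first-seen, DGA-looking lookups
    "identity_change": 2,       # recent MFA/password/mailbox-rule change
}

def correlate(signals, alert_threshold=5):
    """signals: iterable of (host, signal_name). Returns hosts worth triage."""
    scores = defaultdict(int)
    seen = defaultdict(set)
    for host, name in signals:
        if name in WEIGHTS and name not in seen[host]:
            scores[host] += WEIGHTS[name]
            seen[host].add(name)
    return {h: s for h, s in scores.items() if s >= alert_threshold}

if __name__ == "__main__":
    signals = [("rsrch-ws-11", "office_spawned_exe"),
               ("rsrch-ws-11", "new_high_entropy_dns"),
               ("hr-laptop-02", "identity_change")]
    print(correlate(signals))  # {'rsrch-ws-11': 5}
```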
People also ask: “How do you detect an APT that stays quiet?”
You detect quiet APT activity by focusing on behavioral deviations, not malware labels. The strongest signals typically show up as:
- Identity anomalies (logins, MFA, mailbox rules)
- Execution anomalies (process trees, persistence, LOLBins)
- Network anomalies (DNS patterns, beacon timing, rare destinations)
When you combine these signals, the actor’s stealth tactics become less effective. They can hide one artifact; they can’t easily hide the operational footprint across your environment.
What this means for AI in Defense & National Security
State-sponsored cyber espionage isn’t slowing down in 2025—it’s getting more selective and more durable. Prince of Persia is a useful case study because it shows what happens when a mature actor invests in staying power: cryptographic trust checks, selective C2 secrets, and patient victim triage.
If you’re trying to build the case for a security program upgrade, the message to stakeholders is simple and defensible: the threat isn’t only the “big breach.” It’s the quiet access that sits in your environment while everyone’s watching the loud actors.
A practical next step is an assessment that maps your telemetry coverage (endpoint, identity, DNS, email) to AI-ready detection use cases, then prioritizes the two or three hunts you can operationalize in 30 days. Once you have repeatable hunts, automation and AI actually compound your advantage.
What would your team see first if a “dormant” APT started triaging users inside your environment tomorrow—an alert, or a blind spot?