AI security can stop crypto theft by scoring identity and transaction risk in real time. Learn the practical controls that counter DPRK’s $2B playbook.
AI Defense Lessons From DPRK’s $2B Crypto Theft
$2.02 billion. That’s the estimated amount North Korea–linked threat actors stole in cryptocurrency during 2025—about 76% of the total stolen in service compromises this year, and part of a $3.4+ billion global total through early December. One exchange breach (Bybit) accounted for $1.5 billion of that figure.
If you run security for a financial platform, a Web3 company, or any enterprise with sensitive access pathways, this isn’t “just a crypto story.” It’s a loud reminder that state-sponsored teams are treating revenue-generating cybercrime like an operating model: infiltrate access, move fast, launder methodically, repeat.
This post is part of our AI in Cybersecurity series, and I’m going to take a clear stance: most organizations are still defending against 2025 threats with 2018 assumptions. The gap isn’t only tooling—it’s speed, correlation, and decisioning. AI helps because it’s built for those three things when deployed correctly.
What DPRK crypto theft in 2025 tells us about the threat model
Answer first: The 2025 numbers show that DPRK-linked groups aren’t “hacking harder”; they’re scaling repeatable access patterns—service compromise, insider-style infiltration, and predictable laundering workflows.
Chainalysis reported a 51% year-over-year increase in DPRK-linked crypto theft value, from $1.3B in 2024 to $2.02B in 2025, bringing the lower-bound cumulative estimate to $6.75B stolen over time. That scale only happens when a group has:
- Reliable initial access methods
- Strong operational security (OPSEC)
- A laundering pipeline that works under pressure
- The discipline to industrialize what works
Pattern 1: Big wins come from service compromise, not clever malware
The record share of service compromises matters. It implies the attackers are targeting what actually moves money: exchanges, custodians, wallet tooling, signing workflows, privileged admin paths, and the human systems around them.
In plain terms: they don’t need to beat your endpoint controls if they can beat your business process controls.
Pattern 2: “People hacking” is still the fastest route in
North Korea–linked operations have long used social engineering campaigns (like fake job recruiting) to drop malware and harvest credentials. In 2025, reporting also highlights “Wagemole” style tactics—placing fraudulent IT workers inside companies or recruiting intermediaries to scale access.
That’s a brutal reality for defenders: you can have strong perimeter security and still lose if your hiring, vendor onboarding, identity verification, and access governance are loose.
Pattern 3: Laundering is structured—and therefore detectable
A key detail in the source reporting is the multi-wave laundering pathway over roughly 45 days:
- Wave 1 (Days 0–5): Immediate layering via DeFi and mixing services
- Wave 2 (Days 6–10): Initial integration via exchanges, secondary mixers, cross-chain bridges
- Wave 3 (Days 20–45): Final integration to convert into fiat or other assets
Defenders often treat laundering as “after the fact.” I disagree. Laundering patterns create detection opportunities—especially for AI models trained on behavioral sequences rather than single alerts.
Could AI have stopped the $2B theft? Yes—if it’s aimed correctly
Answer first: AI can prevent or limit theft when it’s applied to identity risk, transaction anomalies, and security operations automation—not when it’s used as a thin layer over noisy alerts.
AI isn’t magic. It’s pattern recognition plus decision support at machine speed. For state-sponsored actors, speed is the whole game: get in, escalate, trigger high-value actions (withdrawals, key use, policy change), then disappear into obfuscation.
Here are three places where AI consistently earns its keep.
1) AI for identity-centric defense (where most heists really begin)
Crypto theft at this scale typically requires access to something privileged:
- Admin consoles
- Signing infrastructure
- Wallet policy configuration
- Deployment pipelines
- Customer support tooling that can bypass controls
AI helps by scoring identity behavior in context:
- Impossible travel and device drift: “Same user, new device fingerprint, new ASN, new geo, new browser automation signals.”
- Privilege escalation sequences: “Read-only role → new token created → role modified → new API key → withdrawal policy changed.”
- Unusual access timing: Off-hours admin actions correlated with high-value workflows.
What works in practice is entity behavior analytics that understands relationships between accounts, devices, roles, and actions. The goal isn’t to flag “suspicious logins.” The goal is to flag suspicious paths to power.
The core idea: if you can model how legitimate admins usually reach a sensitive action, you can catch attackers who take the shortest route.
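To make that concrete, here is a minimal sketch of path-to-power scoring: compare the action sequence that preceded a sensitive change against the sequences legitimate admins have historically taken to reach the same change. The event names, baseline format, and scoring rule are illustrative assumptions, not any product’s API.

```python
from collections import Counter
from typing import List, Tuple

def build_path_baseline(historical_paths: List[Tuple[str, ...]]) -> Counter:
    """Count how often each exact action sequence preceded a sensitive action."""
    return Counter(historical_paths)

def path_rarity_score(baseline: Counter, observed_path: Tuple[str, ...]) -> float:
    """Return a 0..1 risk score: 1.0 means this path to power was never seen before."""
    total = sum(baseline.values())
    if total == 0:
        return 1.0
    return 1.0 - (baseline.get(observed_path, 0) / total)

# Legitimate admins usually pass through review and approval steps.
history = [
    ("login_mfa", "open_ticket", "peer_approval", "withdrawal_policy_change"),
    ("login_mfa", "open_ticket", "peer_approval", "withdrawal_policy_change"),
    ("login_mfa", "change_window", "peer_approval", "key_rotation"),
]
baseline = build_path_baseline(history)

# An attacker with stolen credentials takes the shortest route to the same action.
suspicious = ("login_mfa", "new_api_key", "withdrawal_policy_change")
print(path_rarity_score(baseline, suspicious))  # 1.0 -> never-seen path to a sensitive action
```

A production version would score partial matches and role context rather than exact sequences, but the principle is the same: rarity of the route, not just the destination.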
2) AI for fraud and transaction anomaly detection (the money-moving layer)
Most teams separate “security” and “fraud.” That split is convenient for org charts, but attackers don’t care. High-impact theft is both.
AI-based anomaly detection is strong when you define the right features:
- Withdrawal graph anomalies: new destination clusters, new hop patterns, sudden fan-out.
- Velocity + novelty: “Large amount” isn’t enough—look for large + new address + unusual time + new device + policy change.
- Bridge/mixer proximity: detection based on behavioral similarity to known laundering waves, not just static deny-lists.
A practical stance: you should treat every high-value transfer as a scored event where security signals (identity, device, admin changes) and financial signals (amount, destination, velocity) meet in one risk engine.
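As a toy example of that stance, the sketch below turns one withdrawal into a set of combined risk flags instead of judging it on amount alone. The field names and thresholds are illustrative assumptions, not any exchange’s schema.

```python
from dataclasses import dataclass

@dataclass
class Withdrawal:
    amount_usd: float
    destination_seen_before: bool
    hour_utc: int
    device_seen_before: bool
    policy_changed_last_24h: bool

def withdrawal_risk_flags(w: Withdrawal, large_threshold_usd: float = 250_000) -> list[str]:
    """Collect the 'large + new + unusual' signals for one transfer."""
    flags = []
    if w.amount_usd >= large_threshold_usd:
        flags.append("large_amount")
    if not w.destination_seen_before:
        flags.append("new_destination")
    if w.hour_utc < 6:  # off-hours window is an assumption; tune to your baseline
        flags.append("unusual_time")
    if not w.device_seen_before:
        flags.append("new_device")
    if w.policy_changed_last_24h:
        flags.append("recent_policy_change")
    return flags

# A large transfer alone might be routine; several flags together is the
# combination worth holding for review.
w = Withdrawal(1_500_000, False, 3, False, True)
print(withdrawal_risk_flags(w))
```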
3) AI to compress incident response time (because minutes matter)
A $1.5B heist doesn’t happen slowly. Even when laundering stretches for weeks, the critical compromise window is short.
AI can reduce response time by:
- Alert clustering: merging low-level signals into one coherent incident narrative
- Automated triage: “These 7 alerts are the same campaign; this one is the pivot”
- Suggested containment actions: step-up auth, revoke tokens, freeze withdrawals, rotate keys
The win isn’t “fewer alerts.” The win is faster, higher-confidence decisions.
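A minimal sketch of the clustering idea, assuming a made-up alert shape: group alerts that share an entity so seven separate signals become one incident narrative. Real entity resolution also links shared devices, sessions, and destination addresses; this only groups by user to show the shape of the output.

```python
from collections import defaultdict

alerts = [
    {"id": 1, "type": "impossible_travel", "user": "ops-admin-7", "device": "dev-a"},
    {"id": 2, "type": "new_api_key",       "user": "ops-admin-7", "device": "dev-b"},
    {"id": 3, "type": "policy_change",     "user": "ops-admin-7", "device": "dev-b"},
    {"id": 4, "type": "failed_login",      "user": "support-3",   "device": "dev-c"},
]

def cluster_by_user(alerts: list[dict]) -> dict[str, list[dict]]:
    """Merge alerts that share a user into one candidate incident."""
    incidents = defaultdict(list)
    for alert in alerts:
        incidents[alert["user"]].append(alert)
    return incidents

for user, related in cluster_by_user(alerts).items():
    kinds = [a["type"] for a in related]
    print(f"Incident for {user}: {len(related)} alerts -> {kinds}")
```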
The overlooked risk: IT worker infiltration is an access-control failure
Answer first: The “Wagemole” style infiltration model is dangerous because it bypasses malware detection and goes straight to legitimate credentials and trusted remote work patterns.
The reporting includes a real-world example of a U.S.-based participant facilitating North Korean nationals by sharing access to secured roles. That tactic generalizes well beyond government agencies or crypto exchanges.
Here’s the uncomfortable part: once an attacker operates through a “valid” employee identity, many defenses stop firing. That’s why AI needs to focus on behavioral inconsistency, not signature-based assumptions.
What to change (and where AI fits)
If you’re serious about stopping infiltration-driven compromise, prioritize these controls:
- Stronger identity proofing for hires and contractors
  - Cross-checks for synthetic identities and reuse patterns
  - AI-assisted document and liveness verification (with human review for edge cases)
- Just-in-time privileges + time-boxed access
  - No standing admin access for roles that can move funds or alter signing policies
  - AI risk scoring to approve/deny privilege grants in real time (see the sketch after this list)
- Device and remote tooling governance
  - Block or tightly control remote access tools unless explicitly approved
  - Use AI to detect remote session anomalies (new host, odd latency patterns, automation)
- Continuous access evaluation
  - “Logged in” shouldn’t mean “trusted for the next 8 hours”
  - AI-based risk should continuously adapt based on actions taken
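As a rough illustration of the just-in-time pattern, here is a minimal sketch of a privilege-grant decision gated by a live risk score, with every grant time-boxed. The function name, threshold, and TTL are assumptions for illustration, not a specific IAM product’s API.

```python
from datetime import datetime, timedelta, timezone

def decide_privilege_grant(requester_risk: float, requested_role: str,
                           max_risk: float = 0.3,
                           ttl: timedelta = timedelta(minutes=30)) -> dict:
    """Low-risk requests get a time-boxed grant; everything else routes to a human."""
    if requester_risk > max_risk:
        return {"role": requested_role, "approved": False, "route": "manual_review"}
    return {"role": requested_role, "approved": True,
            "expires_at": datetime.now(timezone.utc) + ttl}

print(decide_privilege_grant(requester_risk=0.12, requested_role="signing-policy-editor"))
print(decide_privilege_grant(requester_risk=0.62, requested_role="signing-policy-editor"))
```

The same check re-runs as the session progresses, which is what turns “logged in” into something closer to continuously evaluated trust.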
A practical AI playbook for exchanges, fintech, and Web3 firms
Answer first: The best AI security programs start by protecting “money-moving workflows” with risk scoring, then expand outward to identity, cloud, and developer supply chain.
If you want a straightforward implementation plan that delivers results (not slideware), here’s what I’ve found works.
Step 1: Map your “blast radius” workflows
List the workflows that can cause irrecoverable damage:
- Withdrawal approval and signing
- Key management and rotation
- Policy changes (limits, whitelists, allowlists)
- Admin creation and role modification
- Treasury transfers and bridge usage
Then define what “normal” looks like for each workflow (who, what, when, where, how).
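One lightweight way to capture that “normal” definition is a baseline record the risk engine can diff live activity against. Every field below is an illustrative assumption, not a standard schema; the point is that who/what/when/where/how becomes machine-checkable instead of tribal knowledge.

```python
# Hypothetical baseline for one blast-radius workflow.
WITHDRAWAL_SIGNING_BASELINE = {
    "workflow": "withdrawal_approval_and_signing",
    "who": ["treasury-ops role", "two named approvers (four-eyes rule)"],
    "what": {"max_single_transfer_usd": 500_000, "destinations": "allowlisted only"},
    "when": {"hours_utc": (8, 20), "days": "business days"},
    "where": ["corporate VPN ranges", "managed devices only"],
    "how": ["hardware-key MFA", "ticket reference required", "no remote desktop tools"],
}
```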
Step 2: Build a unified risk score (identity + transaction + environment)
A useful AI risk score combines:
- Identity signals: role, privilege changes, auth strength, device trust
- Transaction signals: amount, destination novelty, velocity, graph anomalies
- Environment signals: cloud changes, CI/CD events, wallet infra health
This is where many teams fail: they keep these scores in different systems and expect humans to correlate them. That’s exactly what attackers exploit.
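A minimal sketch of what “one risk engine” can look like: each domain produces its own 0-to-1 sub-score, and a single blend decides the action. The weights and threshold bands are assumptions you would tune to your own history, not recommended values.

```python
def unified_risk(identity: float, transaction: float, environment: float,
                 weights: tuple[float, float, float] = (0.4, 0.4, 0.2)) -> float:
    """Blend per-domain 0..1 sub-scores into one 0..1 decision score."""
    w_i, w_t, w_e = weights
    return min(1.0, w_i * identity + w_t * transaction + w_e * environment)

# A modest identity anomaly plus a modest transaction anomaly can outrank
# a single loud alert from either system alone.
score = unified_risk(identity=0.55, transaction=0.6, environment=0.1)
print(round(score, 2))  # ~0.48 -> step-up auth / temporary hold territory
```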
Step 3: Automate the “safe brakes”
Your containment actions should be safe, reversible, and fast:
- Step-up authentication for risky admin actions
- Temporary withdrawal holds for anomalous transfers
- Token revocation and session invalidation
- Emergency rotation for keys and signing policies
Make AI the trigger, but keep humans in the loop for irreversible moves.
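Here is a rough sketch of that split, assuming illustrative action names and thresholds: reversible brakes fire automatically as risk rises, while irreversible moves are only queued for human approval.

```python
def containment_plan(risk_score: float) -> dict:
    """Reversible actions auto-fire; irreversible ones wait for a human decision."""
    plan = {"auto": [], "needs_human_approval": []}
    if risk_score >= 0.5:
        plan["auto"] += ["step_up_auth", "hold_withdrawal"]
    if risk_score >= 0.8:
        plan["auto"].append("revoke_tokens_and_sessions")
        plan["needs_human_approval"] += ["rotate_signing_keys", "freeze_all_withdrawals"]
    return plan

print(containment_plan(0.85))
```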
Step 4: Train on sequences, not single events
The laundering waves described earlier are a perfect example of why sequence modeling matters.
Single indicators fail because:
- Mixers and bridges change
- Infrastructure rotates
- Addresses churn
Sequence signals hold up because behaviors repeat: rapid dispersal, cross-chain hops, timed integration steps.
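To show what detecting a sequence can mean at its simplest, here is a sketch that looks for the ordered behavior from the laundering waves above (rapid dispersal, then cross-chain hops, then integration) inside a 45-day window, regardless of which specific mixer or bridge is used. The event labels and matching logic are illustrative assumptions, not a production model, which would score partial and noisy matches rather than exact ones.

```python
from datetime import datetime, timedelta

PATTERN = ["rapid_fan_out", "cross_chain_hop", "exchange_integration"]

def matches_laundering_sequence(events: list[tuple[datetime, str]],
                                window: timedelta = timedelta(days=45)) -> bool:
    """events: (timestamp, behavior_label) pairs sorted by time."""
    idx, start = 0, None
    for ts, label in events:
        if label == PATTERN[idx]:
            if start is None:
                start = ts
            if ts - start > window:
                return False
            idx += 1
            if idx == len(PATTERN):
                return True
    return False

events = [
    (datetime(2025, 3, 1), "rapid_fan_out"),
    (datetime(2025, 3, 8), "cross_chain_hop"),
    (datetime(2025, 4, 2), "exchange_integration"),
]
print(matches_laundering_sequence(events))  # True -> the wave structure, not the infrastructure
```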
“People also ask” (and the answers you can use internally)
Is AI security effective against state-sponsored hackers?
Yes—when it’s focused on behavior and decision speed, especially around identity and high-impact workflows. It’s less effective as a cosmetic add-on to a legacy SIEM workflow.
What’s the biggest mistake companies make after a crypto theft?
They over-invest in perimeter fixes and under-invest in privileged access controls and transaction risk controls. The next breach hits the same high-value workflows.
Do non-crypto companies need to care?
Absolutely. The same playbook—credential theft, insider-style access, process abuse—hits SaaS, manufacturing, defense, and healthcare. Crypto just shows the damage in a clean number.
Where this goes next for AI in cybersecurity
DPRK-linked theft at this scale is a stress test for every assumption we have about trust: trusted employees, trusted devices, trusted workflows, trusted platforms.
AI doesn’t replace fundamentals like segmentation, least privilege, secure key management, and incident readiness. But it does one thing humans can’t do reliably at scale: connect weak signals into a high-confidence story fast enough to stop the money.
If you’re planning your 2026 security roadmap right now, make one decision that’s hard to walk back: treat identity and transaction risk as a single, AI-scored control plane for your most sensitive workflows. The attackers already operate that way.
What would change in your environment if every privileged action and high-value transfer had to “pass” an AI risk check in real time?