DPRK-linked hackers stole $2.02B in 2025. Learn how AI-driven cybersecurity detects identity and transaction anomalies early—before funds disappear.

Stop Crypto Heists: AI vs. $2B DPRK Threats
North Korea–linked threat actors have stolen at least $2.02 billion in cryptocurrency in 2025, out of more than $3.4 billion stolen globally from January through early December. That’s not a niche “crypto problem.” It’s a loud signal that state-backed teams are treating digital assets—and the companies that touch them—as a repeatable revenue stream.
Most companies get this wrong: they respond as if these are isolated incidents that can be solved by a one-time hardening sprint or a new checklist. The pattern behind 2025’s numbers says the opposite. The attackers are industrialized, patient, and excellent at finding the one workflow your controls don’t cover.
This post is part of our AI in Cybersecurity series, and I’m going to take a stance: if you’re defending crypto infrastructure (or any business with crypto exposure), AI-driven cybersecurity isn’t a “nice-to-have.” It’s the practical way to keep up with the speed, scale, and deception of state-sponsored operations.
What the $2.02B figure really tells defenders
The main takeaway isn’t just “crypto theft is up.” The useful takeaway is why it’s up—and what that implies for your security model.
Industry tracking puts the year-over-year increase in global cryptocurrency theft at 51% for 2025, with DPRK-linked actors responsible for roughly 59% of the total ($2.02B of $3.4B). When one cluster of actors dominates a global crime category, it usually means three things:
- Repeatable playbooks work. The same methods (with minor variations) keep succeeding against different organizations.
- Operational capacity is high. These groups can run parallel campaigns: social engineering, malware, laundering, and re-entry.
- Deterrence is low. If the ROI wasn’t there, the numbers wouldn’t look like this.
If you run an exchange, custodian, fintech, Web3 company, or even a traditional enterprise that holds crypto on treasury, your risk isn’t “getting hacked.” Your risk is being selected because your workflows are hackable—and because attackers can profit quickly.
The hidden exposure: not just wallets
Even if you don’t custody assets, you may still be in the blast radius:
- Payroll in stablecoins, vendor payments, or customer refunds
- Treasury positions (BTC/ETH) held for “optional upside”
- Partnerships with custodians, liquidity providers, or payment processors
- Developers with access to signing systems, CI/CD, or secrets managers
State-sponsored groups don’t need you to be a crypto-native company. They need you to be one step from something that is.
How state-sponsored crypto theft actually happens (and why controls miss it)
Most large crypto thefts aren’t “Hollywood hacks” where someone brute-forces a private key. They’re multi-stage operations that combine identity, endpoints, cloud, and transaction flows.
Here’s a realistic chain I’ve seen versions of across incident reports and postmortems:
1) Compromise the human layer first
State-aligned crews are disciplined about targeting people with access: engineers, finance operators, DevOps, support staff, even third-party contractors.
Common entry patterns:
- Highly tailored phishing that mimics internal tooling
- “Recruiter” outreach leading to malware-laced assessments
- Token/session hijacking from a managed endpoint
- MFA fatigue or helpdesk social engineering
Why classic defenses fail: secure email gateways and basic endpoint rules are tuned for volume malware, not bespoke lure content and long-game social engineering.
2) Move laterally to keys, signing, and admin planes
Once inside, the goal is rarely “steal data.” It’s to reach:
- HSM/key management workflows
- hot wallet infrastructure
- cloud IAM roles that can modify CI/CD, secrets, or policies
- build pipelines that can inject code into wallet services
Why classic defenses fail: many organizations monitor the production app heavily but under-monitor the systems that deploy the app and the identities that can change the rules.
3) Execute theft as a “valid” transaction
The most painful part: the final action often looks legitimate. A signed transaction is still signed, even if an attacker triggered it.
Why classic defenses fail: rule-based detection struggles with “authorized-but-malicious” behavior—especially if thresholds, allowlists, or approvals were quietly altered upstream.
4) Launder at machine speed
The laundering phase is optimized for time: split funds, hop chains, route through mixers/bridges, and cash out through networks of accounts.
Why classic defenses fail: by the time an analyst sees the alert, the funds have moved multiple times. Manual review can’t keep pace.
Where AI-driven cybersecurity earns its keep
AI doesn’t replace security fundamentals. It fixes the part that humans can’t do well: continuous correlation of weak signals across identities, endpoints, cloud, and transactions—fast enough to stop the second and third steps.
Here are the most valuable AI applications for defending against state-sponsored crypto theft.
AI for anomaly detection across identities, endpoints, and cloud
The fastest wins come from behavioral baselining that treats identity and admin actions as first-class signals.
What AI-driven anomaly detection can flag earlier than rule sets:
- A developer account using admin APIs it never touched before
- An unusual sequence: new OAuth app → token grant → policy change → secret export
- “Normal” logins but with abnormal session characteristics (impossible travel isn’t enough)
- Subtle privilege escalation patterns across cloud IAM
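To make the first bullet concrete, here’s a deliberately minimal sketch of behavioral baselining: treat an identity’s baseline as “the set of actions it has performed before” and flag anything novel. Real systems also model frequency, timing, and peer groups; the identity and action names below are hypothetical.

```python
from collections import defaultdict

def build_baseline(events):
    """Map each identity to the set of actions seen during the baseline window."""
    baseline = defaultdict(set)
    for identity, action in events:
        baseline[identity].add(action)
    return baseline

def flag_novel_actions(baseline, new_events):
    """Return events where an identity performs an action absent from its baseline."""
    return [(i, a) for i, a in new_events if a not in baseline.get(i, set())]

# Illustrative identities and action names (not a real schema).
baseline = build_baseline([
    ("dev-alice", "git:push"),
    ("dev-alice", "ci:trigger"),
    ("svc-wallet", "kms:Sign"),
])

# A developer account suddenly touching an admin IAM API is the weak signal.
alerts = flag_novel_actions(baseline, [
    ("dev-alice", "git:push"),            # within baseline: quiet
    ("dev-alice", "iam:PutRolePolicy"),   # never seen before: flag
])
print(alerts)  # [('dev-alice', 'iam:PutRolePolicy')]
```

The point isn’t the data structure; it’s that “novel action for this identity” is a cheap, explainable signal that rule sets tuned to global thresholds never produce.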
A snippet-worthy truth: Most breaches don’t start with malware. They start with “weird” identity behavior that nobody connected in time.
Practical implementation checklist (not theory)
If you’re deploying AI in cybersecurity for cloud and identity, focus on:
- Identity graphing: users, service accounts, roles, tokens, OAuth apps
- Sequence detection: not just single alerts, but chains of events
- Entity risk scoring: per identity and per workload (not a single global score)
- High-signal telemetry: IAM changes, secrets access, signing requests, CI/CD actions
If you can’t explain why an alert fired, you won’t operationalize it. Prioritize models and systems that produce investigable output: “this identity deviated from baseline and touched the signing workflow within 12 minutes.”
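Sequence detection is worth a sketch, because it’s the item teams most often skip. Assuming a normalized, time-sorted event stream (the event names and 30-minute window below are illustrative assumptions, not a real schema), a minimal in-order matcher for the OAuth app → token grant → policy change → secret export chain looks like this:

```python
from datetime import datetime, timedelta

# Hypothetical suspicious chain from the checklist above; names are illustrative.
CHAIN = ["oauth_app_created", "token_granted", "policy_changed", "secret_exported"]

def chain_completed(events, chain=CHAIN, window=timedelta(minutes=30)):
    """True if one identity completes the chain, in order, within the window.
    `events` is a list of (timestamp, identity, event_type), sorted by time."""
    progress = {}  # identity -> (next_index_in_chain, time_of_first_step)
    for ts, ident, etype in events:
        idx, start = progress.get(ident, (0, ts))
        if ts - start > window:
            idx, start = 0, ts  # window expired; restart matching
        if etype == chain[idx]:
            idx += 1
            if idx == 1:
                start = ts  # chain begins now
            if idx == len(chain):
                return True
        progress[ident] = (idx, start)
    return False

t0 = datetime(2025, 1, 1, 2, 0)
events = [
    (t0,                              "dev-alice", "oauth_app_created"),
    (t0 + timedelta(minutes=3),       "dev-alice", "token_granted"),
    (t0 + timedelta(minutes=7),       "dev-alice", "policy_changed"),
    (t0 + timedelta(minutes=12),      "dev-alice", "secret_exported"),
]
print(chain_completed(events))  # True
```

Each event in that chain is individually unremarkable; only the ordered combination within a short window is worth waking someone up for, which is exactly the investigable output described above.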
AI for transaction monitoring that understands intent
For organizations with custody or transaction flows, you need AI-powered fraud detection that treats blockchain transactions like payment risk—because that’s what it is.
What to detect (beyond static thresholds):
- New destination addresses with no prior relationship to the organization
- Withdrawal patterns that match laundering behavior (splitting, chaining, time compression)
- Policy drift: sudden changes in approval behavior or withdrawal limits
- Cross-system mismatch: user support ticket + account changes + large withdrawal
This is where AI is strongest: correlating signals that live in different teams’ tools (support, IAM, wallet service, SIEM) and spotting when the combination is abnormal.
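A toy version of that cross-system correlation is simple weighted scoring: each signal alone is routine noise, but the combination crosses a hold threshold. The signal names, weights, and threshold below are illustrative assumptions; a production system would learn weights from labeled history rather than hard-code them.

```python
# Illustrative weights, not a trained model.
SIGNAL_WEIGHTS = {
    "new_destination_address": 0.35,   # no prior relationship to the org
    "recent_limit_change": 0.30,       # policy drift upstream of the withdrawal
    "support_ticket_open": 0.15,       # e.g. account-recovery ticket in flight
    "withdrawal_above_p95": 0.20,      # large relative to the account's history
}

def risk_score(signals):
    """Sum the weights of active signals; cap at 1.0."""
    return min(1.0, sum(SIGNAL_WEIGHTS[s] for s in signals if s in SIGNAL_WEIGHTS))

def decide(signals, hold_threshold=0.6):
    """Return (action, score): hold for review when combined risk is high."""
    score = risk_score(signals)
    return ("hold_for_review" if score >= hold_threshold else "allow", score)

# One signal is noise; the combination is what's abnormal.
print(decide(["new_destination_address"]))               # allow
print(decide(["new_destination_address",
              "recent_limit_change",
              "support_ticket_open"]))                   # hold_for_review
```

The hard engineering problem is not the scoring function; it’s getting support, IAM, wallet-service, and SIEM events into one stream with shared identity keys so the combination is even visible.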
“But we already have limits and allowlists”
Limits help, but attackers adapt. They’ll:
- execute multiple smaller withdrawals within policy
- compromise the allowlisting process itself
- reroute to addresses that appear legitimate (e.g., via prior low-value activity)
AI doesn’t magically solve laundering. It gives you a fighting chance to interdict earlier, when the attacker is preparing the path.
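The first adaptation (many smaller withdrawals, each within policy) is also the easiest to illustrate: count sub-limit withdrawals per account per time window instead of checking each transaction in isolation. The limit, window, and count threshold below are illustrative defaults, not recommendations.

```python
from datetime import datetime, timedelta

def detect_structuring(withdrawals, per_tx_limit=10_000,
                       window=timedelta(hours=1), max_count=3):
    """Flag accounts whose sub-limit withdrawals exceed max_count per window.
    `withdrawals` is a list of (timestamp, account, amount), sorted by time."""
    recent = {}   # account -> timestamps of recent sub-limit withdrawals
    flagged = set()
    for ts, account, amount in withdrawals:
        if amount >= per_tx_limit:
            continue  # caught by the ordinary per-transaction rule instead
        times = [t for t in recent.get(account, []) if ts - t <= window]
        times.append(ts)
        recent[account] = times
        if len(times) > max_count:
            flagged.add(account)
    return flagged

t0 = datetime(2025, 1, 1, 2, 0)
# Five $9,000 withdrawals in 50 minutes: every one passes a $10,000 limit check.
ws = [(t0 + timedelta(minutes=10 * i), "acct-1", 9_000) for i in range(5)]
print(detect_structuring(ws))  # {'acct-1'}
```

This is still a static rule, of course; the AI version baselines each account’s normal cadence so the window and threshold adapt per entity instead of being global constants attackers can learn.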
Automating security operations without automating mistakes
Security teams lose crypto incidents in the handoff moments: alert triage, escalation, and waiting for approvals. AI helps most when it reduces time-to-action, not when it sprays more alerts.
Strong AI-driven security operations (AI SecOps) usually includes:
- Alert clustering: collapse 200 noisy events into 3 incidents
- Auto-enrichment: attach identity history, device posture, recent IAM changes, and transaction context
- Guided response: recommended containment steps based on playbooks
- Safe automation: pre-approved actions for high-confidence cases (session revoke, key rotation triggers, temporary withdrawal hold)
If your incident response depends on a human noticing a single alert at 2:00 a.m., you don’t have a plan—you have hope.
Guardrails that keep automation from backfiring
A good stance here: automate containment, not irreversible actions.
Examples of sane guardrails:
- Put time-bound holds on high-risk withdrawals instead of blocking permanently
- Require two-person approval for changes to signing policies, but let AI enforce the workflow
- Auto-isolate endpoints that access signing infrastructure when device posture changes
- Rotate secrets and revoke tokens automatically, but log everything for later review
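The first guardrail, a time-bound hold, is worth showing in miniature because the key property is structural: the hold expires on its own, so automation can never convert a false positive into a permanent block. The class, IDs, and 30-minute default below are illustrative.

```python
import time

class WithdrawalHolds:
    """Time-bound holds: containment that expires, never an irreversible block."""

    def __init__(self):
        self._holds = {}  # withdrawal_id -> expiry (epoch seconds)

    def place_hold(self, withdrawal_id, duration_s=1800, now=None):
        """Hold a withdrawal for review; default 30 minutes."""
        now = time.time() if now is None else now
        self._holds[withdrawal_id] = now + duration_s

    def is_held(self, withdrawal_id, now=None):
        """A hold is active only until its expiry; then processing resumes."""
        now = time.time() if now is None else now
        expiry = self._holds.get(withdrawal_id)
        return expiry is not None and now < expiry

    def release(self, withdrawal_id):
        """Human review can release early; audit logging would live elsewhere."""
        self._holds.pop(withdrawal_id, None)

holds = WithdrawalHolds()
holds.place_hold("wd-42", duration_s=1800, now=0)
print(holds.is_held("wd-42", now=100))   # True: inside the 30-minute window
print(holds.is_held("wd-42", now=2000))  # False: hold expired on its own
```

The `now` parameter is there deliberately: guardrail logic like this should be testable without waiting on wall-clock time.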
A defensive blueprint for 2026 budgeting (what I’d fund first)
If the 2025 DPRK-linked theft numbers are your wake-up call, here’s a practical priority order that maps to real attack paths.
1) Lock down signing and key workflows
- Separate hot, warm, and cold paths with clear policies
- Require independent approvals for policy changes
- Monitor every signing request as a high-value event
2) Treat identity as your primary perimeter
- Centralize identity logs (cloud, SSO, endpoints, CI/CD)
- Enforce phishing-resistant MFA for privileged users
- Continuously score identity risk with behavioral analytics
3) Add AI-based detection where humans are slow
- Sequence-based detection for IAM + secrets + CI/CD
- Behavioral anomaly detection for privileged identities
- Transaction monitoring that correlates off-chain events
4) Build “stop the bleed” response paths
- Pre-authorized withdrawal holds for high-risk signals
- Fast token revocation and session kill capabilities
- Key rotation playbooks tested quarterly
5) Test against the real playbooks
Run purple-team exercises that simulate:
- recruiter-style malware on developer endpoints
- CI/CD tampering
- signing policy manipulation
- laundering-style withdrawal patterns
If your simulation stops at “we got a shell,” you’re not testing what matters.
People also ask: quick answers for leadership teams
Is AI in cybersecurity mandatory for crypto companies?
If you custody assets or control signing workflows, yes. Manual monitoring can’t keep up with state-sponsored teams operating across identities, cloud, endpoints, and transactions in parallel.
What’s the quickest AI win against state-sponsored threats?
Identity and access anomaly detection tied to privileged workflows (secrets access, CI/CD, signing). That’s where early weak signals show up.
Will AI reduce losses, or just create more alerts?
AI only reduces losses when paired with automation guardrails and response playbooks. If it isn’t connected to containment actions, it becomes another dashboard.
What to do next (before the next $2B headline)
The 2025 numbers—$2.02B tied to DPRK-linked actors and $3.4B stolen overall—should change how leaders think about cyber risk. This isn’t random cybercrime. It’s a sustained, state-backed business model.
If you’re responsible for security, risk, or infrastructure, the most pragmatic move is to adopt AI-driven cybersecurity where it matters most: identity behavior, privileged workflow monitoring, and transaction anomaly detection, backed by response automation that can act in minutes.
If your organization had to pause high-risk withdrawals for 30 minutes tonight based on identity anomalies, could you do it without chaos—and would you trust the trigger? That question is a decent test of whether you’re ready for 2026.