FinCEN tracked $4.5B in ransomware payments since 2013. Here’s what the spike means—and how AI security analytics can stop attacks earlier.

Ransomware Payments Hit $4.5B—AI Can Cut Them Down
$4.5 billion. That’s how much the US Treasury’s Financial Crimes Enforcement Network (FinCEN) has tracked in reported ransomware payments since 2013—and more than $2.1B of that landed in just three years (2022–2024). If you’re responsible for security, risk, or IT operations, that number isn’t just a headline. It’s a loud signal that the extortion economy is still very healthy.
Here’s what bothers me most: a large chunk of ransomware “success” isn’t about brilliant malware. It’s about operational speed and predictability—attackers repeat the same steps across thousands of targets, counting on defenders to be slower than them.
This post (part of our AI in Cybersecurity series) takes the Treasury’s ransomware payment data as the hook and gets practical: why the payments spiked, what that says about attacker operations, and how AI-driven security analytics can reduce ransomware risk by detecting the earliest signals—before the ransom note.
What the Treasury data actually tells us (and what it doesn’t)
FinCEN’s report is a financial lens on ransomware, not a full census of every incident. The dataset comes from Bank Secrecy Act (BSA) reporting, which means it primarily reflects what covered institutions observed and reported.
The numbers worth repeating
The report includes:
- 7,395 BSA reports tied to 4,194 ransomware incidents from 2022–2024
- More than $2.1B in reported ransomware payments during that window
- For 2013–2021, 3,075 reports totaling about $2.4B
- A combined $4.5B in ransomware payments tracked across 2013–2024
The standout year was 2023, with $1.1B in payments and a 77% increase over 2022.
The “reported payments” bias is real
This data is valuable because it’s grounded in financial reporting, but it’s still partial:
- Many incidents never get reported through these channels.
- Payment flows can be obscured through intermediaries.
- Some victims don’t pay (or pay in ways that are harder to trace).
So treat $4.5B as a measured floor, not a ceiling.
A useful way to read the report: it’s less about counting every ransomware incident and more about tracking how the business of ransomware behaves when money moves.
Why ransomware spiked: the attacker business model got efficient
The payments surge isn’t mysterious. Ransomware crews improved their playbook and scaled it.
RaaS turned ransomware into a production line
Ransomware-as-a-Service (RaaS) lowered the barrier to entry. Affiliates don’t need to build malware, run infrastructure, or negotiate like pros. They rent tooling, share profits, and follow a proven script.
That creates a pipeline that looks a lot like a sales funnel:
- Initial access (phishing, credential stuffing, stolen tokens, VPN/device exploits)
- Privilege escalation and discovery
- Data theft first (double extortion pressure)
- Rapid encryption and ransom negotiation
The report also aligns with the reality many incident responders have seen: attackers increasingly prioritize volume and speed over long, complex intrusions.
Targets broadened beyond the “big game” era
One of the most important shifts: ransomware operators have become comfortable monetizing mid-market and smaller organizations. Lower ransom demands can still produce great returns when you run enough attacks.
This matches the observed trend toward smaller ransom demands and shorter dwell times. When an attacker spends less time inside a network, they tend to cause less tailored damage, which often translates into a lower demand but more frequent attempts.
Crypto choices reveal what attackers value
The report notes most payments were in Bitcoin, with Monero a distant second (dozens of payments and a much smaller total). That matters because Bitcoin’s traceability is improving, and enforcement pressure is growing—yet Bitcoin still dominates because it remains broadly usable, liquid, and easy for victims to acquire quickly.
The hidden cost isn’t just the ransom—it’s the operational hangover
Ransom payment totals are the attention-grabber, but they’re not the full bill.
A ransomware incident usually stacks costs across:
- Downtime and lost revenue (days or weeks, not hours)
- IR and legal spend (outside counsel, forensics, negotiation support)
- Regulatory exposure (breach notification, sector rules)
- Insurance friction (coverage disputes, renewal spikes)
- Brand damage (customer churn and partner distrust)
- Security debt (rushed rebuilds and postponed hardening)
During December planning cycles, I see a pattern: teams budget for tools, but under-budget for the messy middle—identity cleanup, log standardization, asset inventory, and incident rehearsals. Ransomware feasts on that gap.
Where AI actually helps against ransomware (and where it doesn’t)
AI won’t fix poor fundamentals. If you don’t patch, don’t segment, and don’t have recoverable backups, you’re still exposed.
But AI is genuinely useful in ransomware defense because ransomware is signal-rich. Attackers leave patterns across identity, endpoints, network, and cloud—often hours before encryption.
1) AI-driven anomaly detection: catch the “setup” phase
Ransomware isn’t one event; it’s a chain of events. The goal is to interrupt the chain early.
AI can flag behaviors like:
- A service account suddenly authenticating from a new geography or host
- “Impossible travel” or unusual authentication velocity
- Unusual privilege changes (new admin roles, mass group membership edits)
- Atypical remote management tooling usage (new RMM agents, scripted PowerShell remoting)
- New scheduled tasks or persistence mechanisms at odd hours
The practical win: AI can correlate weak signals that humans miss when alerts are noisy.
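To make that concrete, here’s a minimal sketch of the idea in Python: build a per-account baseline of where a service account normally authenticates from, then flag a sign-in that breaks it. The event fields, account names, and alert rule are illustrative assumptions, not any product’s logic; in practice the events would stream from your IdP or SIEM.

```python
from collections import defaultdict

# Hypothetical sign-in events; a real pipeline would read these from your IdP or SIEM.
events = [
    {"account": "svc-backup", "country": "US", "host": "bak-01"},
    {"account": "svc-backup", "country": "US", "host": "bak-01"},
    {"account": "svc-backup", "country": "RO", "host": "wks-17"},  # new geography and new host
]

# Baseline: countries and hosts each account has been seen authenticating from.
countries_seen = defaultdict(set)
hosts_seen = defaultdict(set)

for e in events:
    acct = e["account"]
    has_history = bool(countries_seen[acct])
    new_country = e["country"] not in countries_seen[acct]
    new_host = e["host"] not in hosts_seen[acct]

    # Only alert once the account has an established baseline to compare against.
    if has_history and new_country and new_host:
        print(f"ALERT: {acct} signed in from new country {e['country']} on new host {e['host']}")

    countries_seen[acct].add(e["country"])
    hosts_seen[acct].add(e["host"])
```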
2) AI in the SOC: reduce time-to-triage and time-to-containment
Most companies get this wrong: they buy detection tools, then drown in alerts.
AI helps when it’s applied to work reduction:
- Alert clustering (group 50 related alerts into 1 incident)
- Auto-enrichment (asset criticality, user risk, known bad IOCs, recent changes)
- Suggested response steps (disable account, isolate endpoint, block hash/domain)
This is where security automation matters. If attackers are optimizing for speed, defenders need containment actions that don’t require a 2-hour meeting.
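As a rough illustration of alert clustering, the sketch below groups alerts that share a user into a single incident. The alert fields and the "cluster on shared user" rule are simplifying assumptions; real correlation engines weigh many shared entities (hosts, hashes, sessions) and time windows.

```python
from collections import defaultdict

# Hypothetical alerts from different tools; real ones would come from your SIEM/EDR.
alerts = [
    {"id": 1, "user": "jsmith", "host": "wks-17",   "signal": "risky sign-in"},
    {"id": 2, "user": "jsmith", "host": "wks-17",   "signal": "new RMM agent installed"},
    {"id": 3, "user": "jsmith", "host": "srv-fs01", "signal": "mass SMB file reads"},
    {"id": 4, "user": "mlee",   "host": "wks-42",   "signal": "failed MFA"},
]

# Cluster alerts that share a user: a crude stand-in for entity-based correlation.
incidents = defaultdict(list)
for alert in alerts:
    incidents[alert["user"]].append(alert)

for user, related in incidents.items():
    if len(related) > 1:
        hosts = sorted({a["host"] for a in related})
        print(f"Incident for {user}: {len(related)} alerts across {hosts}")
        for a in related:
            print(f"  - alert {a['id']}: {a['signal']}")
```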
3) AI for phishing resistance and user-risk scoring
Ransomware still commonly starts with identity compromise.
AI can contribute by:
- Detecting brand impersonation and lookalike domains earlier
- Identifying unusual mailbox rules and suspicious OAuth consent grants
- Scoring user risk based on behavior (logins, token use, device posture)
Pair this with phishing-resistant MFA and conditional access policies and you remove a lot of easy wins from attackers.
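Here’s a deliberately simple sketch of user-risk scoring: a handful of weighted signals summed into a score that drives an action. The signal names, weights, and the 50-point threshold are assumptions for illustration; production systems typically learn these from behavior rather than hard-coding them.

```python
# Illustrative risk signals and weights; the field names here are assumptions.
RISK_WEIGHTS = {
    "new_oauth_consent": 40,   # user granted a new third-party app access to mail or files
    "new_mailbox_rule": 25,    # e.g. auto-forward or delete rules appearing suddenly
    "unmanaged_device": 20,    # sign-in from a device without your management agent
    "impossible_travel": 50,   # sign-ins from distant locations too close together
}

def user_risk(signals):
    """Sum weights for the signals observed for a user, capped at 100."""
    return min(100, sum(RISK_WEIGHTS.get(s, 0) for s in signals))

observed = {"alice": ["new_oauth_consent", "unmanaged_device"], "bob": ["impossible_travel"]}
for user, signals in observed.items():
    score = user_risk(signals)
    action = "step-up MFA and review" if score >= 50 else "monitor"
    print(f"{user}: risk={score} -> {action}")
```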
4) AI-assisted threat hunting that focuses on ransomware TTPs
Threat hunting fails when it’s generic. Make it ransomware-specific.
A strong AI-supported hunting program routinely looks for:
- Lateral movement spikes (SMB, RDP, WinRM) between unusual endpoints
- Credential dumping indicators and LSASS access patterns
- Rapid discovery commands (net, nltest, whoami, dsquery) across many hosts
- Sudden backup/restore service disruptions
- Use of commodity tools tied to initial access brokers
AI can help you generate hypotheses and summarize timelines, but humans still need to decide what’s malicious versus weird-but-legit.
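One way to operationalize that first hunt is sketched below: count distinct discovery commands per host from endpoint process telemetry and surface the outliers. The event shape, the command list, and the threshold of three are assumptions chosen to keep the example short.

```python
from collections import defaultdict

# Commands commonly used for rapid discovery; this set is illustrative, not exhaustive.
DISCOVERY = {"net", "nltest", "whoami", "dsquery"}

# Hypothetical endpoint process events (host, process name); real data would come from EDR.
process_events = [
    ("wks-17", "whoami"), ("wks-17", "nltest"), ("wks-17", "net"), ("wks-17", "dsquery"),
    ("srv-fs01", "net"),
    ("wks-42", "chrome"),
]

per_host = defaultdict(set)
for host, proc in process_events:
    if proc in DISCOVERY:
        per_host[host].add(proc)

# Flag hosts running several distinct discovery commands; the threshold is arbitrary.
for host, cmds in per_host.items():
    if len(cmds) >= 3:
        print(f"HUNT LEAD: {host} ran {sorted(cmds)} -- investigate who and why")
```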
A practical “AI + fundamentals” ransomware defense blueprint
If you want fewer ransomware payments in 2026, the plan can’t be “buy a tool and hope.” It has to be an operating model.
Step 1: Start with your most ransomed surfaces
The Treasury-linked analysis points to heavily targeted sectors like financial services, manufacturing, and healthcare. Regardless of industry, most ransomware entry points cluster around:
- External perimeter devices (VPNs, edge appliances, remote access)
- Identity providers (SSO, OAuth apps, stale tokens)
- Endpoint management tools (RMM, scripts, software distribution)
Inventory and monitor those first.
Step 2: Define “ransomware-ready” telemetry (minimum viable logs)
AI is only as good as what it can see. A ransomware-ready baseline includes:
- Identity logs (IdP sign-ins, MFA events, conditional access decisions)
- Endpoint process execution and command-line telemetry
- DNS and proxy logs (or equivalent network egress visibility)
- Privilege and group change auditing
- Backup platform logs (deletions, job failures, repository access)
If your SIEM only gets firewall logs and a few Windows events, your AI layer will hallucinate confidence while missing the attack.
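A quick way to keep yourself honest is a coverage check like the sketch below, which compares the baseline above against what your SIEM actually ingests. The source names are placeholders; map them to whatever your own pipeline calls these feeds.

```python
# The ransomware-ready baseline from the list above, expressed as required log sources.
REQUIRED_SOURCES = {
    "idp_signin", "mfa_events", "endpoint_process", "dns", "proxy",
    "privilege_changes", "backup_platform",
}

# Hypothetical inventory of what the SIEM currently ingests.
ingested = {"idp_signin", "endpoint_process", "dns"}

missing = sorted(REQUIRED_SOURCES - ingested)
coverage = 100 * len(REQUIRED_SOURCES & ingested) // len(REQUIRED_SOURCES)
print(f"Telemetry coverage: {coverage}%")
for source in missing:
    print(f"  missing: {source}")
```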
Step 3: Automate containment for the top 5 early indicators
Pick five high-confidence triggers and pre-authorize the response.
Examples many orgs can safely automate:
- Disable a user after confirmed impossible travel + risky sign-in
- Isolate an endpoint when ransomware-like encryption behavior is detected
- Revoke tokens after suspicious OAuth consent or impossible access patterns
- Block outbound to newly observed suspicious domains used in staging
- Quarantine a host after credential dumping indicators
This is how AI turns into reduced payments: faster containment means fewer completed extortion events.
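Conceptually, a pre-authorized playbook is just a mapping from high-confidence trigger to containment action, as in the sketch below. The trigger names mirror the list above, and the action functions are stubs standing in for your EDR, IdP, and firewall APIs.

```python
# Pre-authorized containment playbook: trigger -> action. The action functions are stubs
# standing in for real EDR/IdP/firewall API calls.
def disable_user(entity):      print(f"[action] disabling account {entity}")
def isolate_endpoint(entity):  print(f"[action] isolating endpoint {entity}")
def revoke_tokens(entity):     print(f"[action] revoking tokens for {entity}")
def block_domain(entity):      print(f"[action] blocking outbound to {entity}")
def quarantine_host(entity):   print(f"[action] quarantining host {entity}")

PLAYBOOK = {
    "impossible_travel_risky_signin": disable_user,
    "encryption_like_behavior": isolate_endpoint,
    "suspicious_oauth_consent": revoke_tokens,
    "new_staging_domain": block_domain,
    "credential_dumping": quarantine_host,
}

def contain(trigger, entity):
    """Execute the pre-authorized action for a high-confidence trigger, if one exists."""
    action = PLAYBOOK.get(trigger)
    if action:
        action(entity)
    else:
        print(f"[triage] no pre-authorized action for {trigger}; route to an analyst")

contain("encryption_like_behavior", "wks-17")
contain("suspicious_oauth_consent", "jsmith")
```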
Step 4: Make backups an attacker problem, not your problem
Backups are only “good” if they’re:
- Offline/immutable (attackers can’t delete them)
- Test-restored regularly
- Segmented from primary identity systems where possible
AI can help detect backup tampering (unusual deletion attempts, credential changes, access anomalies), but your architecture has to make tampering hard.
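As a small example of what “detect backup tampering” can mean in practice, the sketch below counts sensitive operations (deletions, retention changes, credential changes) per actor in backup audit logs and flags spikes. The event format and the threshold are assumptions; wire this to your backup platform’s actual audit trail.

```python
from collections import Counter

# Hypothetical backup platform audit events; real ones come from your backup vendor's logs.
backup_events = [
    {"actor": "backup-svc", "action": "job_success"},
    {"actor": "jsmith",     "action": "repository_delete"},
    {"actor": "jsmith",     "action": "repository_delete"},
    {"actor": "jsmith",     "action": "retention_policy_change"},
]

# Actions that should be rare; a spike from one actor in a short window is worth an alert.
SENSITIVE = {"repository_delete", "retention_policy_change", "credential_change"}

sensitive_by_actor = Counter(e["actor"] for e in backup_events if e["action"] in SENSITIVE)
for actor, count in sensitive_by_actor.items():
    if count >= 2:  # threshold is arbitrary; tune to your environment
        print(f"ALERT: {actor} performed {count} sensitive backup operations -- verify intent")
```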
Step 5: Rehearse the decision you don’t want to make
If a ransomware event happens, the decision to pay is often made under exhaustion and uncertainty.
Run a tabletop that answers, in plain language:
- Who can authorize a shutdown?
- Who talks to insurers and counsel?
- What’s the threshold for public notification?
- What data is most sensitive, and where is it stored?
- What does “restore success” look like in the first 24 hours?
AI can accelerate investigation, but it can’t replace governance.
What to do next if you want fewer ransomware payments
FinCEN’s $4.5B figure is a scoreboard for the last decade of ransomware. The uncomfortable truth is that attackers don’t need to win often—they just need a steady stream of organizations that can’t detect early, can’t contain fast, and can’t restore cleanly.
The most effective AI in cybersecurity programs I’ve seen don’t treat AI like a magic layer. They use it to compress time: time to detect, time to understand, time to act. That’s the difference between a contained intrusion and a seven-figure ransom conversation.
If you’re planning for 2026, ask yourself one question that cuts through the noise: If an attacker got valid credentials tonight, would AI-driven detection and automation stop the path to encryption before morning?