Ransomware payments jumped 77% in 2023, then fell in 2024. Here’s how AI fraud detection helps banks and fintechs stop extortion payments.

Ransomware Payments Fell in 2024—AI Explains Why
A 77% jump is a siren, not a statistic.
FinCEN reported that ransomware-related payments surged 77% in 2023, then declined in 2024. That headline trend alone is enough to draw a straight line to what many Australian banks and fintechs are seeing on the ground: ransomware is no longer “just” an IT incident. It’s a financial crime workflow, complete with payment rails, laundering steps, and operational signals that can be detected.
Here’s the stance I’ll take: the 2024 drop doesn’t mean attackers got bored. It means parts of the ecosystem got better at making ransomware harder to monetise: better reporting, faster interdiction, tighter exchange controls, stronger bank monitoring, and, yes, more mature AI fraud detection and AI in financial crime compliance.
This post sits in our AI in Finance and FinTech series, and it’s focused on one question that matters to risk, compliance, and product leaders: How do you use AI to spot, stop, and shrink ransomware payments before money leaves the building?
What the 2023 surge tells you about the payment ecosystem
Answer first: The 77% surge signals that ransomware crews optimised for cash-out—meaning payments became more frequent, larger, and operationally “repeatable” across victims and intermediaries.
Ransomware groups don’t win by encrypting files. They win by getting paid. That means the real battleground is the payment path:
- Funding source: corporate bank accounts, treasury operations, insurance reimbursements, emergency liquidity
- Conversion: fiat-to-crypto through exchanges, OTC brokers, or nested services
- Movement: hops across wallets, mixers, peel chains, cross-chain bridges
- Cash-out: exchanges, mules, prepaid instruments, or offshore conversion
The 2023 spike suggests adversaries improved one (or several) of these steps. For financial institutions, the practical implication is blunt: ransomware is measurable in transaction behaviour, not just threat intel.
Why finance teams should care (even if “IT owns security”)
Answer first: Ransomware creates the same downstream risk stack as other financial crimes—sanctions exposure, AML breaches, fraud losses, and reputational damage.
When a victim pays, it can trigger:
- Sanctions and counterparty risk (who ultimately receives the funds)
- AML/CTF obligations for monitoring, reporting, and escalation
- Authorised push payment (APP) fraud patterns: transfers that look legitimate because the victim initiates them
- Third-party risk via managed service providers and payment vendors
I’ve found that many organisations still treat ransomware response as a one-off emergency. Attackers treat it as a repeatable business process. Your controls should, too.
Why payments dropped in 2024 (and why that’s not a victory lap)
Answer first: The 2024 decline likely reflects improved disruption across payment rails and crypto off-ramps—plus better detection and reporting—rather than a permanent reduction in ransomware attempts.
A fall in payments can come from multiple forces working together:
- Victims paying less often (better backups, incident response playbooks, refusal policies, law enforcement guidance)
- Victims paying smaller amounts (negotiation maturity, more pressure on attackers, fewer “mega ransoms”)
- Payments getting interrupted (account holds, exchange interdiction, blocked beneficiaries)
- Attackers shifting monetisation (data theft-only extortion, insider-enabled fraud, or monetising access)
For Australian fintechs and banks, the key lesson is this: payment declines can hide attack volume. If criminals can’t cash out easily, they adapt. That often means faster fraud, more social engineering, and more complicated laundering, which can increase the operational burden on monitoring teams.
A seasonal angle: why December matters
Answer first: Year-end periods increase ransomware and fraud pressure because approvals slow down and business urgency rises.
It’s late December 2025. Holiday staffing gaps, end-of-year reporting deadlines, and procurement freezes create the perfect environment for rushed decisions—exactly what ransomware crews exploit. If your payment controls rely on “someone experienced will notice,” December is when that assumption breaks.
How AI helps detect ransomware payments before they leave the bank
Answer first: AI identifies ransomware payments by scoring behavioural anomalies—not by waiting for a known bad wallet address.
Traditional controls often depend on static signals:
- known indicators (wallet blacklists)
- simple thresholds (large transfers)
- rules (“new beneficiary + urgent transfer”)
Those still matter, but ransomware payments routinely evade them. What works better is AI for fraud detection and AML that learns normal behaviour and flags deviations.
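To make the contrast concrete, here is a minimal sketch, assuming you keep a rolling history of each customer’s transfer amounts; the threshold and history values are purely illustrative:

```python
from statistics import mean, stdev

def static_rule(amount: float, threshold: float = 50_000) -> bool:
    """Static control: flag any transfer above a fixed limit."""
    return amount > threshold

def behavioural_score(amount: float, history: list[float]) -> float:
    """Behavioural control: score deviation from this customer's own
    baseline. Returns a z-score; higher means more abnormal for them."""
    if len(history) < 2:
        return float("inf")  # no baseline yet: treat as maximally unusual
    mu, sigma = mean(history), stdev(history)
    return (amount - mu) / sigma if sigma > 0 else float("inf")

# A $30k transfer clears the static rule but is wildly abnormal for a
# customer who normally moves $1-2k at a time.
history = [1200.0, 1800.0, 950.0, 1500.0, 2100.0]
print(static_rule(30_000.0))                           # False: under the fixed limit
print(round(behavioural_score(30_000.0, history), 1))  # huge z-score: flagged
```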
The patterns AI is good at spotting
Answer first: The strongest signals combine payment behaviour, customer context, and conversion steps into a single risk story.
Common ransomware payment indicators in banking and fintech environments include:
- Sudden crypto exposure: a business with no history of digital asset activity initiating crypto purchases or transfers
- Unusual urgency: multiple approval attempts, out-of-hours activity, “must be today” narratives reflected in interaction logs
- Beneficiary anomalies: first-time payees, recently created payees, or payees with thin history
- Payment fragmentation: splitting a large amount into multiple smaller transfers to pass limits
- Round-trip behaviour: funds moved into exchange accounts then quickly out to external wallets
- Operational breadcrumbs: changes to device, IP, geolocation, or atypical session behaviour during payment setup
The AI advantage is correlation—joining signals that sit in different systems: core banking, digital channels, merchant systems, and case management.
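A toy sketch of that correlation step, assuming the signals listed above have already been pulled from their source systems; the field names and weights are illustrative assumptions, not a production scoring model:

```python
from dataclasses import dataclass, field

@dataclass
class PaymentContext:
    first_crypto_exposure: bool  # core banking: no prior digital-asset history
    new_payee_age_days: int      # payments platform: payee record age
    out_of_hours: bool           # digital channel: session metadata
    new_device: bool             # device/IP change during payment setup
    split_count: int             # related transfers kept under limits
    reasons: list = field(default_factory=list)

def correlate(ctx: PaymentContext) -> tuple[float, list]:
    """Join weak signals from separate systems into one risk story."""
    score = 0.0
    if ctx.first_crypto_exposure:
        score += 0.35; ctx.reasons.append("first-ever crypto exposure")
    if ctx.new_payee_age_days < 7:
        score += 0.25; ctx.reasons.append("payee created in the last 7 days")
    if ctx.out_of_hours:
        score += 0.15; ctx.reasons.append("out-of-hours initiation")
    if ctx.new_device:
        score += 0.15; ctx.reasons.append("new device/IP at payment setup")
    if ctx.split_count > 1:
        score += 0.10; ctx.reasons.append("payment fragmentation")
    return min(score, 1.0), ctx.reasons

# Each signal alone is weak; stacked together, they tell a ransomware story.
print(correlate(PaymentContext(True, 2, True, True, 3)))
```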
Supervised vs. unsupervised models (what actually works in practice)
Answer first: Use supervised models where you have labels (confirmed cases) and unsupervised anomaly detection where you don’t—then combine them with human review.
- Supervised ML (classification) works when you have historical confirmed ransomware or extortion payment cases, including near-misses.
- Unsupervised models (anomaly detection, clustering) work when cases are rare and attacker tactics evolve.
- Graph analytics helps when you need to connect entities: accounts, devices, counterparties, wallets, and shared infrastructure.
A practical architecture many teams land on:
- Real-time anomaly score at payment initiation
- Entity graph risk updated continuously
- Rules as guardrails (hard blocks for obvious policy breaches)
- Case workflow with explainable signals for investigators
If the investigator can’t explain why a payment is risky in 30 seconds, the model will get ignored—or worse, turned off.
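Here’s a minimal sketch of that hybrid pattern, using scikit-learn’s IsolationForest for the anomaly score; the features, thresholds, and reason codes are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Historical payments for one segment: [amount, hour_of_day, payee_age_days]
baseline = rng.normal([2_000, 11, 400], [800, 3, 150], size=(5_000, 3))

model = IsolationForest(contamination=0.01, random_state=42).fit(baseline)

def score_payment(features, payee_on_blocklist=False):
    # Rules stay as guardrails: obvious policy breaches are hard blocks.
    if payee_on_blocklist:
        return "BLOCK", ["payee on blocklist"]
    # Unsupervised anomaly score: higher means more abnormal.
    anomaly = -model.decision_function([features])[0]
    _amount, hour, payee_age = features
    reasons = []
    if anomaly > 0.05:
        reasons.append(f"anomalous vs segment baseline (score {anomaly:.2f})")
    if payee_age < 7:
        reasons.append("payee created <7 days ago")
    if hour < 6:
        reasons.append("out-of-hours initiation")
    # Explainable output: the action ships with investigator-readable reasons.
    return ("HOLD_FOR_REVIEW" if reasons else "ALLOW"), reasons

# Rushed 2am transfer to a one-day-old payee:
print(score_payment([45_000, 2, 1]))
```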
Snippet you can use internally: “Ransomware payments are detectable because they create a rushed, abnormal funding-and-conversion pattern—AI catches the pattern, not just the address.”
Controls that shrink ransomware payments (without freezing legitimate customers)
Answer first: The best controls add friction only where risk is high—AI tells you where to place that friction.
Stopping ransomware payments is a balancing act. Too much friction and you lose customers. Too little and you become the path of least resistance.
High-impact controls for banks and fintechs
Answer first: Combine AI risk scoring with targeted payment friction, crypto rail visibility, and rapid escalation paths.
Consider implementing:
- Risk-based holds on high-scoring payments (minutes matter—design for fast review)
- Step-up verification (out-of-band confirmation, manager approvals, call-backs to known contacts)
- Payee controls (cooling-off periods for new payees in business banking)
- Crypto exchange policy (customer risk tiers, limits, enhanced due diligence for business accounts)
- Incident-response handshake (a direct channel between fraud/AML ops and the customer’s security contact)
The “handshake” is underrated. When a customer is mid-incident, they often can’t articulate what’s happening. A well-designed script plus a specialist team can prevent an irreversible transfer.
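A sketch of how a single risk score might map onto those controls, assuming a 0-1 score from upstream models; the tier boundaries and control names are illustrative, not policy:

```python
def apply_controls(risk: float, payee_age_days: int) -> list[str]:
    """Risk-based friction: most payments sail through untouched."""
    controls = []
    if payee_age_days < 2:
        controls.append("cooling_off_24h")        # new-payee cooling-off
    if risk >= 0.8:
        controls.append("hold_for_rapid_review")  # minutes matter: fast lane
    elif risk >= 0.5:
        controls.append("step_up_out_of_band")    # call-back to a known contact
    elif risk >= 0.3:
        controls.append("inline_warning")         # friction without delay
    return controls or ["allow"]

print(apply_controls(0.85, 1))    # ['cooling_off_24h', 'hold_for_rapid_review']
print(apply_controls(0.10, 400))  # ['allow']
```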
What about false positives?
Answer first: Reduce false positives by using customer segmentation and outcome feedback loops—not by lowering thresholds blindly.
Three tactics that reliably cut noise:
- Segment baselines (a crypto-native fintech customer shouldn’t be scored like a suburban accounting firm)
- Use progressive friction (start with a warning, escalate to hold only when multiple signals stack)
- Close the loop (investigation outcomes retrain models and tune thresholds)
This is where many programs stall: they deploy models, but they don’t operationalise learning. AI only stays sharp if outcomes flow back into training and rule tuning.
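A minimal sketch of that loop, assuming each alert records its segment, score, and investigation outcome; the retuning rule is deliberately simple to show the shape:

```python
from collections import defaultdict

outcomes = [  # (segment, alert_score, investigation_confirmed_bad)
    ("crypto_native", 0.62, False),
    ("crypto_native", 0.70, False),
    ("crypto_native", 0.91, True),
    ("accounting_firm", 0.55, True),
    ("accounting_firm", 0.58, False),
    ("accounting_firm", 0.83, True),
]

def retune_thresholds(outcomes, base=0.5, step=0.05):
    """Noisy segments earn a higher alert bar; precise ones a lower one."""
    stats = defaultdict(lambda: [0, 0])  # segment -> [alerts, confirmed]
    for segment, score, confirmed in outcomes:
        if score >= base:
            stats[segment][0] += 1
            stats[segment][1] += int(confirmed)
    return {
        seg: base + step if confirmed / alerts < 0.5 else base - step
        for seg, (alerts, confirmed) in stats.items()
    }

# crypto_native alerts were mostly noise, so its threshold rises;
# accounting_firm alerts were mostly real, so its threshold drops.
print(retune_thresholds(outcomes))
```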
What Australian financial institutions should do in Q1 2026
Answer first: Build an “extortion payment prevention” capability that sits across fraud, AML, and cyber—then measure it like a product.
If you’re responsible for risk, compliance, or payments, here’s a practical plan for the first quarter:
1) Map your ransomware payment paths
List the most likely ways customers could pay:
- domestic transfers to intermediaries
- international wires
- business debit cards
- crypto exchange funding
- third-party payment processors
Then document where you can observe signals and where you’re blind.
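One lightweight way to make that inventory explicit, with illustrative rail and signal names; the value is forcing blind spots into a backlog rather than an assumption:

```python
# Per rail: signals you can observe today vs known blind spots.
payment_paths = {
    "domestic_transfer":       {"observed": ["payee_age", "device", "session"],
                                "blind": []},
    "international_wire":      {"observed": ["beneficiary_country"],
                                "blind": ["device"]},
    "business_debit_card":     {"observed": ["merchant_category"],
                                "blind": ["session"]},
    "crypto_exchange_funding": {"observed": ["counterparty_type"],
                                "blind": ["wallet_destination"]},
    "third_party_processor":   {"observed": [],
                                "blind": ["payee_age", "device", "session"]},
}

for rail, view in payment_paths.items():
    if view["blind"]:
        print(f"{rail}: blind to {', '.join(view['blind'])}")
```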
2) Put AI where it changes outcomes
Prioritise AI models at points of no return:
- payee creation
- first-time large transfers
- exchange funding spikes
- rapid movement from fiat to crypto
3) Create a measurable playbook
Define metrics that track real prevention, not just alerts (a small computation sketch follows this list):
- time-to-detect (TTD) from payment initiation
- time-to-intervene (TTI) until hold/verification
- % of high-risk payments successfully reviewed before settlement
- confirmed prevented loss ($)
- false positive rate by segment
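A small sketch of computing the first few metrics, assuming each case records initiation, detection, and intervention timestamps; the data shown is made up:

```python
from datetime import datetime, timedelta

cases = [
    {"initiated": datetime(2025, 12, 22, 2, 14),
     "detected": datetime(2025, 12, 22, 2, 15),
     "intervened": datetime(2025, 12, 22, 2, 26),
     "reviewed_before_settlement": True,
     "prevented_loss": 180_000},
    {"initiated": datetime(2025, 12, 23, 9, 0),
     "detected": datetime(2025, 12, 23, 9, 40),
     "intervened": datetime(2025, 12, 23, 11, 5),
     "reviewed_before_settlement": False,
     "prevented_loss": 0},
]

def average(deltas):
    return sum(deltas, timedelta()) / len(deltas)

print("avg TTD:", average([c["detected"] - c["initiated"] for c in cases]))
print("avg TTI:", average([c["intervened"] - c["initiated"] for c in cases]))
reviewed = sum(c["reviewed_before_settlement"] for c in cases) / len(cases)
print(f"reviewed before settlement: {reviewed:.0%}")
print("confirmed prevented loss: $", sum(c["prevented_loss"] for c in cases))
```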
4) Train for December, not for May
Run tabletop exercises that assume:
- weekend approvals
- executive pressure
- partial system outages
- a compromised inbox pushing payment instructions
Ransomware response that only works in business hours isn’t a response plan—it’s wishful thinking.
People also ask: quick answers for internal stakeholders
Is a ransomware payment considered financial crime? Yes. It’s a value transfer to criminals and can trigger AML/CTF obligations, sanctions risk, and reporting duties.
Can AI detect ransomware payments if criminals use new wallets? Yes—because AI can score behavioural anomalies and conversion patterns, not just known wallet indicators.
Does the 2024 drop mean ransomware is going away? No. A drop in payments often means monetisation got harder, not that attacks stopped.
Where this fits in the AI in Finance and FinTech series
The broader theme of this series is simple: AI works best in finance when it’s tied to a concrete decision—approve, decline, hold, verify, escalate. Ransomware payments are a perfect example because the decision window is short, the stakes are high, and the signals are multi-channel.
If FinCEN’s headline trend is right—77% up in 2023, down in 2024—the story isn’t “problem solved.” It’s that financial institutions can materially change criminal economics when detection and intervention mature.
If you’re building AI fraud detection or modernising AML monitoring in 2026, treat ransomware payments as a flagship use case. It forces the right disciplines: cross-team workflows, explainable risk scores, and real-time controls.
Where do you have the biggest blind spot today—payee creation, crypto funding, or out-of-hours approvals?