AI scams are rising fast. Learn how banks and fintechs can detect deepfakes, stop APP fraud, and build consumer trust with responsible AI.

AI Scams in Finance: How Banks Can Fight Back
Consumer anxiety about AI scams is rising for a simple reason: the scams are getting better, faster, and cheaper to run.
A few years ago, a “bank fraud” phone call often had obvious tells—awkward scripts, generic details, and clunky email follow-ups. Now the same scam can arrive as a perfect voicemail in your CFO’s voice, a video call that looks like a familiar colleague, or a real-time chat that answers security questions convincingly. The reality? AI hasn’t created fraud, but it’s industrialised it.
This post is part of our AI in Finance and FinTech series, where we track how AI changes risk, trust, and product design in financial services—especially for Australian banks and fintechs. Here’s the stance I’ll take: we shouldn’t respond to AI scams by telling customers to “be careful.” Financial institutions need to treat this as a systems problem and build defences that can absorb it.
Why consumers are more worried about AI-powered fraud
Consumers are more concerned because AI scams don’t “look like scams” anymore—they look like everyday banking.
Fraudsters are using generative AI to improve every step of the funnel: targeting, persuasion, and impersonation. That means higher conversion rates and more attempts, which creates a feedback loop: more victims → more money → more tooling.
The three AI scam types hitting finance hardest
1) Deepfake impersonation (voice and video)
A scammer doesn’t need Hollywood-quality deepfakes to succeed. They just need “good enough” to get through a rushed moment: an urgent payment request, a password reset, a change of bank details, or a hurried identity check.
2) Hyper-personalised social engineering
AI helps attackers write messages that match your tone, role, industry jargon, and even local context. If you’re in Australia, expect more convincing references to common banks, common payment rails, and familiar lingo.
3) Synthetic identities and document fraud
Generative AI can create realistic ID images and supporting documents, while other models help “age” a profile with plausible digital footprints. This is a direct threat to onboarding, instant credit, and low-friction account opening.
Snippet you can share: AI scams succeed because they exploit speed and trust—two things modern fintech is designed to maximise.
Why the timing is brutal (December 2025)
Late December is peak season for scams: people travel, inboxes fill up, and response times slow down. On the business side, finance teams are closing out the year and processing vendor changes.
That combination—high transaction volume, distracted customers, and operational fatigue—means AI-driven social engineering has unusually good odds right now.
What “AI scams” look like inside a bank or fintech
Inside a financial institution, AI-enabled fraud isn’t one problem. It’s a cluster of failure points across identity, payments, and communications.
Here are patterns I’ve seen repeatedly across the industry.
Payment fraud: authorised push payment (APP) scams scale up
APP scams (where customers are tricked into sending money themselves) are particularly hard to stop because the customer authorises the transaction.
AI makes the persuasion step more effective:
- More believable urgency (“I’m on a call with the auditor right now…”)
- Better pretexting (“Your account was flagged; this is the verification transfer…”)
- More realistic conversation flow (bots that respond instantly, 24/7)
If your fraud strategy mostly focuses on unauthorised transactions, you’ll miss where losses are shifting.
Account takeover: the weakest link isn’t always the password
Fraudsters combine credential stuffing with AI-powered help:
- Real-time “agent” bots that walk victims through handing over OTPs
- Voice deepfakes to bypass call centre checks
- AI-written emails that trick staff into resetting access
The takeaway: the attacker’s advantage is coordination. They can run many parallel attempts cheaply, then double down on the ones that show signs of life.
Customer comms: brand impersonation becomes a product problem
When scammers can imitate your brand voice, your SMS style, and your support workflows, “fraud prevention” becomes partly a communications design challenge.
If customers can’t quickly distinguish real vs fake messages, you’ll see:
- Higher inbound call volume and longer hold times
- Lower digital engagement (people stop trusting in-app prompts)
- Worse NPS and churn after a single scary incident
How AI in finance can detect and stop AI scams
The best way to fight AI scams is to treat fraud as an adaptive system: monitor behaviour, verify intent, and apply friction only at high-risk moments instead of punishing everyone.
Here are the practical controls that work, and why.
Behavioural analytics beats static rules
Rules like “block if amount > $X” are easy to evade. Behavioural analytics is harder to evade because it looks at how the customer is acting, not just what they’re doing.
Effective AI-driven fraud detection models typically evaluate:
- Session behaviour (typing cadence, navigation path, time-to-complete)
- Device and network signals (new device, risky IP ranges, emulator use)
- Transaction context (new payee + urgent transfer + unusual hour)
- Relationship signals (first-time recipient, recent payee edits)
The point isn’t to “catch fraud with one magic signal.” It’s to stack weak signals into a strong decision.
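To make that concrete, here’s a minimal weak-signal stacking sketch in Python. The signal names, weights, and thresholds are illustrative assumptions, not a production model; real systems learn weights from labelled fraud outcomes.

```python
# Illustrative weights only; real systems learn these from labelled outcomes.
SIGNAL_WEIGHTS = {
    "new_device": 0.25,
    "new_payee": 0.20,
    "payee_edited_recently": 0.20,
    "unusual_hour": 0.15,
    "rushed_session": 0.15,           # time-to-complete far below the customer's norm
    "high_amount_for_customer": 0.30,
}
STEP_UP_THRESHOLD = 0.5   # assumed cut-offs for this sketch
BLOCK_THRESHOLD = 0.8

def score_transfer(signals: dict[str, bool]) -> tuple[float, str]:
    """Stack weak signals into one risk score and map it to an action."""
    score = sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))
    if score >= BLOCK_THRESHOLD:
        return score, "hold_and_review"
    if score >= STEP_UP_THRESHOLD:
        return score, "step_up_verification"
    return score, "allow"

# New payee + rushed session + unusual amount trips a step-up, not a hard block.
print(score_transfer({"new_payee": True, "rushed_session": True, "high_amount_for_customer": True}))
```

No single signal above would justify blocking a payment; the combination is what earns the extra check.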
Real-time intent checks reduce APP scam losses
For APP scams, the key is interrupting persuasion at the right moment.
What works:
- Just-in-time friction for new payees and high-risk transfers
- Plain-language warnings that match the scam pattern detected
- Step-up verification that can’t be socially engineered easily
Examples of step-up verification that’s tougher to manipulate:
- In-app biometric confirmation (not via phone call)
- Out-of-band prompts inside the banking app (not SMS)
- Confirmation that requires reading and responding to a specific, on-screen statement
If you’re relying on SMS OTP as your “strong auth,” you’re betting against social engineering. That’s a bad bet.
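As a rough sketch of what “just-in-time friction” can look like, the rule below only interrupts the risky combination (first-time payee plus a high-value or urgent transfer) and routes confirmation through an in-app biometric prompt rather than SMS. The payment fields, thresholds, and the notify_in_app hook are hypothetical, for illustration only.

```python
HIGH_VALUE_AUD = 5_000  # assumed threshold; tune per customer segment

def requires_step_up(payment: dict) -> bool:
    """Interrupt only the risky combination: first-time payee + high-value or urgent transfer."""
    new_payee = payment["payee_first_seen_days"] == 0
    high_value = payment["amount_aud"] >= HIGH_VALUE_AUD
    urgent = payment.get("flagged_urgent_by_customer", False)
    return new_payee and (high_value or urgent)

def confirm_payment(payment: dict, notify_in_app) -> str:
    """notify_in_app is a hypothetical hook that shows an in-app prompt and returns approval."""
    if not requires_step_up(payment):
        return "processed"
    # Out-of-band, in-app confirmation: the customer must read a specific scam
    # warning and re-approve with a biometric check, not an SMS code.
    approved = notify_in_app(
        customer_id=payment["customer_id"],
        message=("You're paying this person for the first time. Scammers create urgency. "
                 "Have you verified this payee through a channel you already trust?"),
        require_biometric=True,
    )
    return "processed" if approved else "held_for_review"
```

The design choice that matters here is the channel: the confirmation lives inside the app, where the scammer isn’t sitting on the line coaching the customer through it.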
Deepfake-aware call centre workflows
If voice impersonation is on your threat radar, your call centre script needs an update.
Strong practice looks like:
- Treating voice as non-secret (voice can be cloned)
- Using “challenge-response” questions that aren’t public or guessable
- Triggering step-up verification in-app when a caller requests sensitive actions
Snippet you can share: If a process can be completed entirely over voice, it can be completed by a deepfake.
Use AI to defend, but make it explainable to ops teams
Fraud teams don’t need a black box; they need an actionable reason to intervene.
A useful alert isn’t “model score 0.93.” It’s:
- “New payee added 3 minutes before transfer”
- “Device changed + abnormal navigation + unusual amount”
- “Customer previously never paid this category of merchant”
Explainability isn’t a PR feature. It’s how you reduce false positives and keep operations efficient.
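One lightweight way to get there is to attach plain-language reason codes to whichever signals fired, alongside the score. This is a sketch with made-up signal names, not any particular vendor’s API.

```python
# Map internal signal names to analyst-readable reasons (names are illustrative).
REASON_CODES = {
    "new_payee_recent": "New payee added minutes before the transfer",
    "device_change": "Device changed since last login",
    "abnormal_navigation": "Navigation path unlike this customer's usual sessions",
    "unusual_amount": "Amount far above this customer's typical transfers",
    "new_merchant_category": "Customer has never paid this category of merchant",
}

def build_alert(model_score: float, fired_signals: list[str]) -> dict:
    """Package a fraud alert with human-readable reasons, not just a score."""
    return {
        "score": round(model_score, 2),
        "reasons": [REASON_CODES[s] for s in fired_signals if s in REASON_CODES],
    }

# The analyst sees why the model is worried, which cuts time-to-decision and
# makes false positives easier to spot and feed back into model tuning.
alert = build_alert(0.93, ["new_payee_recent", "device_change", "unusual_amount"])
```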
Responsible AI: how to build trust while increasing security
Consumers aren’t just scared of scams; they’re also uneasy about how financial institutions use AI. Banks and fintechs need both: stronger fraud controls and clearer guardrails.
Transparency that actually helps customers
Most disclosures are written for legal cover, not comprehension. Customers need two clear messages:
- What you’re protecting (payments, identity, account access)
- How they’ll experience it (when they’ll see extra checks, what genuine messages look like)
A practical approach is to publish a simple “How we contact you” policy inside the app and repeat it during high-risk flows.
Data minimisation and model governance aren’t optional
AI fraud detection requires data. That doesn’t justify collecting everything.
What strong governance looks like:
- Clear retention windows for sensitive signals
- Access controls and audit trails for model features
- Regular bias testing (false positives that disproportionately hit certain groups)
- Model monitoring for drift (holiday season behaviour changes are real)
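That last point, drift monitoring, can start very small. One common approach is a population stability index (PSI) check that compares this week’s score or feature distribution against a baseline window; the sketch below assumes NumPy, and the thresholds in the comment are the usual rules of thumb rather than anything prescriptive.

```python
import numpy as np

def population_stability_index(baseline, current, bins: int = 10) -> float:
    """Rough drift check: compare this week's distribution to a baseline window."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(np.clip(current, edges[0], edges[-1]), bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)   # avoid log(0) on empty buckets
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Common rule of thumb: PSI above ~0.2 means the inputs have shifted enough
# (think December transfer spikes) that the model's scores deserve a review.
```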
This is where banks and fintechs can separate themselves: the winner won’t be the company with the most AI—it’ll be the one that customers trust.
Build “secure by design” customer journeys
If your product UX encourages speed at all costs, scammers will exploit it.
Design patterns that reduce scam success without wrecking conversion:
- Delay or “cooling-off” options for first-time large transfers (a minimal version is sketched after this list)
- Name/descriptor verification for new payees (where rails support it)
- Clearer payee management history (who changed what, when)
- Easy reporting: one tap to flag a suspicious message or call
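For the cooling-off pattern, a minimal version is just a delayed release with a cancel window. The field names, window, and threshold below are assumptions to illustrate the shape, not recommended values.

```python
from datetime import datetime, timedelta, timezone

COOLING_OFF = timedelta(hours=4)   # assumed window; tune against abandonment data
FIRST_TIME_LARGE_AUD = 10_000      # assumed threshold

def schedule_transfer(payment: dict) -> dict:
    """Hold first-time large transfers briefly so a customer pressured by a scammer can still cancel."""
    first_time = payment["payee_first_seen_days"] == 0
    large = payment["amount_aud"] >= FIRST_TIME_LARGE_AUD
    release_at = datetime.now(timezone.utc)
    if first_time and large:
        release_at += COOLING_OFF
    return {**payment, "release_at": release_at, "cancellable_until": release_at}
```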
What to do next: a 30-day plan for banks and fintechs
If you’re responsible for risk, fraud, product, or compliance, you don’t need a two-year transformation to start.
Week 1–2: Find where AI scams already show up
- Review top fraud loss categories and tag APP vs unauthorised
- Pull 20 recent scam cases and map the customer journey
- Identify top 3 “high-risk moments” (new payee, password reset, limit change)
Week 2–3: Add friction only where risk is high
- Implement step-up checks for new payees + high-value transfers
- Replace SMS-only OTP for sensitive actions with in-app prompts
- Update call centre procedures for deepfake resistance
Week 3–4: Improve customer trust signals
- Standardise outbound comms templates and educate customers in-app
- Add a “Verified communications” hub (recent official messages)
- Train frontline staff on AI scam patterns and escalation paths
A decent outcome after 30 days isn’t “fraud is solved.” It’s fewer successful scams, faster detection, and a clearer customer story about safety.
The real issue: trust is now a competitive advantage
AI scams are pushing consumers to question every notification, every call, every payment prompt. If your customers hesitate to use your app because they’re afraid of being tricked, that’s not just a fraud problem—it’s a growth problem.
Financial institutions can respond with the usual mix of warnings and fine print, but it won’t be enough. The better approach is practical: use AI in finance for fraud detection, invest in verification that resists social engineering, and communicate in ways customers can actually follow.
If scammers are using AI to scale trust abuse, banks and fintechs need to scale trust protection. What’s your organisation doing right now to make “verify it in-app” the default instinct for customers and staff?