A $63M raise signals AI fraud prevention is now core fintech infrastructure. Learn what to evaluate in AI risk platforms for payments and lending.

AI Fraud Prevention Gets $63M: What It Signals
A $63 million raise for an AI-powered fraud prevention and loan verification company isn’t just another fintech funding headline. It’s a signal that payments and lending risk teams are done “patching” fraud with rule tweaks and after-the-fact reviews. Investors are backing infrastructure that can keep up with real-time digital payments, synthetic identities, faster loan decisions, and growing regulatory pressure.
The source article (Finextra) was inaccessible at the time of writing (blocked by a 403), but the headline alone, “Informed.IQ raises $63m for AI-powered fraud prevention and loan verification”, is enough to unpack what’s happening in the market, because this funding pattern fits a clear trend I keep seeing across the AI in Payments & Fintech Infrastructure landscape: money is flowing toward platforms that reduce loss and operational friction at the same time.
Why $63M for AI fraud prevention is a big infrastructure bet
Answer first: Funding at this level is validation that AI fraud prevention is now treated as core financial infrastructure, not a “nice-to-have” add-on.
Fraud used to be something you handled at the edges: a rules engine here, a manual review team there, a few velocity checks bolted onto onboarding. That approach breaks down when:
- Payment rails are instant or near-instant, leaving little time to intervene
- Fraud tactics evolve weekly, not quarterly
- Identity signals are messy, especially with synthetic IDs and stolen credentials
- Customer expectations are unforgiving (false declines cost real revenue)
A meaningful raise suggests buyers are consolidating vendors and preferring platforms that can cover multiple risk moments: onboarding, account changes, payment initiation, chargeback disputes, and loan underwriting. The industry is shifting from “fraud tools” to risk decisioning systems.
One-liner you can quote: Fraud prevention isn’t a product feature anymore—it’s part of the uptime of your revenue.
The 2025 context: fraud pressure is rising while budgets tighten
December 2025 is a weird moment for risk leaders. Fraud is up (especially identity-related attacks), but headcount is under scrutiny. So the pitch that resonates is simple:
- Catch more bad actors
- Reduce manual reviews
- Approve more good customers
- Produce better audit trails
That last point matters more than many teams admit. Regulators and partner banks want explainability, governance, and documented controls, not just “the model said no.” Vendors that can package AI with operational controls will keep winning deals.
Fraud prevention and loan verification: why these two belong together
Answer first: Loan verification and fraud prevention are converging because most modern lending fraud is identity fraud, and identity fraud looks the same across lending and payments.
When a company focuses on both fraud prevention and loan verification, it’s pointing at the same core problem: trusting the customer is hard.
Here’s what “loan verification” typically includes in practice:
- Validating identity (KYC), device, and behavioral signals
- Verifying income and employment claims
- Checking bank account ownership and cashflow consistency
- Detecting document manipulation (pay stubs, statements)
- Screening for synthetic or manipulated identities
Now map that back to payments. The same identity and device signals that help you underwrite a loan also help you:
- Stop account takeover before a payout
- Prevent first-party fraud (“friendly fraud”) patterns
- Detect mule activity and suspicious beneficiary changes
- Reduce false positives by understanding normal behavior
Synthetic identity is the bridge threat
Synthetic identity fraud sits right between lending and payments. A bad actor can:
- Create a synthetic profile that passes basic checks
- Build “credit” or transaction history gradually
- Take out credit (loan, BNPL, credit line)
- Move funds through payment channels
- Disappear when repayment is due
That’s why funding is going to vendors that can connect signals across the lifecycle, not just score one transaction.
What AI actually changes in risk teams (and what it doesn’t)
Answer first: AI improves fraud prevention by learning patterns humans can’t encode as rules, but it only works when it’s paired with clear decision logic, monitoring, and feedback loops.
A lot of teams buy “AI fraud detection” expecting magic, and most get it wrong in the same way: they treat the model as the control.
The model is not the control. The control is the system you build around it.
Where AI helps the most
AI tends to outperform rules when the problem has:
- High-dimensional signals (device, network, behavior, transaction context)
- Rapidly evolving attacker behavior
- Complex interactions (e.g., a pattern across merchants, cards, and devices)
- Pressure to reduce false declines while keeping losses flat
In payments, the best results usually come from real-time risk scoring combined with step-up actions:
- Approve
- Challenge (2FA / step-up verification)
- Hold for review
- Decline
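As a minimal sketch (not a reference implementation), that decision layer can be as simple as a score-to-action mapping. The thresholds and the `hard_stop` flag below are invented for illustration:

```python
from enum import Enum

class Action(Enum):
    APPROVE = "approve"
    CHALLENGE = "challenge"  # 2FA / step-up verification
    HOLD = "hold"            # route to a manual review queue
    DECLINE = "decline"

def decide(risk_score: float, hard_stop: bool = False) -> Action:
    """Map a model risk score in [0, 1] to a step-up action.

    hard_stop stands in for explicit policy rules (sanctions hits,
    blocked instruments) that must override the model.
    """
    if hard_stop:
        return Action.DECLINE
    if risk_score < 0.20:   # thresholds are illustrative --
        return Action.APPROVE
    if risk_score < 0.60:   # tune them against your own loss
        return Action.CHALLENGE
    if risk_score < 0.85:   # and false-decline data
        return Action.HOLD
    return Action.DECLINE
```

Keeping thresholds in explicit policy code, rather than buried inside the model, is also what makes the simulation and audit requirements discussed later workable.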
In lending, AI supports income and identity verification at scale, especially where document and bank data can be inconsistent.
Where AI won’t save you
AI won’t fix:
- Broken data pipelines
- Missing labels (you don’t know what fraud looks like in your own systems)
- Slow dispute processes that delay ground truth
- Lack of operational capacity to respond to alerts
If your chargeback feedback arrives 60–90 days later, your model can still work—but your learning loop is slower, and your feature engineering needs to account for delayed outcomes.
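As a hedged illustration of what “account for delayed outcomes” can mean in practice, one common tactic is to train only on transactions old enough for their chargeback window to have closed. The column names and the 90-day window here are assumptions:

```python
import pandas as pd

CHARGEBACK_WINDOW_DAYS = 90  # assumed maturity window; check your scheme's rules

def matured_training_set(txns: pd.DataFrame, as_of: pd.Timestamp) -> pd.DataFrame:
    """Keep only transactions whose fraud label can be trusted.

    A transaction younger than the chargeback window may still flip
    from "good" to "fraud", so training on it injects label noise.
    Assumes columns: txn_time (datetime), is_fraud (0/1).
    """
    cutoff = as_of - pd.Timedelta(days=CHARGEBACK_WINDOW_DAYS)
    return txns[txns["txn_time"] <= cutoff]
```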
A practical rule: if you can’t explain how a decision gets audited, you can’t put it into production at scale.
A practical blueprint: how to evaluate AI fraud prevention vendors
Answer first: Evaluate AI fraud prevention platforms on outcomes, integration depth, and governance—not on model claims.
If you’re a fintech, bank, PSP, or lender assessing an AI-powered fraud prevention and loan verification platform, use a checklist that reflects infrastructure reality.
1) Outcome metrics that matter (and can be proven)
Ask for performance framed as measurable business impact:
- Fraud loss reduction (basis points of volume or $ saved)
- False decline rate (and revenue recovered)
- Manual review rate (and time-to-decision)
- Approval rate lift for good customers
- Time-to-detect and time-to-contain for new fraud attacks
Be wary of vanity metrics like “model accuracy” without context. Fraud is imbalanced; accuracy can look great while missing the only cases you care about.
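Here is a short sketch of metrics that hold up under that imbalance, assuming scikit-learn, labels of 1 for fraud, and `y_pred` as the binary decline decision:

```python
from sklearn.metrics import precision_score, recall_score

def fraud_metrics(y_true, y_pred):
    """Metrics that stay honest under class imbalance.

    With 0.1% fraud, a model that approves everything scores 99.9%
    accuracy while catching nothing -- so accuracy tells you little.
    """
    caught = recall_score(y_true, y_pred)      # share of fraud actually caught
    precise = precision_score(y_true, y_pred)  # share of declines that were fraud
    good_total = sum(1 for t in y_true if t == 0)
    false_declines = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return {
        "fraud_recall": caught,
        "decline_precision": precise,
        "false_decline_rate": false_declines / max(good_total, 1),
    }
```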
2) Coverage across the lifecycle
The platform should support multiple moments, not just checkout:
- Onboarding/KYC and account opening
- Login and session risk
- Payment initiation and payouts
- Account changes (email/phone/bank changes)
- Disputes, chargebacks, and collections signals
A vendor focusing on both fraud prevention and loan verification is often better positioned here—because they’re already building for multi-stage decisioning.
3) Data and integration reality
Great models fail with weak integrations. Validate:
- What data sources are supported (device, email/phone intelligence, bank data, bureau signals, internal history)
- Latency requirements (sub-second scoring for payments)
- SDK/API maturity and monitoring
- Backtesting support (can you replay last quarter and see impact?)
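If a vendor claims backtesting support, it helps to be precise about what you’re asking for. A minimal replay harness might look like the sketch below, assuming matured fraud labels and a `decide` function that maps a risk score to “approve” or “decline”:

```python
import pandas as pd

def replay(txns: pd.DataFrame, decide) -> dict:
    """Replay historical transactions through a candidate policy.

    Assumes columns: risk_score (float), amount (USD), is_fraud
    (0/1, matured label). Returns the dollar impact of the policy.
    """
    declined = txns["risk_score"].map(decide) == "decline"
    fraud = txns["is_fraud"] == 1
    return {
        "fraud_caught_usd": txns.loc[declined & fraud, "amount"].sum(),
        "fraud_missed_usd": txns.loc[~declined & fraud, "amount"].sum(),
        "good_volume_blocked_usd": txns.loc[declined & ~fraud, "amount"].sum(),
    }
```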
4) Governance, explainability, and controls
This is the part that wins enterprise deals in 2025.
Look for:
- Reason codes that map to policy
- Model monitoring (drift, stability, bias checks)
- Human-in-the-loop workflows (review queues, escalation)
- Audit logs for every decision and override
- Policy simulation tools (“If we change this threshold, what happens?”)
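Policy simulation is essentially the replay idea run across candidate thresholds. A hedged sketch, reusing the `replay()` harness from the previous section:

```python
def sweep_thresholds(txns, thresholds=(0.5, 0.6, 0.7, 0.8, 0.9)):
    """Answer "if we change this threshold, what happens?" offline."""
    for t in thresholds:
        policy = lambda score, t=t: "decline" if score >= t else "approve"
        print(f"threshold={t:.2f}", replay(txns, policy))
```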
How AI fraud prevention improves payment security without killing conversion
Answer first: The best AI fraud prevention systems improve payment security by using progressive trust—friction only when risk increases.
Most teams still treat risk as binary: either you’re safe or you’re not. That’s why customers get annoying, repetitive verification prompts.
A better pattern is progressive trust:
- Low-risk returning customer → frictionless approval
- Medium-risk signal change (new device) → step-up authentication
- High-risk pattern (suspicious network, velocity, beneficiary change) → hold/decline
This matters because conversion is a security metric. If your fraud system blocks too many legitimate users, you’ll lose revenue and invite workarounds that create new risk.
One-liner you can quote: A false decline is fraud against your own business.
Example scenario: payout fraud vs. legitimate urgency
Consider a marketplace handling contractor payouts (a common December spike due to seasonal gig work and year-end cash needs):
- Fraudster takes over an account and changes the bank destination
- Legit user requests a payout from a known device and usual location
A rule like “bank change = block” will stop the fraud, but it will also hurt real contractors who legitimately update their banking details at year-end. AI systems that combine behavioral history, device trust, and network signals can reduce blanket blocks and target the actual threat.
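As an illustration only (the weights and signal names are invented for this example), here is how combined signals can separate those two payout requests where the blanket rule cannot:

```python
def payout_risk(bank_changed: bool, known_device: bool,
                usual_location: bool, account_age_days: int) -> float:
    """Toy composite score for a payout request (weights illustrative).

    A bank change alone raises risk but does not force a block;
    a bank change from a new device in a new location does.
    """
    score = 0.0
    if bank_changed:
        score += 0.35
    if not known_device:
        score += 0.30
    if not usual_location:
        score += 0.20
    if account_age_days < 30:
        score += 0.15
    return min(score, 1.0)

# Legit contractor: bank change, but trusted device and usual location
assert payout_risk(True, True, True, 400) < 0.6    # step-up at most
# Takeover pattern: bank change + new device + new location
assert payout_risk(True, False, False, 400) >= 0.8  # hold/decline
```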
People also ask: quick answers risk teams need
Is AI fraud detection better than rules?
Yes for pattern discovery and adaptation, but rules are still useful as explicit policy controls. The winning setup is hybrid: AI scores + rules for hard stops and compliance.
What’s the difference between fraud prevention and loan verification?
Fraud prevention focuses on blocking malicious behavior across transactions and accounts. Loan verification focuses on validating borrower identity, income, and legitimacy. In modern fintech, they overlap heavily because identity fraud drives both.
How do you measure ROI for AI-powered fraud prevention?
Use a before/after or holdout test that tracks: loss rate, false declines, manual review costs, and approval rate. If a vendor can’t support controlled testing, ROI claims are guesswork.
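For illustration, the arithmetic behind that comparison can be as simple as the sketch below. The field names and per-review cost are assumptions, and both groups should be normalized to equal volume first:

```python
def net_benefit_from_holdout(control: dict, treatment: dict,
                             review_cost_usd: float = 5.0) -> float:
    """Net per-period benefit of the new system vs. the holdout group.

    Each dict carries: fraud_loss_usd, false_decline_revenue_usd,
    manual_reviews (count).
    """
    saved_loss = control["fraud_loss_usd"] - treatment["fraud_loss_usd"]
    recovered_rev = (control["false_decline_revenue_usd"]
                     - treatment["false_decline_revenue_usd"])
    saved_review_cost = (control["manual_reviews"]
                         - treatment["manual_reviews"]) * review_cost_usd
    return saved_loss + recovered_rev + saved_review_cost
```

Divide the result by what you pay the vendor over the same period and you have an ROI figure you can actually defend.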
What this funding signals for AI in payments & fintech infrastructure
Answer first: The $63M raise signals the market is consolidating around AI systems that combine risk accuracy, operational speed, and auditability across payments and lending.
For risk leaders, the message is blunt: fraud is becoming more automated and more professional. Fighting it with static tools is expensive and demoralizing. For product leaders, the message is just as clear: payment security can’t come at the cost of customer experience, especially when competition is one tap away.
If you’re building or buying in this space, take a hard look at your risk stack. Where are decisions still manual? Where are you blind because data isn’t connected? And where are you using “AI” without the governance to defend it?
If you want to pressure-test your current fraud and verification flow, the fastest starting point is mapping every risk decision to three things: data used, action taken, and how it gets audited. You’ll find the gaps quickly.
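One lightweight way to start that mapping is a record you fill in for every decision point. The schema below is a suggested starting shape, not a standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RiskDecisionRecord:
    """The three things every risk decision should map to, plus provenance."""
    decision_id: str
    data_used: list[str]    # e.g. ["device_fingerprint", "bank_account_age"]
    action_taken: str       # approve / challenge / hold / decline
    audit_trail: list[str]  # reason codes that map back to written policy
    decided_by: str         # model version or reviewer id
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
```

Any decision point where one of these fields is hard to fill in is one of those gaps.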
The next year of AI in payments & fintech infrastructure will reward teams that treat risk as infrastructure—measurable, monitorable, and built to scale. What part of your fraud stack would break first if your volume doubled overnight?