Smishing is shifting to rewards points, tax refunds, and fake shops. Learn how AI detects SMS phishing and stops OTP-to-wallet fraud faster.

AI vs Smishing: Stop Holiday Text Phishing Fast
A smishing message that used to look like “Your package is delayed” now looks like “Claim 8,000 mobile rewards points,” “You have an unclaimed tax refund,” or a slick product page offering a too-cheap deal that “just needs one verification code to complete checkout.” Same trap. Better camouflage.
This December, researchers have been tracking large waves of newly registered domains tied to SMS phishing kits that impersonate mobile carriers and state tax agencies. The twist is what happens after the victim enters their card details: the site asks for a one-time code that’s actually being used to add the victim’s card to Apple Pay or Google Wallet on a device the criminals control. Once the card is in a mobile wallet, fraud can happen fast—and the victim often won’t connect the dots until days or weeks later.
For leaders responsible for security, fraud, or customer trust, this is the moment to be opinionated: manual defenses and “user vigilance” aren’t enough during holiday surge conditions. AI in cybersecurity is built for exactly this problem—high-volume, high-variation, socially engineered attacks that move faster than traditional rule sets.
Why smishers switched to points, taxes, and fake retailers
Smishing works when the message feels routine and the action feels small. “Redeem points” and “confirm tax refund” are effective because they’re emotionally clean: no obvious panic, no threats, just a quick reward.
The holiday effect: more packages, more urgency, less attention
The last weeks of the year reliably bring an uptick in delivery-related scams. People are shopping in a hurry, tracking shipments, and tolerating more SMS noise from retailers and carriers. That background noise is exactly what attackers want.
Here’s what changes operationally during the holidays:
- Higher SMS volume from legitimate brands (order updates, delivery windows, OTPs)
- More first-time purchases from unfamiliar merchants
- More ad-driven shopping from social platforms and search results
- Less scrutiny because people are multitasking and stressed
Attackers don’t need a new technique. They need a better pretext.
The new “verification code” lie is the real payload
The most consequential part of these campaigns isn’t the fake points page—it’s the step where the victim is prompted to enter an OTP.
That OTP often isn’t for “checkout verification.” It’s a bank-issued code confirming something much riskier, such as:
- enrolling a card into a mobile wallet
- authenticating a high-risk action
- approving a new device token
A single socially engineered OTP can convert stolen card data into a wallet credential that’s harder for consumers to reason about and, in some cases, harder for fraud teams to unwind quickly.
Snippet-worthy truth: When a smishing site asks for a one-time code, assume you’re authorizing a different action than the one on the screen.
How the modern smishing pipeline actually works (and why it’s hard to catch)
Smishing has become a production line. The pipeline looks like this:
- Register thousands of lookalike domains (often in bursts)
- Deliver messages via iMessage, RCS, or SMS with short, casual copy
- Serve mobile-only phishing pages to reduce discovery by desktop scanners
- Collect identity + card data under a legitimate-looking flow
- Trigger an OTP event by attempting wallet enrollment
- Capture the OTP and complete wallet provisioning on a criminal-controlled device
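The first stage of that pipeline, bulk registration of lookalike domains, is also the most detectable from the defender's side. Below is a minimal sketch of burst detection over a newly-registered-domain feed. The brand names, thresholds, and feed format are illustrative assumptions, not from any particular vendor product; real deployments would tune the fuzzy-match threshold and window against their own data.

```python
from collections import defaultdict
from datetime import datetime, timedelta
from difflib import SequenceMatcher

# Hypothetical brand names to protect; a real system would pull these
# from a registry of your own domains and brands.
BRANDS = ["examplecarrier", "statetaxagency"]

def looks_like(domain: str, brand: str, threshold: float = 0.75) -> bool:
    """Fuzzy-match the registrable label against a protected brand name
    to catch typos and homoglyph-style lookalikes."""
    label = domain.split(".")[0]
    return SequenceMatcher(None, label, brand).ratio() >= threshold

def find_bursts(registrations, window_hours: int = 6, min_count: int = 3):
    """Flag brands whose lookalike domains were registered in a dense burst.
    `registrations` is a list of (domain, registered_at) tuples, e.g. parsed
    from a newly-registered-domain feed."""
    by_brand = defaultdict(list)
    for domain, ts in registrations:
        for brand in BRANDS:
            if looks_like(domain, brand):
                by_brand[brand].append((ts, domain))
    bursts = {}
    for brand, hits in by_brand.items():
        hits.sort()
        # Sliding window: any window_hours span holding >= min_count hits.
        for i in range(len(hits)):
            j = i
            while j < len(hits) and (hits[j][0] - hits[i][0]) <= timedelta(hours=window_hours):
                j += 1
            if j - i >= min_count:
                bursts[brand] = [d for _, d in hits[i:j]]
                break
    return bursts
```

The payoff of watching for bursts rather than individual domains is that no single registration has to look damning; the cluster does.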
Why “safe browsing” often misses fake stores
Fake e-commerce storefronts are an especially nasty variation because they don’t need to blast links to everyone. They can:
- run ads on major platforms
- mimic normal shopping UX
- look like a small niche retailer
- stay online for months if they’re not widely reported
And critically, the truly malicious behavior may only load during checkout, which means broad web crawlers can miss the bad step.
Why this hits enterprises even when “it’s consumer fraud”
Smishing isn’t just a personal problem. It becomes an enterprise problem when:
- employees reuse phones for work apps and personal shopping
- corporate cards are used for procurement or travel
- the help desk is flooded with “Is this message legit?” tickets
- brand impersonation drives customer churn and complaint volume
If your organization runs a mobile app, a loyalty program, or sends OTPs, you’re part of the ecosystem attackers are abusing—whether you like it or not.
Where AI in cybersecurity helps (and where it doesn’t)
AI is strong at pattern recognition across messy, high-volume signals. Smishing is messy and high-volume by design.
AI can detect smishing earlier by correlating weak signals
Traditional defenses often look for a single strong indicator (known bad domain, known sender, exact string match). Smishers avoid those. AI models can correlate multiple weak indicators to make a confident call.
Examples of signals that become powerful in combination:
- Domain registration bursts (many similar names registered within hours)
- Homoglyph and brand-typo patterns in URLs and subdomains
- Message templating that swaps only the brand name and reward amount
- Mobile-only rendering behavior and conditional redirects
- Infrastructure reuse (shared hosting patterns, TLS fingerprint similarity)
- User interaction anomalies (sudden OTP submissions following link clicks)
This is where an AI-driven threat detection program beats static rules: it adapts to variation without needing a human to handcraft the next rule.
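To make "correlating weak signals" concrete, here is a deliberately simplified scoring sketch. A production system would use a trained classifier (logistic regression, gradient boosting, or similar) rather than hand-picked weights; the feature names and weights below are illustrative assumptions that only show the shape of the idea, that several weak indicators together can cross a block threshold none would reach alone.

```python
# Illustrative weak indicators and weights -- not from any real model.
WEIGHTS = {
    "domain_age_days_lt_7": 0.30,   # registered in the last week
    "brand_typo_in_url": 0.25,      # homoglyph / typosquat pattern
    "mobile_only_redirect": 0.20,   # desktop scanners see a benign page
    "template_reuse": 0.15,         # same copy, brand/amount swapped
    "shared_hosting_cluster": 0.10, # infrastructure reuse
}

def smishing_score(signals: dict) -> float:
    """Sum the weights of the indicators that fired (0.0 to 1.0)."""
    return round(sum(w for name, w in WEIGHTS.items() if signals.get(name)), 2)

def verdict(signals: dict, block_at: float = 0.5) -> str:
    """Block when combined weak evidence crosses the threshold."""
    return "block" if smishing_score(signals) >= block_at else "allow"
```

Note that a single indicator (say, a brand typo at 0.25) stays under the 0.5 threshold, while three together clear it. That is the whole argument for correlation over single-indicator rules.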
AI can reduce wallet-enrollment fraud with risk-based challenges
If you’re a bank, issuer, wallet provider, or merchant processor, the OTP step is your choke point.
AI-driven fraud detection can score wallet enrollment events using features like:
- device reputation and device attestation signals
- geolocation mismatch (customer in one state, enrollment device elsewhere)
- velocity checks (multiple enrollments, multiple cards, rapid retries)
- behavioral biometrics in app flows
- historical customer patterns (first-time wallet add vs normal behavior)
Then you can step up controls only when needed, such as requiring:
- in-app confirmation (not SMS)
- a second factor that includes transaction context
- a short cooling-off period for first-time wallet provisioning
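The risk-scoring-plus-step-up flow above can be sketched in a few lines. The fields and thresholds here are hypothetical; a real issuer would derive them from device attestation, geo-IP, velocity counters, and customer history services, and would likely use a learned model rather than integer weights.

```python
from dataclasses import dataclass

@dataclass
class EnrollmentEvent:
    # Illustrative risk features for an "add card to wallet" attempt.
    device_trusted: bool          # device attestation / reputation
    geo_matches_customer: bool    # enrollment device vs customer location
    enrollments_last_24h: int     # velocity across cards/devices
    first_wallet_add: bool        # deviation from historical behavior

def required_challenge(ev: EnrollmentEvent) -> str:
    """Pick the lightest control that fits the risk, instead of sending
    a bare SMS OTP for every wallet enrollment."""
    risk = 0
    if not ev.device_trusted:
        risk += 2
    if not ev.geo_matches_customer:
        risk += 2
    if ev.enrollments_last_24h >= 3:  # rapid retries / multiple cards
        risk += 2
    if ev.first_wallet_add:
        risk += 1
    if risk >= 4:
        return "deny_and_review"
    if risk >= 2:
        return "in_app_confirmation"  # second factor with context, not SMS
    return "allow"
```

The design point is that the challenge escalates with risk: a trusted device doing something routine sails through, while an unknown device in the wrong state never gets the chance to farm an SMS code.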
I’ll take a clear stance here: any OTP message that doesn’t plainly state what action it authorizes is doing users a disservice. People can’t make good security decisions if we hide the decision.
Where AI won’t save you: bad messaging and bad UX
AI can flag and block a lot, but if your customer experience trains people to:
- click links in texts,
- treat OTPs as “normal,”
- rush through prompts,
…then attackers are working with the grain.
Security teams should partner with product teams to stop normalizing risky habits. That’s not an AI problem. That’s a design choice.
Practical steps: what to do this week (security, fraud, and IT)
Smishing spikes are seasonal. Your response shouldn’t be.
For security teams: build a smishing playbook that doesn’t depend on luck
A workable playbook is fast, boring, and measurable.
- Define reporting paths
  - One internal channel for employees to submit screenshots and suspicious links
- Automate enrichment
  - Domain age, registration patterns, hosting reputation, redirect chain capture
- Block at multiple layers
  - Secure web gateway, DNS filtering, mobile threat defense, EDR browser controls
- Close the loop
  - When a campaign is confirmed, notify staff with the exact lure being used
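The "automate enrichment" step is where most playbooks stall, so here is a minimal triage sketch. It assumes the enrichment data (WHOIS creation date, redirect chain, hosting reputation) has already been gathered by upstream tooling; the field names and thresholds are illustrative, not a standard schema.

```python
from datetime import datetime

def triage(report: dict, now: datetime) -> dict:
    """Score a reported link using pre-gathered enrichment data and
    decide whether it auto-blocks or goes to an analyst."""
    flags = []
    age_days = (now - report["domain_created"]).days
    if age_days < 14:
        flags.append("newly_registered_domain")
    if len(report.get("redirect_chain", [])) > 2:
        flags.append("long_redirect_chain")
    if report.get("hosting_reputation") == "poor":
        flags.append("bad_hosting_reputation")
    # Two or more independent flags: block now, tell the analyst after.
    action = "block_and_notify" if len(flags) >= 2 else "analyst_review"
    return {"flags": flags, "action": action}
```

Even a crude gate like this keeps analysts off the obvious cases during a holiday surge, which is the "fast, boring, and measurable" property the playbook needs.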
For fraud and IAM teams: treat wallet provisioning like a high-risk event
If your controls treat “add to wallet” as routine, criminals will keep farming OTPs.
Actionable improvements that reduce fraud without wrecking UX:
- Add explicit context to OTP prompts (what action, what wallet, what device)
- Step-up auth for first-time wallet enrollments
- Velocity limits on wallet enrollment attempts
- Customer notifications that arrive instantly (push + email, not just SMS)
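The first bullet, explicit context in OTP prompts, is cheap to implement. Here is a sketch of a message builder; the exact wording and parameters are assumptions, but the principle is from the argument above: name the action the code authorizes, so a victim staring at a fake checkout page can see the mismatch.

```python
def otp_message(code: str, action: str, wallet: str, device: str) -> str:
    """Build an OTP text that states exactly what it authorizes,
    rather than a bare 'Your code is 482913'."""
    return (
        f"{code} is your code to {action} via {wallet} on {device}. "
        "If you are not doing this right now, do NOT share this code."
    )
```

Compare `otp_message("482913", "add your card to a mobile wallet", "Apple Pay", "a new iPhone")` with the generic "Your verification code is 482913": only the former gives the customer a decision to make.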
For employee awareness: one rule that actually sticks
Most awareness guidance is too broad. People forget it.
Give employees one sticky rule for December:
- Never share a one-time code to “get rewards,” “confirm shipping,” or “claim a refund.” OTPs are for logging in or authorizing account changes—full stop.
Add a second rule if you must:
- Don’t use text-message links to fix “account issues.” Open the app or type the site manually.
For consumers (and for your customer comms): safer shopping habits that scale
If your brand is being impersonated, your customers need simple checks that don’t require security expertise.
Recommend steps like:
- treat extreme discounts as a red flag and stick to merchants you already know
- check store age and reputation before entering payment info
- use virtual cards or card controls when available
- enable real-time transaction alerts
- review statements weekly during peak shopping
“People also ask” answers you can reuse internally
Why are smishing attacks harder to detect than email phishing?
Because SMS has shorter content, fewer technical headers, and higher perceived legitimacy. Attackers also use mobile-only pages and fast domain churn.
Why do attackers want OTP codes if they already stole the card number?
Because many issuers require OTP verification to enroll a card into a mobile wallet. That turns raw card data into a reusable payment credential.
What’s the fastest way to reduce smishing impact in an organization?
Centralize reporting, automate triage, and use AI-assisted detection to identify campaign patterns early—before the same lure hits hundreds of phones.
What smart organizations do next
Smishing isn’t going away after the holidays. Attackers are testing lures now, learning what converts, and carrying the winners into Q1—especially “refund” themes as tax season ramps up.
In the AI in Cybersecurity series, a recurring theme is simple: defenders win when detection and response run at machine speed. Smishing campaigns are built for scale, so your countermeasures have to scale too.
If you’re responsible for security outcomes, pick one measurable improvement before year-end: reduce time-to-detection for smishing campaigns, or tighten controls around wallet enrollment and OTP abuse, or deploy AI-based anomaly detection for messaging and fraud events. Do one well. Then iterate.
What would change in your risk profile if smishing messages were blocked before your users ever saw them?