
Stop ISO Phishing: AI Tactics vs. Phantom Stealer
December is when finance teams are most overloaded: invoices to close, year-end reconciliations, bonus spreadsheets, vendor changes, and a steady stream of “please confirm” emails. Attackers love that timing because busy people click fast.
That’s exactly why the Phantom Stealer campaign (delivered through malicious ISO attachments inside ZIP files) is such a practical case study for this AI in Cybersecurity series. It’s not “advanced” in the movie-plot sense. It’s advanced in the way that matters: it blends into normal finance workflows, slips past basic controls, and steals the data criminals can monetize quickly.
Here’s the stance I’ll take: If you’re still treating email security as a signature-and-blocklist problem, you’re already behind. ISO phishing campaigns are built to look ordinary on the surface and only reveal themselves when you connect weak signals across email, endpoint, identity, and user behavior—exactly where AI-driven threat detection performs best.
What Phantom Stealer ISO phishing gets right (from the attacker’s view)
Attackers succeed when they align with your internal processes. The Phantom Stealer operation reported by Seqrite Labs (Operation MoneyMount-ISO) uses a “payment confirmation” lure aimed at finance and accounting, with procurement, legal, and payroll as secondary targets. That target selection isn’t random. Those teams:
- routinely open attachments from unfamiliar senders (vendors, banks, counterparties)
- are measured on throughput and responsiveness
- have direct access to sensitive systems and data
The ISO-in-ZIP chain is a bypass pattern, not a gimmick
The most important detail is the delivery format: a ZIP that contains an ISO. When the user opens the ISO, Windows mounts it as a virtual disc. From the user’s perspective, it feels like “opening a document.” From the attacker’s perspective, it’s a chance to package executables in a way that sidesteps both the user’s mental model (“executables are dangerous, documents are fine”) and, on unpatched or older systems, Mark-of-the-Web propagation to the files inside the image.
In this campaign, the mounted ISO includes a malicious execution path that loads Phantom Stealer via an embedded DLL (reported as CreativeAI.dll). That multi-stage design is doing two jobs:
- Reducing obvious indicators (no direct .exe attachment)
- Increasing survivability against simplistic attachment-based filtering
Phantom Stealer’s capability set maps to real financial loss
Information stealers are popular because they monetize fast. Phantom Stealer’s observed behaviors include theft of:
- browser passwords, cookies, and stored payment data
- cryptocurrency wallet extensions in Chromium-based browsers
- desktop wallet app data
- Discord authentication tokens
- files of interest
It also monitors clipboard content and logs keystrokes—two methods that are particularly dangerous in finance environments where staff copy/paste account numbers, payment references, and one-time codes.
Data exfiltration via Telegram bots and Discord webhooks is another choice that fits the current threat economy: it’s convenient for criminals, blends into normal internet traffic, and can be spun up quickly.
Snippet-worthy truth: Phishing isn’t “an email problem.” It’s a workflow problem—email is just the entry point.
Why traditional defenses miss ISO phishing (and why finance teams feel it first)
ISO phishing lands in an awkward gap between “email security” and “endpoint security.” Too many orgs still run those as separate stacks with separate teams and separate alert queues.
Here’s where conventional controls commonly fail:
1) Format-based detection doesn’t generalize
Blocking macros in Office documents helped—until attackers moved to LNK, HTML smuggling, container files (ISO/IMG), and multi-layer archives. If your policy is basically “block yesterday’s file type,” you’re playing whack-a-mole.
2) Sandbox detonation is less reliable than leaders assume
Phantom Stealer reportedly checks for virtualized/sandbox environments and can abort execution when it suspects analysis. That means “we detonate attachments” is not the safety net people think it is.
3) Finance email is inherently noisy
AP and procurement mailboxes contain:
- urgent requests
- unfamiliar domains
- attachment-heavy threads
- multilingual vendor communications
That’s a perfect habitat for social engineering, because legitimate anomalies are constant. Static rules generate either too many false positives (teams ignore alerts) or too many exceptions (attackers slip through).
4) The blast radius is bigger than email
Once a stealer lands, it targets browsers, tokens, local files, and credentials that open doors across systems. That’s how an “email incident” turns into:
- ERP compromise
- vendor payment diversion (BEC-style fraud)
- account takeover
- lateral movement via injected processes
How AI-driven email security stops Phantom-style attacks earlier
AI works here for one reason: it connects weak signals into a high-confidence decision. You don’t need one perfect indicator. You need 10 “slightly off” signals that, together, spell trouble.
Below are AI use cases that directly map to this Phantom Stealer pattern.
Behavioral analysis on sender + conversation context
A payment confirmation lure is easy to write. What’s harder to fake is relationship history.
AI models can score messages based on:
- sender reputation and sender “normalness” for that recipient/team
- whether the sender has appeared in prior threads with similar intent
- changes in communication style (tone, formatting, signature patterns)
- unusual send times for that vendor or department
This is where AI beats allowlists. An allowlist can’t tell you “this vendor’s account looks hijacked.” A behavioral model can.
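To make the “connect weak signals” idea concrete, here’s a minimal sketch of how signals like these might be combined into one score. The feature names, weights, and threshold are illustrative assumptions, not any vendor’s actual model.

```python
# Minimal sketch: combining weak sender/context signals into one risk score.
# Feature names, weights, and the example threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SenderContext:
    first_time_sender_for_recipient: bool   # no prior threads with this mailbox
    domain_age_days: int                    # from WHOIS/enrichment feed
    reply_to_mismatch: bool                 # Reply-To differs from From
    sent_outside_vendor_hours: bool         # unusual send time for this vendor
    style_drift_score: float                # 0..1, tone/signature change vs. history

def sender_risk(ctx: SenderContext) -> float:
    """Weighted sum of weak signals; each one alone is inconclusive."""
    score = 0.0
    score += 0.25 if ctx.first_time_sender_for_recipient else 0.0
    score += 0.20 if ctx.domain_age_days < 30 else 0.0
    score += 0.20 if ctx.reply_to_mismatch else 0.0
    score += 0.10 if ctx.sent_outside_vendor_hours else 0.0
    score += 0.25 * ctx.style_drift_score
    return min(score, 1.0)

if __name__ == "__main__":
    ctx = SenderContext(True, 12, True, True, 0.8)
    print(f"risk={sender_risk(ctx):.2f}")  # high enough to route to verification
```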
Attachment and container intelligence (ISO-in-ZIP as a risk stack)
The attachment isn’t just “a file.” It’s a structure.
AI-assisted inspection can treat multi-layer attachments as features:
- ZIP contains ISO (rare for legitimate finance workflows)
- ISO contains executable/DLL loading patterns
- file naming matches “payment confirmation” templates used in campaigns
- entropy and packing characteristics typical of malware staging
Even without “knowing Phantom,” the model can flag the delivery recipe as abnormal.
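As a rough illustration of treating the attachment as a structure, the sketch below pulls a few of those features out of a ZIP payload. The ISO9660 “CD001” signature check is standard; the feature names, extension lists, and lure keywords are assumptions for the example.

```python
# Minimal sketch: turning a multi-layer attachment into risk features,
# not a yes/no verdict. Feature names and keyword lists are illustrative.
import io
import zipfile

CONTAINER_EXTS = (".iso", ".img", ".vhd", ".vhdx")
EXEC_EXTS = (".exe", ".dll", ".scr", ".lnk", ".js", ".vbs")

def looks_like_iso(data: bytes) -> bool:
    # ISO9660 volume descriptors start at sector 16 (offset 32768); "CD001" sits at offset +1.
    return len(data) >= 32774 and data[32769:32774] == b"CD001"

def zip_attachment_features(zip_bytes: bytes) -> dict:
    feats = {
        "contains_disk_image": False,        # rare in legitimate finance workflows
        "archive_contains_executable": False,
        "payment_lure_filename": False,
    }
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            lower = name.lower()
            if lower.endswith(CONTAINER_EXTS) or looks_like_iso(zf.read(name)):
                feats["contains_disk_image"] = True
            if lower.endswith(EXEC_EXTS):
                feats["archive_contains_executable"] = True
            if any(k in lower for k in ("payment", "invoice", "swift", "remittance")):
                feats["payment_lure_filename"] = True
    return feats
```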
Click-time controls powered by real-time risk scoring
Most orgs still rely on “delivery-time” email filtering. That’s not enough. Finance teams open older messages, forward threads, and revisit attachments days later.
A practical AI pattern is continuous scoring:
- score at delivery
- rescore at click/open (with new intel)
- rescore when the attachment is transferred to endpoint
This reduces the window attackers exploit when detections lag behind campaign changes.
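A minimal sketch of that continuous-scoring loop, assuming a stored delivery-time verdict and an intel lookup that improves as the campaign ages (both are hypothetical interfaces, not a specific product’s API):

```python
# Minimal sketch: rescoring the same message at delivery, click, and endpoint
# hand-off. The event names, threshold, and retraction hook are assumptions.
import time

THRESHOLD = 0.7

def rescore(message_id: str, base_score: float, intel_lookup) -> float:
    """Re-evaluate a stored verdict against whatever intel exists right now."""
    bump = intel_lookup(message_id)   # e.g., campaign IOCs learned after delivery
    return min(base_score + bump, 1.0)

def on_event(event: str, message_id: str, base_score: float, intel_lookup) -> None:
    score = rescore(message_id, base_score, intel_lookup)
    if score >= THRESHOLD:
        print(f"{time.ctime()}: {event}: {message_id} rescored to {score:.2f} -> quarantine/retract")
    else:
        print(f"{time.ctime()}: {event}: {message_id} rescored to {score:.2f} -> allow, keep watching")

# Usage: the same message gets cleaner context as the campaign ages.
on_event("delivery", "msg-123", 0.35, lambda _m: 0.0)   # nothing known yet
on_event("click",    "msg-123", 0.35, lambda _m: 0.40)  # ISO-in-ZIP IOC published since
```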
Endpoint AI: stopping execution and exfiltration, not just email
If an ISO gets opened anyway, AI on the endpoint can still prevent the real damage by focusing on behaviors:
- suspicious DLL loading chains from mounted media
- keylogging and clipboard monitoring behaviors
- credential store access patterns that don’t fit user activity
- outbound connections to unusual endpoints and webhook-like traffic
Practical stance: Your goal isn’t perfect phishing prevention. It’s preventing credential theft and exfiltration when prevention fails.
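For illustration, a behavioral check along those lines might look roughly like this. The event schema (image path, drive type, loaded DLLs) is a hypothetical EDR export, not a specific product’s API; the CreativeAI.dll name comes from the reported campaign.

```python
# Minimal sketch: a behavioral check over endpoint process telemetry.
# The ProcessEvent schema is a hypothetical EDR export format.
from dataclasses import dataclass, field

@dataclass
class ProcessEvent:
    image_path: str                 # e.g. "E:\\payment_confirmation.exe"
    drive_type: str                 # "fixed", "removable", "cdrom" (a mounted ISO appears as cdrom)
    loaded_dlls: list = field(default_factory=list)
    reads_clipboard: bool = False
    touches_credential_stores: bool = False

def is_suspicious(ev: ProcessEvent) -> bool:
    from_mounted_media = ev.drive_type == "cdrom" and not ev.image_path.lower().startswith("c:\\")
    sideloads_dll = any(d.lower().endswith(".dll") and "\\windows\\" not in d.lower()
                        for d in ev.loaded_dlls)
    return from_mounted_media and (sideloads_dll or ev.reads_clipboard or ev.touches_credential_stores)

ev = ProcessEvent("E:\\payment_confirmation.exe", "cdrom",
                  loaded_dlls=["E:\\CreativeAI.dll"], reads_clipboard=True)
print(is_suspicious(ev))  # True: execution from a mounted image plus stealer-like behavior
```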
A playbook finance and security teams can actually run next week
Policies that look great in a slide deck often collapse under real AP workload. This playbook aims for controls that are strict where they should be strict, and automated where humans shouldn’t be deciding.
Step 1: Treat ISO/IMG/VHD attachments as “restricted by default”
If your organization genuinely needs ISO files, route them through an approved channel (ticketing portal, secure file transfer, internal repository). For email:
- quarantine ISO/IMG/VHD files, including those inside archives
- block mounting/execution from user-writable locations
- require admin approval for exceptions (with logging)
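A minimal sketch of the “restricted by default” rule, assuming a gateway hook that hands you the attachment name and bytes. The extension list and the single level of archive recursion are illustrative policy choices, not a recommendation to stop there.

```python
# Minimal sketch: quarantine disk-image attachments, even inside a ZIP.
import io
import zipfile

RESTRICTED = (".iso", ".img", ".vhd", ".vhdx")

def should_quarantine(filename: str, payload: bytes) -> bool:
    name = filename.lower()
    if name.endswith(RESTRICTED):
        return True
    if name.endswith(".zip") and zipfile.is_zipfile(io.BytesIO(payload)):
        with zipfile.ZipFile(io.BytesIO(payload)) as zf:
            return any(member.lower().endswith(RESTRICTED) for member in zf.namelist())
    return False

# Usage: build a tiny in-memory ZIP that hides an ISO, as in the campaign.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("payment_confirmation.iso", b"\x00" * 10)
print(should_quarantine("payment_confirmation.zip", buf.getvalue()))  # True -> approved channel only
```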
Step 2: Add “finance workflow verification” for payment-related language
Security teams often miss a simple win: special handling for payment intent.
Use AI-based phishing detection or NLP rules to flag phrases like:
- “confirm transfer”
- “payment advice”
- “invoice correction”
- “bank details update”
Then route to a lightweight verification step:
- require vendor callback using a known number
- require second-person review for bank detail changes
- confirm via ERP/vendor portal rather than email reply
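Here’s a rough sketch of that flag-and-route step. The phrase patterns mirror the examples above; the regex rules and route names stand in for whatever NLP model or classifier you actually use.

```python
# Minimal sketch: flag payment-intent language and route it to verification
# instead of silent delivery. Patterns and route names are illustrative.
import re

PAYMENT_INTENT = [
    r"confirm (the )?transfer",
    r"payment advice",
    r"invoice correction",
    r"bank details? update",
    r"updated (bank|account) details",
]

def route_message(subject: str, body: str) -> str:
    text = f"{subject}\n{body}".lower()
    if any(re.search(pattern, text) for pattern in PAYMENT_INTENT):
        return "verification_queue"   # vendor callback + second-person review
    return "normal_delivery"

print(route_message("Payment advice - December", "Please confirm transfer today."))
# -> verification_queue
```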
Step 3: Monitor for stealer outcomes (tokens, browsers, webhooks)
Stealers don’t behave like ransomware. They behave like “quiet theft.” So build detections around likely outcomes:
- abnormal access to browser credential stores
- creation/use of new browser extensions at scale
- suspicious outbound traffic patterns consistent with webhooks
- unexpected FTP connections from finance endpoints
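As one concrete example, the sketch below flags a non-browser process touching Chromium credential-store files. The telemetry format is hypothetical; the file names (“Login Data”, “Cookies”) are the commonly targeted Chromium artifacts.

```python
# Minimal sketch: alert when a non-browser process reads browser credential stores.
# The (process_name, file_path) telemetry format is a hypothetical EDR export.
BROWSER_PROCESSES = {"chrome.exe", "msedge.exe", "brave.exe", "firefox.exe"}
SENSITIVE_FILES = ("login data", "cookies", "web data", "local state")

def stealer_outcome_alert(process_name: str, file_path: str) -> bool:
    path = file_path.lower()
    touches_store = "user data" in path and any(f in path for f in SENSITIVE_FILES)
    return touches_store and process_name.lower() not in BROWSER_PROCESSES

print(stealer_outcome_alert(
    "payment_confirmation.exe",
    r"C:\Users\ap\AppData\Local\Google\Chrome\User Data\Default\Login Data",
))  # True: a finance-lure binary reading saved passwords
```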
Step 4: Shorten response time with AI-assisted triage
When an employee reports “I clicked it,” the clock matters.
An AI-supported SOC workflow should auto-collect:
- the full email object (headers, routing, authentication results)
- attachment lineage (ZIP → ISO → executed binaries)
- endpoint timeline (mount event, process tree, network calls)
- identity events (new sessions, MFA prompts, token refresh anomalies)
Then it should propose actions (isolate host, revoke sessions, rotate credentials, block indicators) with clear confidence scoring.
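A minimal sketch of what that collect-and-propose step could look like, with field names, action strings, and confidence weights as placeholder assumptions about what your SOAR/EDR integration exposes:

```python
# Minimal sketch: a triage bundle plus proposed actions with a confidence score.
from dataclasses import dataclass

@dataclass
class TriageBundle:
    headers: dict
    attachment_chain: list          # e.g. ["report.zip", "report.iso", "payment.exe"]
    endpoint_timeline: list         # mount event, process tree, network calls
    identity_events: list           # new sessions, MFA prompts, token refreshes
    confidence: float = 0.0

def propose_actions(bundle: TriageBundle) -> list:
    actions = []
    if any(item.lower().endswith((".iso", ".img")) for item in bundle.attachment_chain):
        bundle.confidence += 0.4
        actions.append("isolate_host")
    if bundle.identity_events:
        bundle.confidence += 0.3
        actions += ["revoke_sessions", "rotate_credentials"]
    if bundle.endpoint_timeline:
        bundle.confidence += 0.2
        actions.append("block_indicators")
    return actions

bundle = TriageBundle({}, ["report.zip", "report.iso", "payment.exe"],
                      ["mounted E:", "payment.exe -> CreativeAI.dll"], ["new session: ap-user"])
print(propose_actions(bundle), f"confidence={bundle.confidence:.1f}")
```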
Step 5: Train for the real failure mode: “I thought it was normal”
Most phishing training over-focuses on typos and “suspicious links.” ISO phishing is different. The user may see:
- a believable subject line
- a “document-like” attachment flow
- a mounted drive that looks official
Update training scripts for finance teams to include:
- “Why would a vendor send an ISO for a payment confirmation?”
- “If it mounts a drive, stop and report.”
- “Payment confirmation should be verified in-system, not via attachment.”
Where this fits in the bigger AI-in-Cybersecurity story
This Phantom Stealer campaign is one more proof that defenders need to detect patterns rather than chase samples. The same reporting also highlights other Russia-focused phishing activity using LNK files, PowerShell downloaders, and open-source C2 frameworks. The delivery mechanism shifts; the operational intent stays consistent.
AI in cybersecurity is at its best when it does three things reliably:
- Finds anomalies humans won’t notice across email, endpoint, and identity
- Reduces alert fatigue by ranking risk with context
- Automates containment fast enough to beat credential theft and exfiltration
If you’re evaluating AI-powered email security or broader AI-driven threat detection, use a Phantom-style scenario as your test case. Ask vendors (and your internal team) to walk through exactly what would happen at delivery time, click time, execution time, and exfiltration time. The gaps become obvious fast.
Most companies get this wrong by over-investing in perimeter filtering and under-investing in behavioral detection + response speed. Fix that, and ISO phishing becomes a manageable problem instead of a recurring fire drill.
Next step if you want to pressure-test your current defenses: run an internal tabletop exercise for “payment confirmation ISO attachment,” include finance leadership, and measure how long it takes to (1) detect, (2) isolate, (3) revoke tokens, and (4) confirm no fraudulent payments were initiated. Whatever that time is today—cut it in half.
And if you’re betting on 2026 being calmer for phishing, don’t. What changes is the packaging. The social engineering stays.