LANDFALL shows how a single image can deliver Android spyware. Learn how AI-driven threat detection spots zero-day behavior and automates response.

AI Detection Lessons From LANDFALL Android Spyware
Most companies still treat mobile security like a patch-management problem: “If devices are updated, we’re good.” LANDFALL is a clean counterexample.
Unit 42 uncovered LANDFALL, a commercial-grade Android spyware family delivered through malformed DNG image files that exploited a Samsung zero-day, CVE-2025-21042. The campaign was active months before the patch (mid-2024 through early 2025), likely delivered via WhatsApp-looking images, and built for deep surveillance: microphone recording, location tracking, contacts, call logs, photos—the full kit.
Even better (or worse, depending on your job): this wasn’t a noisy commodity trojan. It looks like a focused, private-sector offensive actor (PSOA) operation aimed at targets in parts of the Middle East. That’s exactly the sort of intrusion where AI in cybersecurity earns its keep—because signatures arrive late, and “just patch” doesn’t help when exploitation happens before the CVE is public.
LANDFALL in one sentence: a photo that installs spyware
Answer first: LANDFALL turned an everyday workflow—receiving an image—into an infection path by abusing a vulnerability in Samsung’s image-processing library.
The delivery mechanism matters because it’s a pattern: attackers keep aiming at media parsing (images, video, audio) to get code execution during “normal” handling by apps and OS components.
Here’s the simplified LANDFALL chain described by Unit 42:
- A target receives a malformed DNG (Digital Negative) file that looks like a typical WhatsApp image.
- The image triggers exploitation of CVE-2025-21042 in Samsung's libimagecodec.quram.so.
- The file contains an appended ZIP archive that drops spyware components.
- A first-stage loader (b.so, referred to in strings as "Bridge Head") executes and reaches out to command-and-control (C2).
- A helper module (l.so) can manipulate SELinux policy, enabling elevated access and persistence.
Two details are especially relevant for defenders:
- The malware lived inside a “valid-looking” image container. That breaks a lot of legacy assumptions in email/web/mobile filtering.
- It appears designed for specific Samsung Galaxy models (strings reference S22/S23/S24 and foldables), which suggests careful QA and targeting—not spray-and-pray.
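One defender-side takeaway from the "valid-looking image container" detail is that polyglot files can often be caught with cheap structural checks. The sketch below is illustrative only (it is not how Samsung or WhatsApp parse DNG files): it assumes a TIFF/DNG-style header and simply scans for a ZIP local-file-header signature appended after the image data.

```python
# Illustrative check for a ZIP archive appended to an image file.
# A clean DNG/TIFF should not contain ZIP local-file-header magic
# (PK\x03\x04) trailing the image payload; image/ZIP polyglots do.

ZIP_MAGIC = b"PK\x03\x04"

def find_appended_zip(data: bytes, skip: int = 4) -> int:
    """Return the offset of a ZIP signature past the file header, or -1.

    `skip` ignores the first few bytes so a file that simply *is* a ZIP
    does not trigger; we only care about appended archives.
    """
    return data.find(ZIP_MAGIC, skip)

def looks_like_polyglot(data: bytes) -> bool:
    # TIFF/DNG files start with a little-endian ("II") or
    # big-endian ("MM") TIFF byte-order marker.
    is_tiff = data[:2] in (b"II", b"MM")
    return is_tiff and find_appended_zip(data) != -1
```

A check like this belongs in a mail-gateway or MDM file-scanning pipeline; it will not stop every polyglot technique, but it makes the lazy variants expensive.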
Why patching wasn’t enough (and why AI detection matters)
Answer first: Patching closes the door after the lock is known and replaced; LANDFALL used the door while it was still invisible.
Samsung patched CVE-2025-21042 in April 2025, and Unit 42 notes there’s no ongoing risk for updated devices. That’s good news for consumers, but it doesn’t solve the enterprise security problem:
- The campaign was active in mid-2024, long before defenders had public write-ups, IOCs, or clear detection guidance.
- The exploit was in the wild prior to the patch, which means compromise can occur in the “unknown unknowns” window.
- Mobile fleets are messy. Even disciplined orgs have exceptions: bring-your-own-device, travel phones, contractor devices, delayed carrier updates.
This is where AI-driven threat detection is not marketing fluff—it’s operationally necessary.
What AI can catch when signatures can’t
LANDFALL’s tradecraft leaves traces that don’t depend on knowing the CVE.
AI-based detection can focus on behavioral anomalies and multi-signal correlation, such as:
- Unusual native library loads (.so modules) initiated by a media-parsing event
- A process suddenly using LD_PRELOAD execution paths in contexts where it shouldn't
- Suspicious access to app-private paths like /data/data/.../files/ used as staging
- A "benign binary" (/system/bin/id) invoked with unexpected environment variables
- TLS connections with certificate pinning to new, low-reputation domains on non-standard ephemeral ports
- File system activity around messaging media directories (e.g., monitoring WhatsApp Media paths)
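To make "multi-signal correlation" concrete, here is a minimal sketch that scores a device's event stream for an in-order LANDFALL-style chain: media parsed, native library loaded, outbound TLS to a new destination, sensitive data access. The event names and the five-minute window are assumptions for illustration, not any product's schema.

```python
# Score how much of a suspect event sequence appears, in order,
# within a short time window. Partial matches score between 0 and 1.
from dataclasses import dataclass

SUSPECT_SEQUENCE = ["media_parsed", "native_lib_load",
                    "new_tls_dest", "sensitive_read"]

@dataclass
class Event:
    ts: float   # seconds since epoch
    kind: str   # e.g. "media_parsed"

def sequence_score(events: list[Event], window: float = 300.0) -> float:
    """Fraction of SUSPECT_SEQUENCE observed in order within `window` seconds."""
    events = sorted(events, key=lambda e: e.ts)
    idx, start_ts = 0, None
    for e in events:
        if idx < len(SUSPECT_SEQUENCE) and e.kind == SUSPECT_SEQUENCE[idx]:
            if start_ts is None:
                start_ts = e.ts
            if e.ts - start_ts <= window:
                idx += 1
    return idx / len(SUSPECT_SEQUENCE)
```

In a real deployment the scoring would come from a trained model over richer telemetry, but the shape of the problem is the same: individually weak signals become high-confidence when they line up in time.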
The reality? A strong mobile defense program looks less like “we block bad hashes” and more like “we detect impossible sequences of actions.” AI is well-suited to spotting those sequences.
The DNG exploit trend: not a one-off, a recurring class
Answer first: DNG parsing bugs have become a repeatable entry point for mobile spyware, across both Android and iOS.
LANDFALL sits in a broader 2025 storyline:
- CVE-2025-21042 (Samsung) — exploited in LANDFALL, patched April 2025
- CVE-2025-21043 (Samsung) — another DNG-related issue in the same library, patched September 2025
- CVE-2025-43300 (Apple iOS) — DNG parsing zero-day patched August 2025
- CVE-2025-55177 (WhatsApp) — chained with Apple’s DNG bug for zero-click delivery in sophisticated attacks
This matters for security leaders because it pushes a practical stance:
Treat “image parsing” as an exposed attack surface, not as harmless content handling.
If you’re running a high-risk program (execs, journalists, regional operations, government-adjacent work), media-based exploitation should be part of your threat model the same way phishing and credential theft are.
What LANDFALL tells us about commercial spyware operations
Answer first: LANDFALL looks like a paid capability with modular design, targeting discipline, and infrastructure built to blend in.
Unit 42 describes LANDFALL as commercial grade, likely tied to private-sector offensive actors (PSOAs), and used in targeted intrusion activity in the Middle East. They did not publicly attribute it to a named vendor, but noted overlaps in infrastructure patterns with known regional activity.
From a defender perspective, “commercial grade” usually implies:
- Modularity: the first-stage loader is lightweight and pulls capabilities on demand
- Operational security: cleanup routines, anti-debug checks (Frida/Xposed), and selective targeting
- Persistence and privilege strategy: here, SELinux policy manipulation is a big red flag for intent
A practical risk framing for enterprises
Most orgs don’t need to assume everyone is a LANDFALL target. But you do need to assume some roles are:
- Executive leadership and their assistants
- Legal, compliance, and negotiation teams
- Regional operations in geopolitically tense environments
- Journalists, researchers, and NGO partners using corporate devices
That’s why I like a tiered approach: apply your strictest mobile controls to the highest-risk users first, then expand.
How to use AI to defend against exploit chains like LANDFALL
Answer first: The winning strategy is to combine AI-based anomaly detection with automated response playbooks—because exploit chains move faster than human triage.
Here’s a practical blueprint you can implement without pretending you can “predict every zero-day.”
1) Build a mobile-focused telemetry baseline (then let AI flag deviations)
You can’t detect “weird” if you don’t know “normal.” At minimum, baseline:
- Typical outbound destinations and ports per device group
- Normal library loading and process tree patterns for your managed apps
- Frequency of access to sensitive data stores (contacts, call logs, microphone)
Then apply machine learning to identify:
- Rare sequences (media opened → native code execution → new outbound TLS → data access)
- Rare destinations (new domains with clean-looking names and short lifetimes)
- Rare privilege shifts (SELinux-related operations, abnormal file labeling changes)
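A toy version of "let AI flag deviations" can be built from nothing more than transition frequencies. The sketch below counts event bigrams in a historical baseline and scores new sequences by how rare their transitions are; a production system would use a learned model, but the baseline-then-deviation logic is the same. All event names here are hypothetical.

```python
# Frequency-based rarity scoring over event bigrams.
# Transitions never (or seldom) seen in the baseline score high.
from collections import Counter

def build_baseline(sequences: list[list[str]]) -> Counter:
    """Count event bigrams across historical per-device sequences."""
    counts = Counter()
    for seq in sequences:
        counts.update(zip(seq, seq[1:]))
    return counts

def anomaly_score(seq: list[str], baseline: Counter) -> float:
    """Mean rarity of each transition: 1 / (1 + baseline count)."""
    bigrams = list(zip(seq, seq[1:]))
    if not bigrams:
        return 0.0
    return sum(1.0 / (1 + baseline[b]) for b in bigrams) / len(bigrams)
```

The point of the sketch: a sequence your fleet has produced thousands of times scores near zero, while "media opened, then native code execution" on a device group that has never produced that transition scores near one.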
2) Prioritize “impossible” behavior over “known bad”
LANDFALL is a reminder that “known bad” arrives late.
Policies that are high-signal and low-noise:
- Block or isolate apps/processes that execute with LD_PRELOAD unexpectedly
- Alert on new app-private staging directories with executable artifacts (.so, .dex)
- Alert on repeated failed network beacons with structured POST bodies and fixed user agents
3) Automate containment for mobile anomalies
If an AI model flags a high-confidence exploit chain pattern, your playbook should be able to do something immediately:
- Quarantine the device from corporate resources (conditional access)
- Revoke tokens and re-auth sessions (SSO/IdP)
- Snapshot triage artifacts (device logs, network telemetry, app inventory)
- Force OS update compliance checks before re-admission
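The playbook above can be wired up as a simple ordered pipeline gated on model confidence. The action functions below are placeholders (your MDM, IdP, and SIEM APIs go where the stubs are); the point is the ordering and the fact that containment fires automatically above a threshold.

```python
# Containment playbook skeleton: run an ordered list of actions
# automatically when detection confidence clears a threshold.
from typing import Callable

def make_playbook(actions: list, threshold: float = 0.9) -> Callable:
    def run(device_id: str, confidence: float) -> list:
        if confidence < threshold:
            return []   # below threshold: alert for human triage, no auto-containment
        return [act(device_id) for act in actions]
    return run

# Placeholder actions — replace with real MDM/IdP/SIEM calls.
def quarantine(dev): return f"quarantined {dev}"
def revoke_tokens(dev): return f"revoked tokens for {dev}"
def snapshot(dev): return f"snapshotted triage artifacts for {dev}"
def require_update(dev): return f"update compliance check queued for {dev}"
```

Quarantine deliberately runs first: cutting corporate access buys time for every slower step that follows.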
Speed matters. Commercial spyware operators don’t wait politely while you open a ticket.
4) Hunt for the “image-to-execution” pivot
If you run threat hunting, add a hypothesis that fits LANDFALL-style tradecraft:
- “A media file led to code execution and C2 within minutes.”
Then look for:
- New executable files written shortly after media receipt
- Messaging app media directories involved in follow-on activity
- Short-lived C2 domains that look like generic content sites
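The first two hunting leads reduce to a join over file events: executables written shortly after media arrived. A minimal sketch, assuming file-event records with device, timestamp, and path fields (the record shape, directory list, and ten-minute window are all illustrative):

```python
# Hunting sketch: flag devices where an executable artifact (.so/.dex)
# was written within `window_s` seconds of a media file landing in a
# messaging app's media directory.

MEDIA_DIRS = ("/sdcard/WhatsApp/Media/",
              "/storage/emulated/0/WhatsApp/Media/")

def hunt(file_events: list, window_s: int = 600) -> list:
    """file_events: dicts with 'device', 'ts' (epoch seconds), 'path'."""
    media = [(e["device"], e["ts"]) for e in file_events
             if e["path"].startswith(MEDIA_DIRS)]
    execs = [(e["device"], e["ts"]) for e in file_events
             if e["path"].endswith((".so", ".dex"))]
    hits = set()
    for m_dev, m_ts in media:
        for x_dev, x_ts in execs:
            if m_dev == x_dev and 0 <= x_ts - m_ts <= window_s:
                hits.add(m_dev)
    return sorted(hits)
```

In practice you would run the equivalent query in your SIEM over mobile telemetry; the value of writing it down is that the hypothesis becomes testable instead of vibes.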
People also ask: “Are Samsung users safe now?”
Answer first: Updated Samsung devices are protected from the specific LANDFALL exploit using CVE-2025-21042, which was patched in April 2025.
The bigger issue isn’t this one CVE. It’s the repeat pattern: DNG and media parsing vulnerabilities keep showing up, and attackers keep investing in zero-click delivery.
So the actionable question for security teams becomes: How quickly can you detect exploitation behavior even when the vulnerability is unknown? That’s exactly the kind of problem AI in cybersecurity is built to handle.
What to do this quarter if you want fewer mobile surprises
Answer first: Focus on controls that reduce the blast radius of a zero-day and shorten your time-to-detection.
A realistic next-90-days checklist:
- Enforce OS update minimums for managed devices and block access for laggards
- Tier your mobile policies: stronger controls for high-risk roles (execs, travel, sensitive regions)
- Add AI-driven anomaly detection for mobile network behavior and process execution signals
- Integrate mobile signals into your SOC so unusual device events create actionable incidents
- Run a tabletop exercise for “zero-click spyware suspected” with clear containment steps
If you’re leading security, I’d push for one measurable metric: median time from suspicious mobile anomaly to containment. If it’s hours (or days), that’s a gap LANDFALL-class operators can drive a truck through.
Most enterprises didn’t miss LANDFALL because they were careless. They missed it because the window between “exploited in the wild” and “well-understood publicly” is where traditional detection struggles. AI-driven threat detection closes that window—by watching behavior, correlating weak signals, and triggering containment before spyware settles in.
Forward-looking question: when the next media-parsing zero-day hits, will your security stack wait for IOCs—or will it catch the exploit chain while it’s still unfolding?