
LANDFALL Spyware: How AI Catches Zero-Click Images
A single image file can be a full malware installer. That’s not a metaphor—Unit 42 documented a commercial-grade Android spyware family (LANDFALL) delivered through malformed DNG images that exploited a Samsung zero-day (CVE-2025-21042).
Most security teams still treat mobile messaging attachments like “user data,” not “executable risk.” LANDFALL is what happens when attackers exploit that blind spot: an image arrives via a chat app, the device parses it, and spyware components drop onto disk without the victim tapping anything.
This post is part of our AI in Cybersecurity series, and I’m going to take a stance: enterprises won’t patch their way out of zero-click mobile threats. You need AI-based threat detection that spots exploit delivery patterns early—especially when the payload is hiding inside “normal” formats like images.
What LANDFALL tells us about modern mobile intrusions
Answer first: LANDFALL shows that attackers are treating mobile devices—especially employee Samsung phones—as high-value endpoints, using commercial spyware tradecraft and zero-day exploit chains to get there.
Unit 42 found LANDFALL embedded in malformed DNG (Digital Negative) image files. The files were crafted to exploit a bug in Samsung’s image processing library (libimagecodec.quram.so) tracked as CVE-2025-21042 (Samsung’s SVE-2024-1969). The campaign was active mid-2024 through early 2025, months before Samsung patched the issue in April 2025.
Two details matter for defenders:
- The delivery format is “boring.” The samples had names like IMG-20240723-WA0000.jpg and WhatsApp Image 2025-02-10...jpeg, implying distribution through WhatsApp media flows.
- The payload is modular. The malicious DNG contained an appended ZIP that dropped native shared objects. The first-stage loader (b.so) then staged and executed follow-on modules, including a SELinux policy manipulator (l.so) designed to weaken Android’s mandatory access controls.
Even though the specific CVE is patched, the pattern is the story: image parsing bugs + messaging delivery + commercial spyware is now a repeatable playbook across mobile platforms.
Why DNG images keep showing up in exploit chains
Answer first: DNG is attractive because it’s complex, it’s widely supported, and it gets processed automatically by libraries you don’t usually monitor.
In 2025, multiple disclosures converged around DNG parsing:
- Samsung: CVE-2025-21042 (patched April 2025)
- Samsung: CVE-2025-21043 (patched September 2025)
- Apple: CVE-2025-43300 (patched August 2025)
- WhatsApp chain component: CVE-2025-55177 (disclosed August 2025)
That’s not random. It’s a sign that attackers and exploit developers are investing in file-format attack surfaces that sit in the “content handling” layer—where user interaction is minimal and telemetry is often weak.
How the LANDFALL infection chain works (in plain terms)
Answer first: The malicious image isn’t just an image. It’s a container that triggers code execution during parsing, then drops and runs spyware modules.
Unit 42’s analysis describes a flow that looks like this:
- A malformed DNG arrives (likely via a messaging app media workflow).
- When Samsung’s image library parses it, the exploit triggers code execution (CVE-2025-21042).
- The DNG contains an appended ZIP archive with embedded .so payloads.
- The first-stage loader (b.so, internally labeled “Bridge Head”) runs and prepares staging under an app-private directory: /data/data/com.samsung.ipservice/files/
- A second component (l.so) is used to modify SELinux policy in memory, enabling actions that should be blocked on a properly confined device.
- The loader beacons to command-and-control over HTTPS using non-standard ephemeral ports, with TLS pinning to resist interception.
If you’re building a mobile security program, here’s the uncomfortable truth: this is closer to endpoint compromise than “mobile malware.” The chain includes defense evasion checks (debugger/Frida/Xposed detection), stealth cleanup behaviors, and a staged architecture designed to grow capabilities over time.
What LANDFALL was built to collect
Answer first: LANDFALL’s strings and command paths indicate full-spectrum surveillance—location, microphone, calls, messages, photos, contacts, and arbitrary file access.
From Unit 42’s recovered artifacts, LANDFALL appears designed for:
- Device fingerprinting: IMEI/IMSI, installed apps, VPN status, USB debugging status
- Data theft: contacts, call logs, SMS/messaging data, camera photos, files and databases
- Active surveillance: microphone recording and call recording
- Persistence/execution: module loading, DEX loading, process injection, LD_PRELOAD
- Defense evasion: anti-instrumentation, TLS pinning, artifact cleanup
Unit 42 also observed targeting strings referencing modern models (including Galaxy S22/S23/S24 lines and Fold/Flip variants). This wasn’t built for “old Android.” It was tuned for current, high-end devices.
Where AI-based threat detection fits (and where it doesn’t)
Answer first: AI won’t magically “predict” a zero-day, but it can reliably detect the behaviors and anomalies that zero-days need to succeed.
A patch stops yesterday’s exploit. AI is how you reduce exposure to the next one—especially when the delivery method looks like normal user activity.
Here are practical places AI and machine learning consistently help against campaigns like LANDFALL:
1) Anomaly detection for file-based delivery (images that aren’t really images)
Answer first: Malformed files and polyglot containers produce statistical and structural signals that detectors can learn.
LANDFALL’s DNG samples appended ZIP content to an image. That’s not common in legitimate media. AI-powered static analysis can flag:
- Unexpected file trailers (e.g., ZIP signatures after valid image end markers)
- Format inconsistencies (DNG/TIFF tag structure anomalies)
- Compression and entropy patterns that don’t match typical camera output
- Mismatched MIME vs internal structure (a “JPEG” behaving like a TIFF/DNG)
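The first of those checks can be expressed in a few lines of byte-level logic. This is an illustrative sketch, not a production detector: the function name and heuristics are mine, and a real scanner would fully parse the TIFF/DNG IFD chain to find the true end of the image data rather than pattern-matching for signatures.

```python
ZIP_SIG = b"PK\x03\x04"  # ZIP local-file header signature

def suspicious_trailer(data: bytes) -> bool:
    """Flag media files that carry a ZIP archive after the image's
    nominal end. Heuristic sketch: for JPEG, everything after the last
    EOI marker (FF D9) is trailer; for TIFF/DNG, legitimate camera
    output should not contain a ZIP signature at all."""
    if data[:2] == b"\xff\xd8":                   # JPEG SOI marker
        eoi = data.rfind(b"\xff\xd9")             # last EOI marker
        return eoi != -1 and ZIP_SIG in data[eoi + 2:]
    if data[:4] in (b"II*\x00", b"MM\x00*"):      # TIFF/DNG magic
        return ZIP_SIG in data[4:]
    return False
```

In practice a check like this would run in the content-inspection layer of a gateway or sandbox, feeding its verdict into the broader ML scoring pipeline rather than blocking on its own.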
This matters in enterprise mobile environments because attachments frequently traverse:
- email gateways
- secure web gateways
- MDM/MAM content channels
- cloud storage sync
If your pipeline only does extension-based filtering, you’re trusting the attacker’s naming conventions.
2) Behavioral analytics on-device and in network telemetry
Answer first: Even stealthy spyware must touch the filesystem, spawn processes, and talk to infrastructure.
Unit 42 described working directories, staging filenames (aa.so, dec_a.so), LD_PRELOAD execution, and specific beacon formats (including a desktop Chrome user agent). AI can correlate weak signals like:
- unusual child processes from media handlers
- suspicious use of LD_PRELOAD on Android
- creation of executable .so files under app-private paths that don’t match the app’s normal behavior
- repeated short-lived TLS sessions to rare domains or ephemeral ports
- beacon patterns that repeat across devices (timers, retry intervals, “suicide time” behavior)
A single signal might be noise. A cluster of them is a detection. That’s exactly where machine learning-based detection tends to outperform rule-only approaches.
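The clustering idea can be sketched as a weighted score. The signal names and weights below are hypothetical placeholders, not values from any product; a deployed system would learn the weights from labeled incident data rather than hand-tuning them.

```python
# Hypothetical weak signals and weights -- illustrative only.
SIGNAL_WEIGHTS = {
    "media_handler_spawned_child": 0.3,
    "ld_preload_set": 0.4,
    "exec_so_in_app_private_dir": 0.5,
    "tls_to_rare_domain_ephemeral_port": 0.3,
    "repeating_beacon_interval": 0.4,
}
ALERT_THRESHOLD = 1.0  # tune per environment and false-positive budget

def score(signals: set[str]) -> float:
    """Sum the weights of the weak signals observed on one device."""
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals)

def should_alert(signals: set[str]) -> bool:
    """A cluster of weak signals crosses the line; a single one does not."""
    return score(signals) >= ALERT_THRESHOLD
```

One LD_PRELOAD observation stays below threshold; LD_PRELOAD plus an executable .so drop plus a repeating beacon crosses it. That is the difference between noise and a detection.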
3) Threat intelligence + CVE context that prioritizes what to hunt
Answer first: AI helps you operationalize vulnerability intelligence by turning “CVE news” into concrete hunting and policy changes.
When a zero-day like CVE-2025-21042 hits, security teams face two problems:
- patch coverage is uneven across fleets
- the exploit artifact is often unavailable publicly
AI-driven security platforms can map:
- device model + OS build strings (from inventory)
- patch level compliance
- exposure windows (who was unpatched during known exploitation)
Then you can focus hunting on the realistic blast radius instead of scanning everything, forever.
A mobile defense plan enterprises can actually run
Answer first: Treat mobile as endpoint security, not “BYOD hygiene,” and instrument your defenses around exploit chains.
If you manage Samsung devices in an enterprise fleet, a workable playbook looks like this:
1. Enforce patch SLAs for mobile OS and vendor components
   - Make “security update installed” a conditional access requirement.
   - Don’t negotiate on patch lag for execs and high-risk roles.
2. Inspect attachments beyond file extensions
   - Perform deep file inspection in email/chat ingestion points where possible.
   - Flag archive signatures embedded inside media formats.
3. Add mobile-specific detection engineering
   - Alert on suspicious native library drops in app-private directories.
   - Monitor for LD_PRELOAD-style execution patterns (where visibility exists).
4. Harden messaging and media handling policies
   - Reduce auto-download and auto-preview behaviors in managed configurations.
   - Segment work messaging from personal chat apps when feasible.
5. Use AI to correlate low-signal indicators
   - One odd domain isn’t enough.
   - One weird file hash isn’t enough.
   - Five weak signals across file + process + network + device posture are enough.
“Are we still at risk if CVE-2025-21042 is patched?”
Answer first: You’re not at risk from that specific exploit on fully updated devices, but you’re still at risk from the same delivery pattern.
Unit 42 notes Samsung patched CVE-2025-21042 in April 2025 and later addressed a similar issue (CVE-2025-21043) in September 2025. That’s good news. The bad news is that exploit developers clearly like this class of bug.
If you want a north-star metric for 2026 planning, use this: time-to-mitigate zero-click content parsing risk (via patching, AI detection, and policy controls). Track it like you track phishing.
What to do Monday morning
Answer first: Start by reducing your exposure window, then build detection around file parsing abuse.
Three concrete steps that don’t require a multi-quarter program:
- Audit Samsung patch levels across your fleet and identify devices that were unpatched prior to April 2025 (for historical risk review).
- Update content security controls to flag “media files with embedded archives” and send them to detonation/sandboxing where available.
- Write a hunt hypothesis: “Messaging-delivered image triggers library parsing, drops native .so, beacons over TLS to rare infrastructure.” Then test it against your telemetry.
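That hypothesis can be encoded as an ordered-sequence check over per-device telemetry. The event type names below are illustrative placeholders, not fields from any specific EDR or MDM product; the point is the shape of the test, not the schema.

```python
# Hypothesized chain, in order. Event names are placeholders.
HYPOTHESIS = [
    "media_library_parsed_image",   # messaging-delivered image parsed
    "native_so_written",            # .so dropped under an app-private path
    "tls_to_rare_destination",      # beacon to rare infrastructure
]

def hypothesis_matches(events: list[dict]) -> bool:
    """True if the hypothesized chain appears, in order, for one device.
    Intervening unrelated events are allowed; out-of-order ones are not."""
    step = 0
    for event in events:
        if event.get("type") == HYPOTHESIS[step]:
            step += 1
            if step == len(HYPOTHESIS):
                return True
    return False
```

Running a matcher like this over historical telemetry is how you turn “CVE news” into a falsifiable hunt instead of a vague worry.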
Zero-click mobile spyware isn’t hypothetical. LANDFALL ran for months before it was widely understood. AI in cybersecurity is how you stop waiting for perfect indicators and start catching the patterns that keep repeating.
If a single image can behave like an installer, what other “safe” file types in your environment are you still treating as harmless?