
AI vs Android Spyware: Lessons from LANDFALL
A single image file shouldn’t be able to turn a modern smartphone into a pocket-sized surveillance device. Yet that’s exactly what happened in the LANDFALL campaign: commercial-grade Android spyware delivered through malformed DNG “photo” files that exploited a Samsung zero-day (CVE-2025-21042).
The part that should make security teams uncomfortable isn’t just the exploit—it’s the timeline. Artifacts tied to the campaign showed up as early as July 2024, while the vulnerability was patched in April 2025. For months, this operation blended into normal mobile behavior: people receive images, their devices process them automatically, and nobody expects that moment to be a remote code execution entry point.
This post is part of our AI in Cybersecurity series, and I’m going to be opinionated: mobile spyware is now a detection problem first, and a patching problem second. Patches matter, but the organizations that consistently avoid impact are the ones using AI-driven detection and response to spot abnormal behavior before a public advisory tells them what to look for.
What LANDFALL tells us about the new mobile attack surface
LANDFALL is a reminder that the most “boring” parts of your tech stack—like image parsing—are often the most valuable to attackers.
Unit 42 uncovered a previously unknown spyware family, LANDFALL, designed specifically for Samsung Galaxy devices and used in targeted intrusions, with indicators pointing to victims in parts of the Middle East (including Iraq, Iran, Turkey, and Morocco). The spyware delivered broad surveillance capabilities: microphone recording, location tracking, and theft of contacts, call logs, photos, messages, and files.
Here’s the operational pattern that matters for defenders:
- Delivery mechanism: malicious DNG image files (raw photo format) that looked like normal WhatsApp images
- Exploit: CVE-2025-21042, a Samsung image-processing library bug in libimagecodec.quram.so
- Likely delivery style: possibly zero-click (the act of the device parsing the image is enough)
- Persistence / privilege path: included a SELinux policy manipulation component, indicating an intent to push past Android’s built-in guardrails
The bigger theme: DNG parsing has become a repeatable pressure point across platforms. In 2025, similar disclosure patterns showed up in Apple’s ecosystem and in WhatsApp-related exploit chains. Attackers go where content is processed automatically—and images are processed constantly.
Why “just patch” is necessary but not sufficient
Samsung patched CVE-2025-21042 in April 2025, and later patched a related issue (CVE-2025-21043) in September 2025. That’s good news for current users.
But enterprise security doesn’t get to live in the “everyone is patched” fantasy. Reality includes:
- BYOD devices that lag on updates
- carrier-delayed firmware rollouts
- executives traveling with roaming constraints
- devices outside MDM enforcement
- “shadow” WhatsApp usage for business
AI-driven security helps during the months where patch coverage is incomplete—and when the exploit isn’t publicly known yet.
How the LANDFALL exploit chain worked (and why it’s hard to catch)
The technique is clever and depressing: the malware was appended as a ZIP archive at the end of the DNG file, then extracted post-exploitation. To a casual inspection, it’s “an image.” To the vulnerable parser, it’s a trigger.
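The layout described above—a valid-looking DNG with a ZIP archive appended after the image data—lends itself to a simple triage heuristic. A minimal sketch in Python; the magic bytes are the standard TIFF/DNG and ZIP signatures, but treat a hit as a reason to look closer, not a verdict (legitimate files can occasionally contain those bytes by coincidence):

```python
def has_appended_zip(path: str) -> bool:
    """Heuristic triage check: does a TIFF/DNG-headed file also contain
    a ZIP local-file signature later in its byte stream?"""
    with open(path, "rb") as f:
        data = f.read()
    # DNG is TIFF-based: files begin with "II*\x00" (little-endian)
    # or "MM\x00*" (big-endian).
    if not (data.startswith(b"II*\x00") or data.startswith(b"MM\x00*")):
        return False
    # "PK\x03\x04" marks a ZIP local file header; finding one anywhere
    # past the TIFF header is unusual for a camera-produced raw file.
    return data.find(b"PK\x03\x04", 4) != -1
```

A scanner like this is cheap enough to run on every media file crossing a managed gateway.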
Unit 42 identified two embedded components:
- b.so (“Bridge Head”): a native ARM64 loader/backdoor that initiates command-and-control (C2), fingerprints the device, and stages additional modules
- l.so: a SELinux policy manipulator, extracted from an XZ-compressed payload and used to elevate capability/persistence by weakening enforcement
This is the kind of chain that defeats simplistic mobile defenses because:
- The initial file arrives through a trusted user workflow (messages/media).
- The exploit happens inside a vendor library (image codec), not an “app install.”
- The loader is modular—meaning your static signature coverage is always behind.
- It uses operational security: TLS, certificate pinning, ephemeral ports, environment checks, and anti-instrumentation.
The uncomfortable detail: staged payloads reduce observable “noise”
LANDFALL’s loader is built to download more functionality only after it has a foothold. That means the first-stage artifact can look small and “incomplete” in isolation.
From a defense perspective, staged malware shifts the detection challenge toward:
- behavioral analytics (what did the process do?)
- sequence detection (what happened right after the image was parsed?)
- device-wide telemetry correlation (network + filesystem + process)
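The sequence-detection idea above can be sketched as a toy rule: flag an image-parse event followed shortly by creation of a native library in app-private storage. The event schema, paths, and the 30-second window are illustrative assumptions, not a real telemetry format:

```python
from dataclasses import dataclass

@dataclass
class Event:
    ts: float     # epoch seconds
    kind: str     # e.g. "image_parse", "file_create" (hypothetical labels)
    detail: str   # path, domain, etc.

def suspicious_sequences(events, window=30.0):
    """Flag image-parse events followed, within `window` seconds,
    by creation of a .so file (native library) on the device."""
    events = sorted(events, key=lambda e: e.ts)
    hits = []
    for i, e in enumerate(events):
        if e.kind != "image_parse":
            continue
        for later in events[i + 1:]:
            if later.ts - e.ts > window:
                break  # events are sorted; nothing later can qualify
            if later.kind == "file_create" and later.detail.endswith(".so"):
                hits.append((e, later))
    return hits
```

A production pipeline would learn these sequences statistically rather than hard-coding them, but the correlation primitive is the same.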
That’s exactly where AI-based anomaly detection tends to outperform manual rule-writing.
Where AI helps: detecting zero-day style mobile attacks by behavior
AI won’t magically “know” CVE-2025-21042 exists. What it can do is recognize that the device is acting unlike itself, and unlike the broader fleet, in ways that align with exploit staging.
A practical stance for defenders: treat zero-days as an anomaly problem. You’re looking for weird combinations, close together in time.
1) AI spotting exploit aftermath, not exploit code
For mobile spyware like LANDFALL, some of the most useful detection signals are side effects:
- An image-parsing event immediately followed by creation of executable .so files in app-private storage
- Unexpected use of LD_PRELOAD by processes that shouldn’t be preloading libraries
- Creation and permission changes of suspicious files (e.g., setting a staged .so to broad permissions)
- Attempts to access SELinux-related system interfaces or manipulate policy in memory
- Processes spawning shells (/system/bin/sh -c ...) to run system binaries in odd contexts
AI models can learn “normal” execution patterns and flag when a device suddenly begins doing things typical of post-exploit staging.
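As a stand-in for that idea, here is a sketch that scores newly observed behaviors by how rare they were in the device’s own baseline, so never-before-seen actions float to the top. Simple frequency statistics substitute for a trained model, and the event labels are hypothetical:

```python
from collections import Counter

def anomaly_scores(baseline_events, new_events):
    """Score each newly observed behavior by its rarity in the device's
    own baseline: a behavior never seen before scores 1.0.
    Frequency counts stand in for a learned behavioral model here."""
    counts = Counter(baseline_events)
    total = sum(counts.values())
    return {event: 1.0 - counts.get(event, 0) / total
            for event in set(new_events)}
```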
2) AI-driven network anomaly detection for stealthy C2
Unit 42 noted that LANDFALL communicated over HTTPS, used non-standard ephemeral TCP ports, could send ping telemetry, and used certificate pinning.
That combination is useful to defenders because even when payloads are encrypted, traffic shape still leaks intent. AI-assisted network analytics can flag:
- rare destinations or domains across the fleet
- TLS sessions with unusual port patterns
- repeated beacon loops with consistent timing (sleep/retry behavior)
- device-to-domain relationships that don’t fit user behavior (e.g., “healthy lifestyle” domains that no employee ever visits via a browser, yet mobile endpoints repeatedly contact)
This is where AI-powered threat intelligence and clustering help: they can connect a “weird but low confidence” event on one device to a pattern across ten devices.
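One concrete traffic-shape signal is beacon regularity: repeated connections to the same destination with near-constant spacing. A rough sketch of that check; the thresholds are illustrative starting points, not tuned values:

```python
import statistics

def looks_like_beacon(timestamps, min_events=6, max_jitter=0.15):
    """Heuristic beacon check: enough connections to one destination,
    with near-constant inter-arrival times (low relative jitter)."""
    if len(timestamps) < min_events:
        return False
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    mean = statistics.mean(gaps)
    if mean <= 0:
        return False
    # Coefficient of variation: stdev of the gaps relative to the mean.
    # Human-driven traffic is bursty; sleep/retry loops are metronomic.
    return statistics.stdev(gaps) / mean < max_jitter
```

Real implants add jitter deliberately, so fleet-level clustering of destinations (rare domain, many devices, similar cadence) matters more than any single device’s timing.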
3) AI for prioritizing patching when you can’t patch everything instantly
Most companies don’t have a patching problem. They have a prioritization problem.
When a mobile ecosystem has repeated parsing vulnerabilities (DNG is a good example), AI can help you:
- identify which device models and builds are present (and which match known target strings)
- correlate exposure with user risk (execs, journalists, legal, M&A, regional travel)
- automate “risk-based” enforcement in MDM (block risky workflows until patched)
This turns patching from a calendar exercise into an exposure reduction program.
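To make the prioritization idea concrete, here is a toy exposure score combining patch lag, user sensitivity, and management state. The field names and weights are assumptions for illustration, not a product schema:

```python
def exposure_score(device, weights=None):
    """Toy risk score for patch prioritization. Higher = patch sooner.
    Fields and weights are illustrative assumptions."""
    weights = weights or {"patch_lag_days": 0.5, "vip_user": 30, "unmanaged": 20}
    score = device.get("patch_lag_days", 0) * weights["patch_lag_days"]
    if device.get("vip_user"):          # execs, journalists, legal, M&A
        score += weights["vip_user"]
    if not device.get("mdm_enrolled", True):  # outside MDM enforcement
        score += weights["unmanaged"]
    return score
```

Sorting the fleet by a score like this (with learned rather than hand-picked weights) is what turns “patch everything” into “patch the riskiest 5% today.”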
What to do now: a practical defensive playbook for enterprises
If your organization supports Android devices (corporate-owned or BYOD), you need a plan built on one assumption: mobile threats will arrive as content, not apps.
Tighten controls around high-risk content paths
You don’t need to ban messaging apps to reduce risk. Start with targeted controls:
- MDM enforcement: minimum OS/security patch level, block devices that fall behind
- Restrict unknown file handlers: limit third-party “gallery” and media parsers in managed profiles
- Disable risky developer settings: USB debugging, unknown sources, and high-risk accessibility permissions
- Conditional access: require compliant device posture for corporate email and files
Instrument mobile telemetry that AI can actually use
AI only works if you collect the right signals. Focus on:
- device posture + build strings
- app install inventory changes
- network destinations (DNS/HTTP metadata where possible)
- abnormal process behaviors (where EDR/MTD supports it)
- file creation patterns in app-private directories
If you can’t get deep endpoint telemetry on mobile, compensate with network-based detection and identity-based controls.
Build an “exploit chain” response runbook
Commercial spyware campaigns move fast, but your response can be predictable. Write down what happens when you suspect mobile compromise:
- Quarantine device access to corporate resources (identity/conditional access)
- Preserve logs and mobile threat defense alerts
- Identify potential exposure window and message/media artifacts
- Rotate credentials and tokens used on the device
- For high-risk users, treat it like an endpoint breach: lateral movement starts with identity
A strong AI-in-cybersecurity program shines here: it helps you scope quickly (who else shows similar anomalies?) and reduces time spent guessing.
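The “who else shows similar anomalies?” question can be sketched as indicator-overlap scoping across the fleet. A real system would weight how rare each indicator is, but the shape is the same; the names and threshold here are illustrative:

```python
def similar_devices(fleet_indicators, compromised_id, min_overlap=2):
    """Scope an incident: find devices sharing at least `min_overlap`
    indicators (domains, file hashes, behaviors) with a known-bad one."""
    reference = fleet_indicators[compromised_id]
    return [device for device, indicators in fleet_indicators.items()
            if device != compromised_id
            and len(indicators & reference) >= min_overlap]
```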
The bigger lesson: commercial spyware is a business, so defend like it’s a business
LANDFALL appears consistent with commercial-grade tooling—modular design, staged components, stealth tradecraft, and infrastructure patterns that resemble other private-sector offensive actor ecosystems. That matters because it implies repeatability.
When attackers can sell an exploit chain, they invest in:
- reliability across specific device models
- operational monitoring (beacons, runner modes, telemetry)
- evasion against popular analysis tools
Your defense should mirror that professionalism. AI in cybersecurity isn’t about replacing analysts—it’s about giving them the speed and pattern recognition to keep up with industrialized intrusion.
If you’re responsible for protecting a fleet of Android devices, the question isn’t whether the next LANDFALL-style campaign exists. It’s whether your visibility is good enough to spot the behavioral fingerprints of a compromise before the advisory lands. What signal in your environment would you expect to see first: the weird image file, the odd process behavior, or the strange outbound beacon?