Cellik Android RAT: When “Trusted Apps” Aren’t Safe

AI in Cybersecurity • By 3L3C

Cellik shows how Android RATs abuse trusted apps. Learn how AI-driven mobile threat detection spots anomalies and stops account takeovers faster.

android-security · mobile-malware · threat-intelligence · ai-security · incident-response · edr


Most security teams still treat mobile as “MDM plus a policy doc.” That works right up until a remote access Trojan (RAT) like Cellik shows up and proves the weak point isn’t the phone—it’s the assumption that trusted distribution equals trusted software.

Cellik is being sold as RAT-as-a-service, and the standout detail is how it abuses trust: it can help attackers pull legitimate apps, wrap them with a malicious payload, and repackage them for distribution. The pitch to buyers is simple: hide inside what users already recognize and install.

This post is part of our AI in Cybersecurity series, and I’ll take a clear stance: mobile malware has outgrown purely manual review and signature-based controls. If you want to catch threats that behave “mostly normal” until they don’t, you need AI-driven detection that focuses on behavior, anomalies, and relationships across devices, apps, and identities.

What makes Cellik different (and why defenders should care)

Cellik’s core capabilities aren’t exotic; the risk comes from how complete and commercialized the package is. This matters because commercialization lowers the bar—more attackers can run higher-quality mobile spyware campaigns without deep technical skill.

Once installed on an Android device, Cellik can provide an operator with near-total control, including:

  • Screen streaming and remote control (operator can “drive” the phone)
  • Keylogging and notification access, including OTPs and alert history
  • Browser data theft, such as cookies and autofill credentials
  • File system access and encrypted exfiltration, including cloud-linked directories
  • Hidden browsing actions that can click links and submit forms without the user noticing

That list should immediately trigger a mental model shift: this isn’t “just malware”; it’s account takeover infrastructure that happens to run on a phone.

The Play Store angle: trust is the attack surface

Cellik reportedly includes tooling to:

  1. Browse and download legitimate applications
  2. Wrap them with a Cellik payload
  3. Rebuild a poisoned APK for distribution

Even if the poisoned version isn’t delivered directly through Google Play, the brand familiarity and user trust originate there. Attackers can distribute these trojanized apps through sideloading channels (messages, forums, fake “update” prompts, third-party stores), while benefiting from the victim’s assumption that “this app is real.”

The big takeaway: the store isn’t the only problem. The reputation halo of the store is.
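
One practical counter to repackaging is signer pinning: a trojanized rebuild has to be re-signed, so its signing certificate won’t match the genuine release. Here’s a minimal sketch that shells out to the Android SDK’s apksigner tool, assuming you maintain known-good certificate digests for the apps you allow; the package name and digest below are placeholders, and the output parsing is a rough assumption rather than a guaranteed format.

```python
import re
import subprocess

# Known-good SHA-256 signing-cert digests per package. These values are
# placeholders -- in practice you'd pin digests from the releases you deploy.
KNOWN_GOOD_SIGNERS = {
    "com.example.banking": {"a1b2c3d4..."},  # hypothetical app + digest
}

def signer_digests(apk_path: str) -> set[str]:
    """Ask apksigner (Android SDK build-tools) for the APK's signing-cert digests."""
    out = subprocess.run(
        ["apksigner", "verify", "--print-certs", apk_path],
        capture_output=True, text=True, check=True,
    ).stdout
    # Parsing assumption: lines like "... certificate SHA-256 digest: <hex>"
    return set(re.findall(r"SHA-256 digest:\s*([0-9a-f]+)", out))

def looks_repackaged(package: str, apk_path: str) -> bool:
    """True if the APK claims to be a pinned app but is signed by someone else."""
    expected = KNOWN_GOOD_SIGNERS.get(package)
    if not expected:
        return False  # not a pinned app; handle via risk scoring instead
    return signer_digests(apk_path).isdisjoint(expected)
```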

How Cellik infections happen in the real world

Cellik doesn’t need a fancy exploit chain to succeed. It succeeds because humans install things.

Here are common patterns I see across mobile incidents; they map cleanly to a threat like Cellik:

Social engineering beats “secure by default”

Android has improved dramatically, but a user who’s convinced to install a repackaged APK can still be tricked into granting dangerous permissions or enabling accessibility services.

Typical lures include:

  • “Your package delivery failed—install this to reschedule.” (seasonal spike risk in December)
  • “Your bank needs a security update.”
  • “Install this corporate app to access email / payroll.”
  • “You need this version to watch the video / join the call.”

December is particularly messy: more travel, more temporary staff, more personal shopping on work phones, and more login prompts flying around. Attackers plan around that reality.

The overlay trick: stealing credentials without breaking anything

A key feature described in the research is app injection / overlays—placing a fake login screen on top of a real app. The user thinks they’re signing into a known service. The attacker collects credentials and often an OTP too.

This defeats a surprising number of “good” controls because:

  • The real app is still present
  • The login attempt looks legitimate to the user
  • Many orgs still rely on SMS OTP or push approvals without strong phishing resistance
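
If you already export an app inventory from MDM or mobile EDR, you can flag the permission combinations that make overlay phishing possible before anyone gets phished. A minimal sketch, assuming a simple inventory format (the record fields and the example app are hypothetical):

```python
# Each record is assumed to come from an MDM/EDR app inventory export.
inventory = [
    {
        "package": "com.example.couponscanner",   # hypothetical app
        "category": "shopping",
        "permissions": {"android.permission.SYSTEM_ALERT_WINDOW",
                        "android.permission.BIND_ACCESSIBILITY_SERVICE"},
        "notification_listener": True,
    },
]

OVERLAY = "android.permission.SYSTEM_ALERT_WINDOW"
ACCESSIBILITY = "android.permission.BIND_ACCESSIBILITY_SERVICE"

def overlay_risk(app: dict) -> list[str]:
    """Reasons this app could run the fake-login-screen pattern described above."""
    reasons = []
    perms = app["permissions"]
    if OVERLAY in perms and ACCESSIBILITY in perms:
        reasons.append("can draw over other apps AND drive the UI")
    if OVERLAY in perms and app.get("notification_listener"):
        reasons.append("can draw over other apps AND read notifications (OTPs)")
    return reasons

for app in inventory:
    for reason in overlay_risk(app):
        print(f"[review] {app['package']}: {reason}")
```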

Where AI-driven mobile threat detection actually helps

“Use AI” isn’t a plan. But AI-powered threat detection becomes practical when you define the behaviors that should never happen in a normal app and let models flag deviations at scale.

Here’s what AI can do better than manual review for a threat like Cellik.

1) Detect suspicious app behavior, not just malicious code

Static scanning and signatures struggle when payloads are repackaged or lightly obfuscated. Behavior doesn’t hide as easily.

AI-driven anomaly detection can spot patterns such as:

  • Accessibility services enabled shortly after install, followed by unusual UI interaction patterns
  • Apps that capture screens or interact with other apps in ways that don’t match their category
  • Unexpected background services, wake locks, and persistence behaviors
  • Abnormal network beacons (timing, destination clustering, protocol misuse)

A practical rule of thumb: if an app behaves like a remote desktop tool but claims to be a “coupon scanner,” it’s lying. AI models can score that mismatch.
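
Here’s what that scoring can look like in its simplest form: train an anomaly detector on behavioral features from apps in the same category, then score new observations against that baseline. A minimal sketch using scikit-learn’s IsolationForest; the features, numbers, and category baseline are illustrative, not Cellik signatures.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-app behavioral features collected over a day (illustrative):
# [screen_capture_events, accessibility_actions, background_minutes, beacon_count]
category_baseline = np.array([
    [0, 0, 12, 4],   # typical behavior for apps in this category
    [0, 1, 15, 6],
    [1, 0, 10, 3],
    [0, 0, 20, 5],
])

model = IsolationForest(contamination="auto", random_state=0).fit(category_baseline)

# New observation: heavy screen capture and UI driving from a "coupon scanner".
suspect = np.array([[45, 120, 300, 80]])
score = model.decision_function(suspect)[0]   # lower = more anomalous
print("anomaly score:", round(score, 3), "-> flag" if score < 0 else "-> ok")
```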

2) Correlate weak signals across devices and identities

Mobile attacks rarely stay “mobile.” The phone is a bridge into email, SSO sessions, password managers, and MFA.

AI helps when it correlates:

  • A new app install + immediate MFA prompts
  • A device receiving many OTP notifications + suspicious login attempts from new geographies
  • Browser cookie access patterns + unusual session refresh activity

That correlation is how you catch campaigns early—before the attacker uses the phone to pivot into corporate systems.
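
A minimal sketch of that correlation logic, assuming you can normalize MDM, identity-provider, and email events into a common shape (the field names, event types, and the 30-minute window are assumptions):

```python
from datetime import datetime, timedelta

# Events normalized from MDM, IdP, and mail logs (field names are assumptions).
events = [
    {"user": "jdoe", "type": "app_install",   "ts": datetime(2025, 12, 3, 9, 2)},
    {"user": "jdoe", "type": "mfa_prompt",    "ts": datetime(2025, 12, 3, 9, 6)},
    {"user": "jdoe", "type": "mfa_prompt",    "ts": datetime(2025, 12, 3, 9, 7)},
    {"user": "jdoe", "type": "login_new_geo", "ts": datetime(2025, 12, 3, 9, 15)},
]

WINDOW = timedelta(minutes=30)

def correlated_takeover_signal(events, user) -> bool:
    """Flag: new app install followed by an MFA burst and a login from a new geo."""
    mine = sorted((e for e in events if e["user"] == user), key=lambda e: e["ts"])
    for install in (e for e in mine if e["type"] == "app_install"):
        window = [e for e in mine
                  if install["ts"] <= e["ts"] <= install["ts"] + WINDOW]
        mfa_count = sum(1 for e in window if e["type"] == "mfa_prompt")
        new_geo = any(e["type"] == "login_new_geo" for e in window)
        if mfa_count >= 2 and new_geo:
            return True
    return False

print(correlated_takeover_signal(events, "jdoe"))  # True -> escalate
```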

3) Speed up triage with automated security operations

Cellik is sold as a service because it’s efficient for attackers. Defenders need the same efficiency.

With the right automation, you can:

  1. Quarantine the device (conditional access, VPN cut, email token revoke)
  2. Force re-authentication for sensitive apps
  3. Invalidate sessions and rotate at-risk credentials
  4. Pull mobile telemetry (app list, permission changes, risky settings)
  5. Open a case with a prebuilt playbook (SOAR + EDR/MDR workflow)

AI doesn’t replace incident response. It removes the delay between “something’s weird” and “we’ve contained it.”
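
For concreteness, here’s the shape of that containment playbook in code. Every connector call is a hypothetical placeholder; swap in whatever your MDM, identity provider, and SOAR platform actually expose.

```python
# Containment playbook sketch. All connector functions below are hypothetical
# stand-ins for real MDM / IdP / SOAR APIs.

def contain_suspected_mobile_rat(device_id: str, user_id: str, case_notes: str):
    quarantine_device(device_id)            # drop from conditional access, cut VPN
    revoke_sessions(user_id)                # invalidate refresh tokens / SSO sessions
    force_reauth(user_id, scope="sensitive_apps")
    rotate_credentials(user_id, at_risk_only=True)
    telemetry = pull_mobile_telemetry(device_id)   # app list, permission changes
    open_case(playbook="mobile_rat", device=device_id, user=user_id,
              evidence=telemetry, notes=case_notes)

# --- placeholders so the sketch runs standalone ---
def quarantine_device(d): print(f"quarantine {d}")
def revoke_sessions(u): print(f"revoke sessions for {u}")
def force_reauth(u, scope): print(f"force re-auth for {u} ({scope})")
def rotate_credentials(u, at_risk_only): print(f"rotate at-risk creds for {u}")
def pull_mobile_telemetry(d): return {"device": d, "apps": [], "permission_changes": []}
def open_case(**kwargs): print("case opened:", kwargs)

contain_suspected_mobile_rat("dev-123", "jdoe", "Cellik-like behavior flagged by EDR")
```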

Defensive playbook: what to do this quarter (not next year)

If you’re defending an enterprise fleet (or even a BYOD environment), here’s what works against trojanized “trusted app” threats like Cellik.

Lock down how apps get installed

Answer first: reduce sideloading and restrict unknown app sources.

Concrete steps:

  • Enforce policies that block or heavily restrict installing from unknown sources
  • Use managed app catalogs for corporate-required apps
  • Monitor for “install from unknown sources” toggles changing state
  • Create a process for exceptions (with time-bound approval)

If your organization routinely asks users to install internal APKs outside official channels, you’re training them to accept the exact behavior attackers rely on.
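
For a quick spot check on a tethered device, you can audit which apps are currently allowed to install other apps. A rough sketch using adb; note that on Android 8+ the “unknown sources” capability is granted per app via REQUEST_INSTALL_PACKAGES, and appops output varies by OS version, so the parsing here is an assumption.

```python
import subprocess

def adb(*args: str) -> str:
    """Run an adb shell command against the connected device and return stdout."""
    return subprocess.run(["adb", "shell", *args],
                          capture_output=True, text=True).stdout.strip()

def packages() -> list[str]:
    # "package:com.example.app" lines -> package names
    return [line.split(":", 1)[1]
            for line in adb("pm", "list", "packages").splitlines() if ":" in line]

def can_sideload(pkg: str) -> bool:
    # Per-app "install unknown apps" op on Android 8+.
    # Output format assumption: a line containing "allow" when the op is granted.
    out = adb("appops", "get", pkg, "REQUEST_INSTALL_PACKAGES")
    return "allow" in out

for pkg in packages():
    if can_sideload(pkg):
        print(f"[review] {pkg} is allowed to install other apps")
```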

Treat OTP interception as a breach indicator

Answer first: if a device can read notifications and log keystrokes, OTP isn’t a control—it’s a speed bump.

Upgrade authentication where it matters:

  • Prefer phishing-resistant MFA (hardware-backed keys or platform passkeys)
  • Reduce SMS OTP usage for privileged access
  • Add step-up authentication for high-risk actions (new payees, password resets, admin console access)
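
The decision shape for step-up authentication is straightforward even if the plumbing isn’t. A minimal policy sketch; the action names, risk signals, and thresholds are assumptions meant to illustrate the logic, not any product’s API.

```python
HIGH_RISK_ACTIONS = {"add_payee", "password_reset", "admin_console_access"}

def required_auth(action: str, device_risk: float, has_passkey: bool) -> str:
    """Decide the authentication bar for a request (illustrative policy only)."""
    if action in HIGH_RISK_ACTIONS or device_risk >= 0.7:
        # Step up to phishing-resistant MFA; never fall back to SMS OTP here.
        return "passkey_or_hardware_key" if has_passkey else "enroll_passkey_first"
    if device_risk >= 0.4:
        return "push_with_number_matching"
    return "standard_session"

print(required_auth("add_payee", device_risk=0.2, has_passkey=True))
```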

Implement mobile-focused EDR/MDR coverage

Answer first: endpoint detection and response shouldn’t stop at laptops.

When evaluating mobile security tooling, ask whether it can:

  • Detect overlay abuse and accessibility misuse
  • Track app permission escalation over time
  • Surface unusual screen capture / remote control indicators
  • Provide quick isolation actions integrated with identity controls

Use AI for continuous app risk scoring

Answer first: don’t rely on “approved once, trusted forever.”

A strong AI-assisted approach scores risk continuously using:

  • App metadata and signer reputation
  • Behavioral telemetry (device + network)
  • Peer-group baselines (“apps like this usually do X, not Y”)
  • Threat intel matches (domains, certificates, infrastructure reuse)

This is especially valuable because attackers can change packaging faster than humans can re-review.
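
A hand-rolled version of continuous scoring looks something like the sketch below: blend the four signal classes into one number and re-evaluate whenever telemetry changes. The weights and field names are assumptions; in production these signals would feed a model rather than a hand-tuned sum.

```python
def app_risk_score(app: dict) -> float:
    """Blend metadata, behavior, peer baseline, and threat intel into one score (0-1)."""
    score = 0.0
    # 1) Metadata / signer reputation
    if not app["signer_known_good"]:
        score += 0.30
    # 2) Behavioral telemetry vs. the app's own history (0-1 drift)
    score += 0.30 * min(app["behavior_drift"], 1.0)
    # 3) Peer-group baseline ("apps like this usually do X, not Y")
    score += 0.25 * min(app["peer_deviation"], 1.0)
    # 4) Threat intel matches (domains, certificates, infrastructure reuse)
    if app["intel_hits"] > 0:
        score += 0.15
    return min(score, 1.0)

# Example: a repackaged app that suddenly started beaconing to new infrastructure.
suspect = {"signer_known_good": False, "behavior_drift": 0.8,
           "peer_deviation": 0.9, "intel_hits": 2}
print(round(app_risk_score(suspect), 2))  # high -> re-review, don't trust the old approval
```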

“People also ask” (the questions security teams raise in meetings)

Can malware really hide inside a legitimate Android app?

Yes. If an attacker can rebuild or repackage an app (often via an APK), they can add malicious code and ship a version that looks legitimate to users. The danger is highest when users sideload.

Does Google Play Protect stop threats like Cellik?

Play Protect helps, but it’s not a guarantee—especially if the malicious version is distributed outside the Play Store or if the attacker’s packaging avoids obvious static signatures. You still need enterprise controls and behavioral detection.

What’s the fastest sign a phone is compromised by a RAT?

The fastest indicators tend to be unexpected permission grants (especially accessibility), unusual background activity, and a sudden burst of MFA prompts / account lockouts tied to that user.

Why this matters for AI in Cybersecurity

Cellik is a clean example of the broader trend we’ve been tracking in this series: attackers are productizing intrusion. When malware becomes a subscription, defender response has to become equally systematized.

AI is useful here for one reason: it scales judgment. It can evaluate app behavior, correlate weak signals across systems, and trigger containment steps faster than a human analyst can piece together screenshots and logs.

If you’re relying on “only install from trusted places” as your primary mobile defense, you’re betting on user perfection. That bet loses.

Your next step is to map your mobile stack to three questions:

  1. Can we prevent sideloaded apps by default?
  2. Can we detect overlay/remote-control behaviors quickly?
  3. Can we contain a suspected mobile RAT in minutes, not days?

If the honest answers are “no” or “not sure,” you’ve got a clear 2026 roadmap item—before the next trojanized “trusted app” turns into an incident you can’t ignore.