Privacy-Preserving Cybersecurity Apps: The Better Path

AI in Defense & National Security | By 3L3C

India’s app mandate rollback shows why mobile security can’t look like surveillance. Here’s how AI enables privacy-preserving cybersecurity without coercion.

AI governance · Mobile security · Public sector cybersecurity · Privacy by design · Telecom fraud · Defense and national security


India’s telecom regulator tried to do something that sounds, on paper, like common sense: get a cybersecurity app into everyone’s hands. The plan was simple and forceful—require smartphone makers to pre-install a government app, make it clearly visible, and prevent users from disabling it. Then the backlash hit fast, and the mandate was rolled back.

That reversal matters well beyond India. It’s a clean case study for a problem defense and national security leaders keep running into: when security tooling looks like surveillance infrastructure, trust collapses—and adoption goes with it.

This post sits in our AI in Defense & National Security series, where the recurring theme is the same: capability isn’t the hard part anymore; legitimacy is. If your security program can’t earn consent (or at least credible oversight), it becomes politically fragile. The interesting part is that AI can actually help here—not by collecting more data, but by reducing how much data you need and tightening how it’s used.

India’s rollback shows the real risk: security without trust

The key lesson is that coercive deployment turns a security control into a governance crisis. India’s Department of Telecommunications (DoT) ordered manufacturers to pre-install the Sanchar Saathi app on new phones, push it to existing devices, and ensure users couldn’t disable it. Within days, the order was retracted.

The app’s purpose is not frivolous. India’s mobile footprint is enormous, and much of the country’s “cybercrime problem” is really phone-enabled crime at scale: device theft, SIM swap fraud, spam, and digitally mediated financial scams.

Government-reported impact numbers were the kind policymakers love:

  • 14+ million downloads since launch in January 2025
  • 4.2 million lost/stolen devices deactivated
  • 2.6 million traced
  • 700,000+ recovered
  • 14 million mobile connections disconnected via “Not My Number”
  • 600,000 IMEIs blocked for fraud linkage
  • 4.75 billion rupees in prevented losses (about $53 million)

Even if you treat these as directional rather than independently verified, they point to something real: citizens want help defending themselves on mobile.

So why did the mandate fail? Because the mandate wasn’t about the feature set—it was about control. A government-issued, undeletable app on every phone reads as a surveillance primitive, especially in countries where past spyware allegations and unclear legal constraints have already damaged public confidence.

The “pre-install and can’t disable” pattern is a red flag

For security teams, this is familiar. If a vendor told you their endpoint agent “can’t be disabled,” you’d ask: by whom? under what authority? with what audit trail? The public asked the same questions.

Three design choices make any app feel surveillance-adjacent:

  1. Forced installation (no meaningful consent)
  2. Irremovability (no exit)
  3. Opaque permissions and data flows (no intelligible boundaries)

That combination almost guarantees a backlash—even when the underlying mission is defensible.

Sanchar Saathi’s promise is real—and that’s exactly why governance matters

If an app helps people block stolen devices and identify fraudulent SIM activity, it’s doing legitimate public-safety work. The uncomfortable truth is that the same primitives that stop fraud can also enable tracking.

Sanchar Saathi’s model centers on a national-scale mapping of devices and identifiers (like IMEI), plus citizen reporting. That’s useful for:

  • Disabling stolen devices to reduce resale value
  • Detecting SIM registrations that don’t match a user’s identity
  • Flagging suspicious numbers and fraud-linked devices

It’s also useful for:

  • Building population-scale device graphs
  • Associating devices with people over time
  • Enabling investigative queries that creep beyond the original scope

In national security contexts, this dual-use reality is unavoidable. The solution isn’t pretending the risk doesn’t exist. The solution is hard constraints plus verifiable accountability.

The policy problem: exemptions destroy credibility

Many countries now recognize privacy as a right in principle, but create exceptions for the state that are broad, quiet, or hard to contest. Once citizens believe the rules aren’t symmetric, any “security app” becomes suspect.

Here’s the stance I’ll take: mandates are a last resort, not a rollout plan. If a control can’t attract adoption voluntarily, you probably haven’t explained it well enough, audited it deeply enough, or constrained it tightly enough.

Where AI fits: you can reduce surveillance risk by reducing data hunger

AI helps most when it lets defenders detect threats without collecting or centralizing sensitive data. That sounds abstract, so let’s make it concrete.

A typical “anti-fraud” approach is to centralize a lot of raw telemetry: call metadata, device attributes, location signals, contact graphs, and app behaviors. That’s effective—and extremely tempting to repurpose.

A more privacy-preserving approach uses AI in ways that change the architecture:

1) On-device AI for fraud and scam detection

Best use case: detect scam patterns (smishing, vishing prompts, malicious links) locally.

  • The model runs on the device.
  • The device produces a risk score or classification, not raw content.
  • Only minimal signals leave the phone, ideally aggregated.

This matters because it turns the phone into the sensor and the firewall—without turning the state into the data warehouse.
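
To make that concrete, here is a minimal sketch of the on-device path, in Python. The keyword heuristics stand in for a real on-device model, and the names (score_message, RiskReport) are illustrative rather than from any specific SDK; the structural point is that raw content stays local and only a coarse label ever leaves the phone.

```python
# Minimal sketch of on-device scam scoring. The classifier here is a stand-in
# (simple keyword heuristics) for a real on-device model; score_message and
# RiskReport are hypothetical names, not any particular vendor's API.
from dataclasses import dataclass

SCAM_CUES = ("verify your account", "urgent payment", "kyc expired", "click this link")

@dataclass
class RiskReport:
    """The only thing that ever leaves the phone: a coarse label, no content."""
    risk_level: str   # "low" | "medium" | "high"
    category: str     # e.g. "smishing"

def score_message(text: str) -> float:
    """Runs entirely on the device; raw text never leaves the local process."""
    hits = sum(cue in text.lower() for cue in SCAM_CUES)
    return min(1.0, hits / 2)

def build_report(text: str) -> RiskReport | None:
    """Return a minimal signal for risky messages, nothing for benign ones."""
    score = score_message(text)
    if score < 0.5:
        return None
    level = "high" if score >= 0.9 else "medium"
    return RiskReport(risk_level=level, category="smishing")

if __name__ == "__main__":
    msg = "URGENT payment required: your KYC expired, click this link"
    print(build_report(msg))   # RiskReport(risk_level='high', category='smishing')
```

In a production system the heuristics would be a compact learned model, but the data boundary stays the same: content goes in, only a label comes out.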

2) Federated learning to improve models without collecting user data

Best use case: improve national-scale models for spam and fraud while keeping training data local.

Federated learning lets devices (or carriers) train local model updates and send parameter updates rather than raw messages or call logs. You still need careful protections (like secure aggregation and update validation), but the privacy posture is categorically better than “upload everything.”
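
A minimal federated-averaging sketch, assuming a toy logistic-regression spam classifier and omitting secure aggregation and update validation, shows the shape of the data flow: devices compute weight deltas on local data, and the server only ever averages those deltas.

```python
# Federated-averaging sketch. The linear model is a stand-in for a spam
# classifier; secure aggregation and update validation are omitted here but
# needed in practice. Raw messages stay on each device; only deltas move.
import numpy as np

def local_update(weights: np.ndarray, local_X: np.ndarray, local_y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """Runs on the device: gradient steps on local data, returns a weight delta."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-local_X @ w))            # logistic regression
        grad = local_X.T @ (preds - local_y) / len(local_y)
        w -= lr * grad
    return w - weights                                     # only the delta is shared

def federated_round(weights: np.ndarray, device_data: list) -> np.ndarray:
    """Runs on the server: averages deltas; never sees any device's raw data."""
    deltas = [local_update(weights, X, y) for X, y in device_data]
    return weights + np.mean(deltas, axis=0)

# Toy usage: three "devices", each with a handful of labeled message features.
rng = np.random.default_rng(0)
devices = [(rng.normal(size=(20, 4)), rng.integers(0, 2, 20).astype(float))
           for _ in range(3)]
w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, devices)
print("global model weights:", w)
```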

3) Privacy-enhancing techniques for analytics

If the state or a regulator needs population-level insight (for example, “which IMEIs are strongly correlated with fraud campaigns”), AI can operate on:

  • Differentially private aggregates (useful counts without individual reconstruction)
  • Tokenized identifiers with rotation and strict retention
  • Tiered access controls where investigative queries require justification and logging

The goal is blunt: make surveillance harder than security. Good systems do that by design.
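
For example, a differentially private count query, sketched here with a simple Laplace mechanism and an assumed sensitivity of one report per device, lets analysts see roughly how many devices a region flagged without being able to single anyone out:

```python
# Sketch of a differentially private count query over fraud flags, assuming a
# Laplace mechanism with sensitivity 1 (each device contributes at most one
# flag). The epsilon value and region names are illustrative, not prescriptive.
import numpy as np

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0,
             rng: np.random.Generator | None = None) -> float:
    """Return a noisy count: useful in aggregate, useless for singling anyone out."""
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return max(0.0, true_count + noise)

# Example: per-region counts of devices flagged by on-device scam detection.
flagged_by_region = {"north": 1_240, "south": 310, "east": 2_905, "west": 87}
epsilon = 0.5   # smaller epsilon means stronger privacy and noisier answers
for region, count in flagged_by_region.items():
    print(region, round(dp_count(count, epsilon)))
```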

4) Anomaly detection with bounded scope

Anomaly detection is powerful in telecom environments—spotting abnormal SIM registrations, bursty activation patterns, IMEI reuse anomalies, or unusual call routing behavior.

But anomaly detection should be scope-limited:

  • Detect suspicious events; don't build continuous dossiers.
  • Alert carriers or users first, not law enforcement by default.
  • Require a legal threshold and an audit trail for escalation.

This is exactly where AI governance becomes operational governance.
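
Here is a small sketch of what "bounded scope" can mean in code. The z-score burst detector and the escalate gate are illustrative assumptions, but the structure is the point: detection emits events, and escalation refuses to run without a recorded justification and legal basis.

```python
# Sketch of bounded-scope anomaly detection on SIM activations. The detector
# and escalate() gate are illustrative; the structural point is that detection
# emits events, while escalation requires a logged justification.
import statistics
import json
from datetime import datetime, timezone

AUDIT_LOG = []   # in practice: an append-only, externally reviewable store

def detect_burst(daily_activations: list[int], threshold: float = 3.0) -> bool:
    """Flag today's activation count if it is an outlier versus recent history."""
    history, today = daily_activations[:-1], daily_activations[-1]
    mean, stdev = statistics.mean(history), statistics.pstdev(history) or 1.0
    return (today - mean) / stdev > threshold

def escalate(event: dict, justification: str, legal_basis: str) -> None:
    """No justification and legal basis, no escalation; every call is logged."""
    if not justification or not legal_basis:
        raise PermissionError("escalation requires justification and a legal basis")
    AUDIT_LOG.append({"ts": datetime.now(timezone.utc).isoformat(),
                      "event": event, "justification": justification,
                      "legal_basis": legal_basis})

# Usage: a point-of-sale retailer's SIM activations spike roughly 20x overnight.
counts = [12, 9, 14, 11, 10, 13, 240]
if detect_burst(counts):
    event = {"retailer_id": "R-1043", "activations_today": counts[-1]}
    escalate(event, justification="matches known bulk-fraud activation pattern",
             legal_basis="court-authorized telecom fraud investigation")
print(json.dumps(AUDIT_LOG, indent=2))
```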

A better blueprint than mandates: “trustable security” requirements

If a government wants widespread adoption of a cybersecurity app, it should ship a trust framework before it ships the app. Here’s a practical checklist—useful for regulators, but also for enterprises building high-trust mobile security programs.

Transparency requirements (non-negotiable)

  • Permission-by-permission explanations written for normal people
  • A clear statement of what the app does not do (and how that’s enforced)
  • Public data-flow diagrams: what’s collected, where it goes, retention periods
  • Change logs for updates that affect data access or behavior

Independent assurance (the difference between “trust us” and trust)

  • Independent security audits with publishable summaries
  • Reproducible builds or equivalent integrity measures
  • A disclosed process for vulnerability reporting and response SLAs

User control that’s real, not performative

  • Uninstall or disable should be possible unless a court orders otherwise
  • If certain functions must remain (for example, stolen-device blocking), isolate them as minimal services with narrow permissions
  • Provide granular toggles (spam reporting on/off, fraud alerts on/off)

Legal guardrails that engineers can implement

  • Purpose limitation: fraud prevention ≠ general intelligence collection
  • Access controls: who can query what, under which conditions
  • Retention caps: data deletion as a default, not a promise
  • Auditability: immutable logs, periodic transparency reporting

If those controls feel heavy, good—they’re cheaper than rebuilding legitimacy after a scandal.
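
To show these guardrails are genuinely implementable, here is a sketch of purpose limitation and retention caps expressed as machine-checkable policy. The field names, purposes, and retention periods are assumptions for illustration; the pattern is what matters: queries outside declared purposes fail, and expiry is the default.

```python
# Sketch of guardrails engineers can implement: purpose limitation, retention
# caps, and query-time checks as machine-checkable policy. All field names,
# purposes, and retention periods below are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

POLICY = {
    "allowed_purposes": {"stolen_device_block", "fraud_detection"},
    "retention": {"imei_report": timedelta(days=90), "risk_score": timedelta(days=30)},
}

@dataclass
class Record:
    kind: str
    created_at: datetime

def check_query(purpose: str) -> None:
    """Purpose limitation: any query outside the declared purposes is rejected."""
    if purpose not in POLICY["allowed_purposes"]:
        raise PermissionError(f"purpose '{purpose}' is outside declared scope")

def is_expired(record: Record, now: datetime) -> bool:
    """Retention caps: deletion is the default once the cap passes."""
    return now - record.created_at > POLICY["retention"][record.kind]

now = datetime.now(timezone.utc)
check_query("fraud_detection")                          # allowed
rec = Record("risk_score", created_at=now - timedelta(days=45))
print("delete:", is_expired(rec, now))                  # True -> purge it
# check_query("marketing_analytics") would raise PermissionError
```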

What security leaders can do now (even outside government)

Enterprises and critical infrastructure operators face the same trust problem when deploying mobile threat defense, authentication apps, or monitoring agents—especially across BYOD fleets and contractors.

Here’s what works in practice:

  1. Adopt “minimum telemetry” as a design principle

    • If you can solve it with a local model and a risk score, don’t centralize raw content.
  2. Separate security operations from identity and HR data

    • Blended datasets create internal surveillance fears and increase breach impact.
  3. Publish an internal “mobile data bill of rights”

    • Spell out what you collect, why, and what you explicitly prohibit.
  4. Use AI for prioritization, not pervasive monitoring

    • Let AI rank incidents and likely fraud signals; keep human review and escalation criteria explicit (see the sketch after this list).
  5. Make audits routine

    • Quarterly reviews of permissions, retention, and access logs are boring—and that’s the point.
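
As a sketch of points 1 and 4 above, here is what "prioritization, not pervasive monitoring" can look like: the model only ranks already-collected risk scores, and nothing escalates without an explicit, named human decision. The incident fields, scores, and thresholds are illustrative assumptions.

```python
# Sketch of AI-as-prioritizer: rank incidents by a model-style score, but
# require an explicit human decision to escalate. Incidents, scores, and the
# reviewer identifier here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Incident:
    incident_id: str
    risk_score: float            # produced locally or by a scoped detector
    reviewed_by: str | None = None

def prioritize(incidents: list[Incident], top_n: int = 5) -> list[Incident]:
    """The AI's job ends here: ordering the queue, not deciding outcomes."""
    return sorted(incidents, key=lambda i: i.risk_score, reverse=True)[:top_n]

def escalate(incident: Incident, reviewer: str) -> Incident:
    """Escalation is a human act, recorded by name."""
    incident.reviewed_by = reviewer
    return incident

queue = [Incident("INC-001", 0.91), Incident("INC-002", 0.12), Incident("INC-003", 0.67)]
for inc in prioritize(queue, top_n=2):
    print(f"review next: {inc.incident_id} (score {inc.risk_score})")
escalate(queue[0], reviewer="soc-analyst-on-duty")
```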

The bigger national security takeaway: resilience beats compulsion

India’s rollback doesn’t prove that public-sector cybersecurity apps are bad. It proves something more useful: when security programs are built like surveillance systems, they trigger resistance that weakens security outcomes.

AI gives governments and enterprises a chance to choose a different path—one where detection improves while data collection shrinks, and where oversight is built into the technical architecture rather than stapled on later.

If you’re building AI in defense and national security, this is the bar you should aim for: systems that can defend a population without treating the population like suspects. What would your next security control look like if public trust were a hard requirement, not a communications problem?
