India’s rollback shows why mobile security can’t look like surveillance. See how privacy-preserving AI helps stop fraud while staying compliant.

India’s App Mandate Rollback: AI Security Without Spying
India’s Department of Telecommunications tried something most security teams would never accept inside an enterprise: a mandatory cybersecurity app, preinstalled on every new smartphone and impossible to delete. The backlash was immediate, and within days the government rolled the mandate back.
That reversal matters far beyond India. It’s a live case study in the central tension of the AI in Defense & National Security conversation: you can’t build public safety—or enterprise security—on a foundation people don’t trust. And trust collapses fast when “security” looks indistinguishable from surveillance.
The useful part is that the underlying problem is real. Mobile devices are where fraud happens, where identities get hijacked, and where sensitive work data walks out the door. The lesson isn’t “don’t secure phones.” The lesson is don’t force intrusive controls when privacy-preserving security patterns exist—many of them AI-driven.
What India’s rollback really signals
The signal is simple: security mandates that remove user agency are politically—and operationally—fragile. Even if the tool provides genuine anti-fraud value, a mandate that can’t explain its data boundaries will eventually hit resistance from citizens, manufacturers, courts, regulators, and international partners.
India’s Sanchar Saathi program was designed to combat mobile theft, spam, SIM abuse, and fraud by tying enforcement actions to device identity (the IMEI) and user reporting. According to government claims reported at the time, the app had been downloaded by more than 14 million users and was associated with actions like:
- 4.2 million lost/stolen devices deactivated
- 2.6 million devices traced
- 700,000 devices recovered
- 14 million mobile connections flagged as “Not My Number” and disconnected
- 600,000 IMEIs linked to fraud blocked
- 4.75 billion rupees in estimated prevented losses (about $53M)
Even if you discount those numbers, the direction is believable: when mobile is the primary computing platform, mobile crime becomes national-scale crime.
Why the mandate failed (even if the app helped)
People don’t reject security—they reject asymmetric power. An app that’s preinstalled and can be neither deleted nor disabled is a textbook example because:
- It changes the threat model: the state becomes a privileged software publisher on your most personal device.
- It expands the blast radius: if the app is compromised (or abused), the impact is nationwide.
- It removes consent and recourse: you can’t opt out, even if you’re a journalist, activist, executive, or government employee with elevated risk.
In national-security contexts, this isn’t an abstract concern. Prior high-profile spyware allegations and forensic reporting in multiple regions have trained the public to ask one question first: “What else could this be used for?” If the answer is unclear, adoption turns into resistance.
The real security problem: mobile fraud at population scale
India’s mobile-first reality makes it a preview of what other countries are moving toward. When phones are the primary portal to banking, government services, and business communications, attackers don’t need sophisticated zero-days. They need scale, persuasion, and identity manipulation.
Recent years have shown how quickly AI accelerates that attacker playbook:
- Deepfake voice and video make “verified” calls feel real.
- LLM-written smishing improves message quality and targeting.
- Automated social engineering increases volume without sacrificing personalization.
The defense side has to respond at the same scale. But here’s the catch: national-scale defense can’t require national-scale monitoring. If your solution depends on collecting more personal data than the threat requires, you’re solving one risk by creating another.
A better framing for Sanchar Saathi-style programs
The winning framing is:
“Minimize data, maximize impact.”
Blocking stolen devices, validating IMEIs, and letting users report suspicious SIMs can be done with tight scoping. The controversy begins when controls drift into:
- continuous monitoring
- broad device permissions
- opaque data retention
- unclear sharing with other agencies
From a defense and national security perspective, that drift is costly because it undermines legitimacy—the one resource you can’t surge when things go wrong.
How privacy-preserving AI changes the tradeoff
AI is most useful here when it reduces the need to centralize sensitive data. That’s the opposite of many early “big data security” approaches, which tended to hoover up everything and hope governance would catch up.
Below are three patterns I’ve seen work in practice when organizations want strong detection without turning phones into tracking beacons.
1) On-device AI for behavioral risk scoring
Do more detection on the device, send less data off the device. Modern mobile security can score risk locally by looking for signals like:
- suspicious accessibility-service abuse
- overlay attacks
- risky clipboard behaviors
- unexpected network destinations
- anomalous app behavior patterns
The output can be a privacy-preserving risk score or a minimal alert, not a full activity log.
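A minimal sketch of the idea, assuming hypothetical signal names and illustrative weights (a real product would tune both against labeled fraud data):

```python
from dataclasses import dataclass

# Hypothetical signals; a real agent would populate these from platform
# APIs (accessibility events, window overlays, clipboard, network telemetry).
@dataclass
class DeviceSignals:
    accessibility_abuse: float    # 0..1, from a local detector
    overlay_detected: bool
    clipboard_harvesting: float   # 0..1
    flagged_destinations: int     # count of suspicious network endpoints

def risk_verdict(s: DeviceSignals) -> dict:
    """Score risk on-device; emit only a coarse verdict, never raw events."""
    score = (
        0.4 * s.accessibility_abuse
        + (0.3 if s.overlay_detected else 0.0)
        + 0.2 * s.clipboard_harvesting
        + 0.1 * min(s.flagged_destinations, 5) / 5
    )
    level = "high" if score >= 0.6 else "medium" if score >= 0.3 else "low"
    return {"risk_level": level}  # the only payload that leaves the device
```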
Why it matters: if your program can say, “We don’t ingest your content—only a risk verdict,” you’ve changed the public conversation.
2) Federated learning for population-level improvement
Federated learning lets models improve across many devices without collecting raw user data centrally. Devices train locally; only model updates (often further protected via secure aggregation) are shared.
For national-scale anti-fraud, this is an attractive middle ground:
- you get better detection across diverse devices and languages
- you reduce incentives to build a massive centralized surveillance dataset
- you can align with privacy regulations more cleanly
Federated learning isn’t magic—implementation details matter—but it’s a real technical alternative to “centralize everything.”
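Here’s a toy FedAvg-style round, assuming a simple linear model and simulated devices. The server sees weight vectors, never the underlying data:

```python
import numpy as np

def local_update(weights, local_data, lr=0.01):
    """One on-device gradient step; raw (X, y) data never leaves this scope."""
    X, y = local_data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, devices):
    """Server-side step: average the returned weights (FedAvg-style).
    Production systems typically add secure aggregation on top."""
    return np.mean([local_update(weights, d) for d in devices], axis=0)

# Simulate three devices, each holding private (features, labels) pairs.
rng = np.random.default_rng(0)
devices = [(rng.normal(size=(64, 4)), rng.normal(size=64)) for _ in range(3)]
weights = np.zeros(4)
for _ in range(20):
    weights = federated_round(weights, devices)
```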
3) Differential privacy and strict telemetry budgets
Telemetry should have a budget, not a blank check. Differential privacy techniques add calibrated noise to collected statistics, reducing the risk that any individual can be re-identified.
The operational discipline that makes this credible:
- publish a telemetry schema (what fields exist)
- cap collection to the minimum required to meet specific outcomes
- rotate identifiers
- define retention windows (and enforce deletion)
This is where AI governance becomes practical: it’s less about slogans and more about enforceable limits.
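A sketch of what an enforceable limit can look like, using the Laplace mechanism for simple counts (the epsilon values are illustrative, not a recommendation):

```python
import numpy as np

class TelemetryBudget:
    """Hard-cap privacy spend: every query costs epsilon, and collection
    fails closed once the budget is exhausted."""
    def __init__(self, total_epsilon: float):
        self.remaining = total_epsilon

    def dp_count(self, true_count: int, epsilon: float) -> float:
        if epsilon > self.remaining:
            raise RuntimeError("telemetry budget exhausted; query denied")
        self.remaining -= epsilon
        # Laplace mechanism; a simple count query has sensitivity 1.
        return true_count + np.random.laplace(scale=1.0 / epsilon)

budget = TelemetryBudget(total_epsilon=1.0)
budget.dp_count(true_count=42, epsilon=0.5)  # allowed
budget.dp_count(true_count=42, epsilon=0.5)  # allowed; budget now exhausted
# A third query would raise: the budget is a hard stop, not a guideline.
```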
What enterprises should learn (before the next mandate hits)
Government security policy can change faster than most corporate security programs can adapt. India’s rapid mandate-and-retraction is a reminder that your mobile and privacy posture can’t be brittle.
Here are concrete moves security leaders can make—especially in regulated industries or regions where device-level security requirements may shift quickly.
Build a “mandate-ready” mobile security architecture
Mandate-ready doesn’t mean “compliant with intrusive apps.” It means your controls can adapt without panic.
Prioritize:
- Zero Trust access for mobile: continuous authentication, device posture checks, least privilege.
- Strong device posture signals: jailbreak/root detection, OS version hygiene, risky app detection.
- Phishing-resistant authentication: passkeys / FIDO2 where possible.
- Privacy-by-design logging: prove you can secure without stockpiling personal data.
If a regulator asks for more assurance, you can offer better security evidence rather than more surveillance.
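As a sketch of what “adapt without panic” can mean in code, here’s a hypothetical posture-based access decision; the field names and thresholds are illustrative policy choices, not standards:

```python
from dataclasses import dataclass

@dataclass
class DevicePosture:
    os_patch_age_days: int
    rooted_or_jailbroken: bool
    passkey_enrolled: bool
    risky_app_detected: bool

def access_decision(p: DevicePosture, sensitivity: str) -> str:
    """Least-privilege decision from posture signals alone. Tightening a
    threshold here adapts faster than installing a new agent fleet-wide."""
    if p.rooted_or_jailbroken or p.risky_app_detected:
        return "deny"
    if sensitivity == "high" and (not p.passkey_enrolled
                                  or p.os_patch_age_days > 30):
        return "step_up_auth"
    return "allow"
```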
Use AI for triage, not for mass collection
A common mistake: collecting huge volumes of mobile telemetry “just in case” and hoping AI will sort it out.
A better approach:
- collect minimal signals
- use AI models to prioritize anomalies
- escalate to human review with strict access controls
- enrich only when an alert crosses a threshold
This limits exposure if logs are breached and reduces compliance headaches.
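A minimal sketch of that gate, with stand-ins (model_score, enrich) for whatever anomaly model and enrichment service you run:

```python
import queue

def triage(events, model_score, enrich, review_queue, threshold=0.8):
    """Score every event on minimal signals; enrich and escalate only
    those that cross the threshold."""
    for event in events:
        score = model_score(event)  # minimal signals in, a score out
        if score < threshold:
            continue                # below the gate: no enrichment, no human eyes
        review_queue.put({"event": event, "score": score,
                          "context": enrich(event)})  # enrich only now

# Trivial stubs to show the flow:
q = queue.Queue()
triage(events=[{"failed_logins": 12}, {"failed_logins": 1}],
       model_score=lambda e: min(e["failed_logins"] / 10, 1.0),
       enrich=lambda e: {"extra": "fetched only past the threshold"},
       review_queue=q)
```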
Prepare for “trust audits,” not just security audits
Security audits ask, “Is it safe?” Trust audits ask, “Is it defensible?” That includes:
- independent assessments of app permissions and data flows
- clear user communication about what’s collected and why
- documented legal basis and oversight pathways
- a credible opt-out or alternative path for high-risk users
If you can’t explain your mobile security program in plain language, you’re one headline away from a reputational incident.
People also ask: does anti-fraud require surveillance?
No—anti-fraud requires identity and device integrity, not blanket monitoring. The most effective anti-fraud systems focus on:
- verifying device authenticity (e.g., IMEI integrity, attestation; see the sketch after this list)
- detecting anomalous transactions and account behavior
- blocking known bad infrastructure
- rapid user reporting and recovery workflows
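On the IMEI-integrity point: an IMEI’s fifteenth digit is a Luhn check digit, so malformed or mistyped identifiers can be rejected locally, with no lookup and no reporting. A minimal sketch:

```python
def imei_is_well_formed(imei: str) -> bool:
    """Validate structure only: 15 digits ending in a Luhn check digit.
    This can't prove a device is genuine, but it filters junk on-device."""
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        d = int(ch)
        if i % 2 == 1:  # double every second digit (odd 0-based index)
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

assert imei_is_well_formed("490154203237518")  # widely used example IMEI
```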
Surveillance usually shows up when programs chase broad secondary goals (“we might need it later”), or when governance can’t keep up with the temptation to expand scope.
The stance I’ll take: if a security program can’t survive transparency, it shouldn’t be deployed at scale.
What to do next: a practical playbook for privacy-first AI security
If you’re building (or buying) AI-driven mobile security—whether for an enterprise workforce or a public-sector program—use this checklist to pressure-test your approach:
- Permission minimization: Does the solution request only what it needs for specific outcomes?
- On-device first: Can detection happen locally with minimal cloud dependence?
- Telemetry budget: Do you have a published schema, retention limits, and deletion enforcement?
- Model governance: Can you explain model inputs, outputs, and failure modes to non-ML stakeholders?
- Independent validation: Are there third-party audits for security and privacy controls?
- Abuse resistance: What prevents insiders (or agencies) from using the system for non-security purposes?
That last point is the hard one—and it’s exactly where many well-intended programs fall apart.
The bigger AI in Defense & National Security theme is showing up in real time: public legitimacy is now part of your security stack. If your AI security strategy can protect users while respecting boundaries, you’ll move faster, comply more easily, and get adoption without coercion.
Where do we go from here? As governments and enterprises push for stronger mobile defenses in 2026, the solutions that win won’t be the ones that collect the most data—they’ll be the ones that can prove they don’t need to.