AI can reduce surveillance risk in national security apps using on-device detection, federated learning, and audit analytics—without weakening fraud protection.

AI Guardrails for Security Apps Without Mass Surveillance
India’s Department of Telecommunications (DoT) tried to do something many security teams secretly wish they could do: standardize protection at the device level by pushing a cybersecurity app onto every phone. The mandate didn’t last a week. After widespread backlash, the government rolled it back.
The uncomfortable truth is that the idea wasn’t purely bad. India is fighting an enormous volume of phone-enabled fraud, SIM abuse, spam, and device theft, and a single citizen-facing tool can genuinely help people protect themselves. The problem is that an undeletable, state-issued app—especially one paired with a national phone database—looks less like “security” and more like infrastructure for surveillance.
This post is part of our AI in Cybersecurity series, and I’m going to take a clear stance: governments (and enterprises) can improve mobile security at scale without forcing always-on monitoring. The path forward is a mix of privacy-by-design policy and AI-powered oversight that proves, rather than promises, that users are protected.
What India’s rollback really signals
A fast policy reversal usually means one thing: trust collapsed faster than adoption could grow. India’s app, Sanchar Saathi (“Communication Companion”), was designed to help citizens report stolen phones, flag unknown SIMs registered in their name, and respond to fraud and spam. According to official statements, the program has already:
- Recorded more than 14 million downloads (since January 2025)
- Deactivated 4.2 million lost/stolen devices
- Traced 2.6 million devices
- Retrieved 700,000 devices
- Disconnected 14 million mobile connections marked “Not My Number”
- Blocked 600,000 IMEIs linked to fraud
- Prevented estimated losses of 4.75 billion rupees (about $53 million)
Even if you treat those numbers as directional, the message is clear: a centralized anti-fraud workflow can work.
So why did the mandate fail? Because the order didn’t just ask for pre-installation. It reportedly required:
- Pre-installation on new devices
- Retroactive installation on existing devices
- Visible and easily accessible placement of the app and its functions
- No ability for users to disable or restrict it
That last point is where security arguments run out of road. When users can’t turn something off, the security conversation becomes a governance conversation.
The core problem: the same data helps security and surveillance
A national repository of device identifiers (like IMEI), SIM associations, and user reports is powerful. It can stop black-market resale of stolen phones and interrupt SIM swap fraud. It can also be used to map identities, locations, relationships, and behavior—especially if combined with telecom metadata.
This is why public reaction tends to be binary: people either see “protection” or “spying,” and mandates force them into the worst possible version of that debate.
Security outcomes aren’t enough. Citizens and customers want enforceable limits.
Where AI fits: scale security and reduce collection
Here’s the better approach: use AI to minimize what you collect, prove what you’re doing, and detect abuse of the system itself.
AI in cybersecurity is often marketed as faster detection. The more important use case in sensitive, citizen-scale programs is reducing the need for intrusive telemetry while still catching fraud.
1) On-device AI for fraud detection (privacy-preserving by default)
The cleanest model is simple: detect locally, share minimally.
Instead of routing every signal to a central server, modern mobile architectures can run lightweight models on the device to spot patterns like:
- Smishing language and malicious URLs
- Suspicious call patterns and caller ID inconsistencies
- Overlay attacks and accessibility-service abuse
- App behavior that resembles credential harvesting
When the model flags something, you can send a narrow, event-based report (for example: “this URL is associated with a known phishing cluster”) rather than continuous logs.
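To make "detect locally, share minimally" concrete, here is a minimal sketch. The classifier is a stand-in heuristic (a real deployment would use a small on-device model), and every name here is illustrative rather than a description of any real app's API:

```python
import hashlib
import re
from dataclasses import dataclass

# Stand-in for a lightweight on-device model (e.g., a distilled text classifier).
SMISHING_PATTERNS = [
    r"verify your (bank|kyc) details",
    r"your (sim|number) will be (blocked|deactivated)",
    r"click .*http",
]

@dataclass
class FraudEvent:
    """Narrow, event-based report: no message content, no sender identity."""
    event_type: str
    url_hash: str | None   # hash of the suspicious URL, not the URL itself
    model_version: str

def classify_sms(text: str) -> FraudEvent | None:
    """Runs entirely on-device; returns a minimal event only when something is flagged."""
    if not any(re.search(p, text, re.IGNORECASE) for p in SMISHING_PATTERNS):
        return None  # nothing leaves the device
    urls = re.findall(r"https?://\S+", text)
    url_hash = hashlib.sha256(urls[0].encode()).hexdigest() if urls else None
    return FraudEvent(event_type="suspected_smishing",
                      url_hash=url_hash,
                      model_version="2025.1")

# Only this minimal event would be queued for a user-visible upload.
event = classify_sms("Your SIM will be blocked. Verify your KYC details http://bad.example")
print(event)
```

The heuristic itself isn't the point. The shape of the payload is: a label, a hash, and a model version, not the message or the sender.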
A useful rule I’ve found: if the system requires always-on collection to be effective, it’s not a security product—it’s a monitoring product.
2) Federated learning to improve models without centralizing raw data
If you want nationwide improvement in detection quality, you don’t need to centralize everyone’s data.
With federated learning, phones compute model updates locally and share only those updates, and secure aggregation can ensure the server sees nothing but the combined result. The central system learns from population-scale signals without ingesting individual message contents, call history, or detailed device activity.
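To make the mechanism concrete, here is a deliberately simplified simulation of federated averaging in plain NumPy. There is no real FL framework or secure aggregation protocol here; the data, model, and round count are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train locally on one device's private data; only the weight delta is shared."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))          # logistic regression
        grad = X.T @ (preds - y) / len(y)
        w -= lr * grad
    return w - weights                             # an update, not raw data

# Simulate 3 devices, each holding data that never leaves the "device".
devices = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]
global_w = np.zeros(4)

for round_ in range(10):
    # With secure aggregation, the server would only ever see this averaged
    # quantity, never any single device's update.
    updates = [local_update(global_w, X, y) for X, y in devices]
    global_w += np.mean(updates, axis=0)

print("global model after 10 rounds:", np.round(global_w, 3))
```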
This matters in any government-mandated (or enterprise-mandated) security app because it creates a credible privacy story:
- Better detection over time
- Less sensitive data leaving the device
- Clear boundaries between “protection” and “profiling”
3) AI to detect misuse of the cybersecurity app itself
A state app can become a high-value target. Attackers (or insiders) may try to:
- Abuse administrative interfaces
- Query the underlying database for tracking
- Target activists, journalists, executives, or political opponents
- Use the app as a privileged foothold
This is where AI-based anomaly detection helps in a way most people don’t talk about: AI can monitor the monitors.
Examples of misuse signals that AI can catch quickly:
- Unusual lookup volumes by a specific operator account
- Searches clustered around high-profile individuals
- Repeated queries without corresponding fraud reports
- Access attempts outside standard time/location patterns
Done right, these detections feed an oversight workflow with immutable audit trails and human review.
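As an illustration of monitoring the monitors, even a trivial baseline catches the first signal in that list: operator accounts whose daily lookup volume is far outside their own history. The field names and threshold below are assumptions, not a description of any real system:

```python
import statistics

# daily_lookups[operator_id] = daily query counts reconstructed from the audit log
daily_lookups = {
    "op-101": [42, 38, 51, 45, 40, 44, 39],
    "op-202": [30, 29, 35, 33, 31, 28, 310],   # sudden spike on the last day
}

def flag_anomalous_operators(history, z_threshold=3.0):
    """Flag operators whose latest daily volume deviates sharply from their own baseline."""
    flagged = []
    for op, counts in history.items():
        baseline, latest = counts[:-1], counts[-1]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1.0
        z = (latest - mean) / stdev
        if z > z_threshold:
            flagged.append((op, latest, round(z, 1)))
    return flagged

# Each flag should open a reviewable case, not trigger automatic action.
for op, count, z in flag_anomalous_operators(daily_lookups):
    print(f"review: {op} made {count} lookups today (z-score {z})")
```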
The “undeletable app” is a governance failure, not a UX failure
Security teams sometimes mistake pushback for misunderstanding. That’s not what happened here.
The backlash came from a rational place: people have seen advanced spyware scandals and broad surveillance powers in many countries. When the same entity that can compel telecom cooperation also wants an unremovable app on every device, “trust us” isn’t a control.
What would have worked better than a mandate
If the goal is high adoption without social backlash, the winning pattern looks like this:
- Opt-in, not forced: users choose, and adoption is earned
- Uninstall allowed (or, at minimum, the ability to disable the app and revoke its permissions)
- Permission minimization: only ask for what the feature needs
- Clear “what we don’t do” language inside the app
- Independent security and privacy audits with published summaries
- Open technical transparency: documented data flows, retention, and access policies
Mandates create a strange incentive problem: they optimize for deployment metrics (installed base) rather than real security outcomes (fraud prevented with minimal harm).
A practical blueprint for “trusted national anti-fraud apps”
Answer first: the safest way to run a national cybersecurity app is to treat it like critical infrastructure—measurable controls, privacy guardrails, and continuous oversight—then use AI to reduce data collection while improving detection.
Below is a blueprint that also translates well to large enterprises rolling out mobile security controls (MDM/MAM, secure access, identity protections) across thousands of employees.
1) Build a data boundary: collect less than you think you need
Start with a written boundary that’s enforced technically:
- No continuous location tracking unless the user triggers a theft workflow
- No message content ingestion for anti-smishing (use on-device classification)
- Short retention windows for fraud reports unless required for investigation
- Separate identity data from behavioral event data where possible
If you can’t write the boundary clearly, the system is too broad.
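One way to make the boundary enforceable rather than aspirational is to express it as code that the collection pipeline must check before accepting anything. A hypothetical sketch, with field names and limits chosen purely for illustration:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class DataBoundary:
    """The written policy, expressed as a machine-checkable object."""
    allowed_event_types: frozenset
    max_retention: timedelta
    location_allowed: bool            # only true inside an active theft workflow
    content_ingestion_allowed: bool   # message bodies never leave the device

CITIZEN_APP_BOUNDARY = DataBoundary(
    allowed_event_types=frozenset({"suspected_smishing", "stolen_device_report",
                                   "unknown_sim_report"}),
    max_retention=timedelta(days=90),
    location_allowed=False,
    content_ingestion_allowed=False,
)

def accept_event(event_type: str, contains_content: bool,
                 boundary: DataBoundary = CITIZEN_APP_BOUNDARY) -> bool:
    """Ingestion gate: anything outside the written boundary is rejected."""
    if event_type not in boundary.allowed_event_types:
        return False
    if contains_content and not boundary.content_ingestion_allowed:
        return False
    return True

print(accept_event("suspected_smishing", contains_content=False))  # True
print(accept_event("full_sms_dump", contains_content=True))        # False
```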
2) Add “proof of restraint”: cryptographic audit + independent review
Trust comes from verifiability. A strong model includes:
- Immutable audit logs for every privileged query
- Dual-control approvals for sensitive searches
- Routine third-party audits of access patterns
- Public-facing transparency reports (even if summarized)
AI can strengthen this by automatically flagging audit anomalies, but governance has to exist first.
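"Immutable audit logs" can start as something as simple as a hash chain that makes after-the-fact editing detectable. This is a minimal sketch, not a full transparency-log design such as a Merkle tree with external witnesses:

```python
import hashlib
import json
import time

def append_entry(chain, operator_id, action, target):
    """Append a privileged-query record linked to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"ts": time.time(), "operator": operator_id,
              "action": action, "target": target, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def verify(chain):
    """Any tampering with an earlier entry breaks every later link."""
    for i, rec in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in rec.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != expected_prev or rec["hash"] != recomputed:
            return False
    return True

log = []
append_entry(log, "op-101", "imei_lookup", "case-0001")
append_entry(log, "op-101", "sim_lookup", "case-0002")
print(verify(log))             # True
log[0]["operator"] = "op-999"  # simulated tampering
print(verify(log))             # False
```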
3) Use AI as a filter, not a vacuum
The point of AI isn’t to ingest everything. It’s to reduce noise and limit data movement.
Good AI design for citizen-scale security uses:
- On-device inference
- Event-based reporting
- Aggregation over identification
- Differential privacy where feasible
If the architecture centralizes raw data “because AI needs it,” that’s usually a design shortcut.
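"Aggregation over identification" and "differential privacy where feasible" can be combined in the reporting path: the server receives noisy counts per region or scam cluster, never per-user events. A toy sketch using the Laplace mechanism, where the epsilon value and counting scheme are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_count(true_count: int, epsilon: float = 1.0, sensitivity: int = 1) -> float:
    """Laplace mechanism: each user changes the count by at most `sensitivity`,
    so adding Laplace(sensitivity/epsilon) noise gives epsilon-differential privacy."""
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Per-region counts of "suspected_smishing" events in the last hour.
true_counts = {"region-A": 1842, "region-B": 97, "region-C": 12}

released = {region: round(dp_count(c)) for region, c in true_counts.items()}
print(released)   # noisy counts are enough to spot where a campaign is surging
```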
4) Establish a “citizen incident response” loop
One overlooked reason these programs succeed or fail is how quickly people get outcomes.
A credible loop looks like:
- User reports stolen device / unknown SIM / scam attempt
- System confirms actions taken (block IMEI, freeze SIM, warn others)
- User can track status (traced, recovered, closed)
- User can appeal or correct errors
AI can speed triage, cluster scams, and prioritize cases, but humans must remain accountable for irreversible actions.
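The loop above is easy to encode as an explicit state machine, which also makes "appeal or correct errors" a first-class state rather than an afterthought. A hypothetical sketch with state names invented for illustration:

```python
from enum import Enum, auto

class CaseState(Enum):
    REPORTED = auto()
    ACTIONS_TAKEN = auto()   # IMEI blocked / SIM frozen, confirmed to the user
    TRACED = auto()
    RECOVERED = auto()
    APPEALED = auto()        # user disputes an action or corrects an error
    CLOSED = auto()

# Irreversible or contested transitions should require a human decision, not a model score.
ALLOWED = {
    CaseState.REPORTED: {CaseState.ACTIONS_TAKEN, CaseState.CLOSED},
    CaseState.ACTIONS_TAKEN: {CaseState.TRACED, CaseState.APPEALED, CaseState.CLOSED},
    CaseState.TRACED: {CaseState.RECOVERED, CaseState.CLOSED},
    CaseState.RECOVERED: {CaseState.CLOSED},
    CaseState.APPEALED: {CaseState.ACTIONS_TAKEN, CaseState.CLOSED},
    CaseState.CLOSED: set(),
}

def advance(current: CaseState, target: CaseState) -> CaseState:
    if target not in ALLOWED[current]:
        raise ValueError(f"cannot move from {current.name} to {target.name}")
    return target

state = advance(CaseState.REPORTED, CaseState.ACTIONS_TAKEN)
print(state.name)   # the user can always see which state their case is in
```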
People also ask: what’s the right balance between privacy and national security?
The balance is measurable. If a program can’t describe (and enforce) limits on data collection, access, and retention, it’s not balanced.
A workable standard for national anti-fraud systems is:
- Security value is specific (stolen phone blocking, SIM fraud prevention)
- Data use is narrow (only for those purposes)
- Oversight is external (not just the implementing agency)
- AI reduces centralization (on-device + aggregated learning)
Security and privacy aren’t enemies. Vague mandates are the enemy.
What this means for CISOs and product leaders (not just governments)
If you’re in an enterprise, you might think this is purely a public-sector story. It isn’t.
Companies are also deploying mandatory agents: endpoint security, mobile threat defense, “digital experience monitoring,” identity telemetry, and now AI copilots embedded across work devices. Employees will tolerate a lot—until they feel watched.
India’s rollback is a reminder that:
- Forced installation creates adversarial users
- Transparency beats policy PDFs
- AI increases both power and risk—so oversight must rise with it
If you’re selling or deploying AI-driven security controls, you’ll generate more trust (and fewer legal headaches) by designing for least privilege, minimal telemetry, and provable governance.
A better way forward: AI that earns consent
India’s Sanchar Saathi story is a case study in a modern reality: security programs collapse when they ask for blind trust at national scale. The rollback wasn’t a failure of cybersecurity ambition. It was a failure to align security with citizen control.
For this AI in Cybersecurity series, the lesson is straightforward: AI can protect people without building a surveillance machine—if it’s used to minimize data, detect abuse, and prove compliance. The teams that win in 2026 won’t be the ones collecting the most. They’ll be the ones collecting the least while still stopping fraud.
If you’re considering a mandatory security app—whether for a country, a telecom, or a workforce—ask one question before you ship: Can we demonstrate, technically, that the system can’t be repurposed for monitoring? If the honest answer is “no,” the backlash is predictable, and deserved.