Passive LiDAR Detection: How AI Spots Hidden Sensors

साइबर सुरक्षा में AI series · By 3L3C

Passive LiDAR detection can spot when a phone’s depth sensor activates—often before a photo is taken. Learn how AI-ready sensor hardware enables scalable security.

Tags: LiDAR · Infrared Sensors · Hardware Security · AI for Cybersecurity · Side-Channel Analysis · Startup Engineering

Most teams obsess over capturing data—video, audio, telemetry. The smarter security move in 2025 is often the opposite: detecting when someone else’s device is sensing you.

A surprising example comes from a real hardware experiment: building a small, low-power device that can passively detect the infrared LiDAR dot-grid emitted by certain smartphones (notably “Pro” models) when the camera app opens—before a photo is taken. That’s not a party trick. It’s a practical pattern for startups building AI-enabled security products: detect device behavior through side-channel signals, convert it into structured data, and let models classify what’s happening.

This post is part of our “साइबर सुरक्षा में AI” (AI in Cybersecurity) series, where we focus on how AI improves threat detection, prevention, and security operations. Passive LiDAR detection sits right at the intersection of sensor hardware innovation and AI-driven anomaly detection, and it’s an underrated product wedge for security-focused founders.

Why passive LiDAR detection matters for AI security

Passive LiDAR detection matters because it turns “invisible sensing” into an observable event stream that AI can analyze. If a device emits a known optical pattern (wavelength + timing + spatial distribution), you can detect that emission without interacting with the device at all.

Here’s the practical security framing:

  • Device behavior analysis: “Camera opened” and “depth sensor active” become measurable signals.
  • Zero-permission monitoring: no app install, no pairing, no endpoint agent.
  • Scalable AI telemetry: repeated observations become training data for classifiers and anomaly detectors.

This matters in real environments—workplaces, exam centers, R&D labs, maker spaces, events—where policy says “no recording” but enforcement is hard. Cameras are visible; sensing is not. Passive detection flips that asymmetry.

A strong stance: if your security controls rely only on policy and signage, you don’t have security—you have hope. Passive sensing gives you evidence.

What you’re detecting: the iPhone-style LiDAR signature

The core idea is simple: some phones emit a 940nm infrared LiDAR dot grid that flashes at predictable rates when certain apps activate depth features. You don’t need to see the image. You just need to see the light.

From the underlying research and teardowns of smartphone depth systems, the useful engineering constraints look like this:

  • Wavelength: ~940nm infrared (invisible to the human eye)
  • Temporal pattern: a prominent 60 Hz component (with additional structure and harmonics)
  • Spatial pattern: a lattice / dot-grid projected into the scene
  • Activation behavior: can turn on when the default camera app starts, not only when a photo is captured

That last point is the security punchline: if your goal is “detect capture attempts,” you might be late. If your goal is “detect sensor activation,” you can be early.

“But won’t everything trigger false positives?”

Yes—unless you design around it.

Displays, lighting, and consumer electronics produce tons of periodic signals at 30/60/120 Hz. If you build a naive light sensor and look for 60 Hz, you’ll end up detecting the room—not the LiDAR.

So the detection problem becomes: find 60 Hz at 940nm and then add enough structure to distinguish a dot-grid source from ambient flicker.
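To make that concrete, here’s a minimal Python sketch of the temporal half of the problem, using the Goertzel algorithm to measure energy at a single frequency far more cheaply than a full FFT (which is also why it suits small MCUs). The 1 kHz sample rate and the neighbor-band ratio test are assumptions to tune, not values from the original build.

```python
import math

def goertzel_power(samples, sample_rate_hz, target_hz):
    """Estimate signal power at target_hz via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * target_hz / sample_rate_hz)  # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

def looks_like_lidar_flicker(samples, fs=1000.0, ratio=5.0):
    """Flag a window only if 60 Hz clearly dominates nearby bands.

    Comparing against neighbor bands helps reject broadband ambient
    flicker that raises energy everywhere, not just at 60 Hz.
    """
    p60 = goertzel_power(samples, fs, 60.0)
    p_near = (goertzel_power(samples, fs, 45.0) +
              goertzel_power(samples, fs, 75.0)) / 2.0
    return p60 > ratio * (p_near + 1e-12)
```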

Building the sensor stack: from photons to features

A passive LiDAR detector is a classic signal pipeline: light → analog → digital → features → decision. Where startups often stumble is skipping “features” and jumping straight to “AI.” Don’t. Good features make models cheaper, faster, and more reliable.
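As a skeleton, that pipeline is just composable stages. Everything below is hypothetical scaffolding; the placeholder features get replaced by the real ones built up in the steps that follow:

```python
from typing import Dict, Iterable, List

def extract_features(window: List[float]) -> Dict[str, float]:
    # Placeholder features; the real ones (60 Hz energy, coincidence
    # counts, phase spread) are developed in the steps below.
    mean = sum(window) / len(window)
    return {"mean": mean, "peak": max(window)}

def decide(feats: Dict[str, float]) -> bool:
    # Start with a rule; swap in a model once feature logs exist.
    return feats["peak"] - feats["mean"] > 0.5

def pipeline(adc_windows: Iterable[List[float]]) -> Iterable[bool]:
    """digital samples -> features -> decision, one window at a time."""
    for window in adc_windows:
        yield decide(extract_features(window))
```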

Step 1: Sensing 940nm IR (photodiodes beat clever hacks)

The experiment behind the referenced post tried multiple approaches:

  • LEDs wired as photodiodes (surprisingly usable)
  • PIN silicon photodiodes (with and without bandpass filtering)
  • 940nm peak photodiodes (cleanest signal)

The practical takeaway: choose sensors that are naturally selective. A 940nm peak photodiode reduces the burden on downstream filtering and lowers false positives.

If you’re productizing, this isn’t just performance—it’s compliance and support cost. Every false alert becomes a customer support ticket.

Step 2: Conditioning the signal (Schmitt trigger vs op-amp)

You need to convert tiny analog current changes into a stable digital representation. Two common paths:

  • Fast op-amp amplification and thresholding
  • Schmitt trigger conditioning (hysteresis reduces chatter)

In the referenced build, the Schmitt trigger approach performed comparably to the op-amp version while keeping the design simpler.

My rule: prefer the simplest analog front-end that preserves the timing you care about. Complexity belongs in software only when it truly buys accuracy.
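To see why hysteresis reduces chatter, here’s a tiny software model of a Schmitt trigger. The threshold values are illustrative, not taken from the referenced build:

```python
def schmitt(samples, v_low=0.4, v_high=0.6):
    """Software model of a Schmitt trigger: two thresholds, one state.

    A single threshold toggles on every noise crossing near the edge;
    the gap between v_low and v_high absorbs small ripples, so the
    digital output preserves the timing you care about.
    """
    state, out = False, []
    for v in samples:
        if not state and v > v_high:
            state = True
        elif state and v < v_low:
            state = False
        out.append(state)
    return out
```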

Step 3: Sampling + timing analysis on a low-power MCU

To detect a 60 Hz signal (and harmonics), you don’t need a powerhouse CPU, but you do need:

  • stable timing
  • fast interrupts or sampling loops
  • enough headroom to compute features across multiple sensors

A common choice is a low-power microcontroller clocked at tens of MHz (the experiment used a 48 MHz MCU). For a startup, this class of part matters because it enables:

  • battery-powered deployments
  • small form factors
  • low BOM cost

And that’s where scale comes from.
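Some back-of-envelope arithmetic shows why tens of MHz is plenty of headroom. The sensor count and sample rate below are assumptions, not figures from the experiment:

```python
# Illustrative timing budget for a 48 MHz-class MCU.
cpu_hz = 48_000_000
sample_rate_hz = 1_000   # ~16 samples per 60 Hz cycle, well above Nyquist
n_sensors = 6

cycles_per_sample = cpu_hz / (sample_rate_hz * n_sensors)
print(f"{cycles_per_sample:,.0f} cycles per sensor sample")
# -> 8,000 cycles: enough for an ADC read plus a Goertzel update per channel.
```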

The real trick: multi-sensor correlation (spatial features)

The most important insight is that one sensor isn’t enough. A LiDAR dot grid isn’t a single beam. It’s many beams spread across space. That spatial structure creates a signal pattern that ambient IR noise usually can’t mimic.

So you add multiple discrete photodiodes, each looking at slightly different parts of the scene. Now you can compute features like:

  • coincidence: how many sensors see the signal at the same time?
  • partial detection: some sensors detect it while others don’t (typical of a dot grid)
  • phase relationships: timing offsets between sensors
  • burst vs steady behavior: whether the 60 Hz component is continuous or enveloped

This is a bridge point directly into AI:

When you add spatial correlation, you stop “detecting light” and start “detecting behavior.”
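Concretely, a per-window feature extractor over multiple photodiodes might look like the sketch below. The names are hypothetical, and the per-sensor phase estimates are assumed to come from the 60 Hz analysis on each channel:

```python
import statistics

def spatial_features(channel_hits, channel_phases):
    """Turn per-sensor detections into the spatial features listed above.

    channel_hits:   one boolean per photodiode for this time window
    channel_phases: per-sensor phase estimates (radians) at 60 Hz
    """
    n = len(channel_hits)
    hits = sum(channel_hits)
    active = [p for p, h in zip(channel_phases, channel_hits) if h]
    return {
        "coincidence": hits / n,   # fraction of sensors firing together
        "partial": 0 < hits < n,   # dot-grid signature: some, not all
        "phase_spread": statistics.pstdev(active) if len(active) > 1 else 0.0,
    }

# Example window: 4 of 6 sensors see 60 Hz, tightly phase-aligned.
feats = spatial_features(
    [True, True, False, True, True, False],
    [0.10, 0.12, 0.00, 0.09, 0.11, 0.00],
)
```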

Why this is startup-friendly

Multi-sensor correlation is a great fit for founders because it gives you a defensible data moat without needing massive datasets.

  • You can start with rule-based detection (fast time-to-market).
  • You can log feature vectors (privacy-preserving).
  • You can train lightweight models later (incremental improvement).

That’s a healthier path than shipping an “AI model” on day one that you can’t explain or debug.

Where AI fits: from heuristics to robust classification

AI should sit on top of a feature pipeline, not replace it. In passive sensing, the environment is messy—reflections, sunlight, IR-rich spaces, screens, and motion.

A practical AI roadmap looks like this:

Phase 1: Deterministic detection (ship something reliable)

Start with measurable thresholds:

  • energy in the 60 Hz band at 940nm
  • minimum number of sensors involved
  • consistency over N windows (e.g., 500 ms to 2 s)

This gets you a stable MVP.
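A minimal sketch of that rule, with placeholder thresholds you would tune against your own logs:

```python
def phase1_detect(windows, min_sensors=3, min_streak=4):
    """Deterministic rule: enough sensors, consistently, over time.

    windows: per-window feature dicts, e.g.
             {"p60_ratio": 6.1, "sensors_hit": 4}
    With 500 ms windows, min_streak=4 means 2 s of sustained signal.
    """
    streak = 0
    for w in windows:
        hit = w["p60_ratio"] > 5.0 and w["sensors_hit"] >= min_sensors
        streak = streak + 1 if hit else 0
        if streak >= min_streak:
            return True
    return False
```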

Phase 2: Supervised classification (reduce false alarms)

Collect labeled scenarios:

  • phone camera open (target)
  • phone present but idle
  • room with large displays
  • sunlight near windows
  • IR remote usage
  • multiple phones

Then train a small classifier on features, not raw waveforms:

  • logistic regression / gradient boosting
  • tiny 1D CNN if you really need it
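A Phase 2 sketch, assuming scikit-learn and a handful of logged feature rows (the numbers here are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# One row per window: [p60_ratio, coincidence, phase_spread, burst_flag]
# y: 1 = labeled "phone camera open", 0 = everything else.
X = np.array([[6.2, 0.67, 0.05, 1],
              [1.1, 0.17, 0.90, 0],   # ambient flicker
              [5.8, 0.83, 0.08, 1],
              [0.9, 1.00, 0.02, 0]])  # display lighting every sensor
y = np.array([1, 0, 1, 0])

clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X, y)
print(clf.predict_proba([[5.5, 0.50, 0.10, 1]])[:, 1])  # confidence score
```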

Phase 3: Anomaly detection (handle unknown environments)

For real deployments, “unknown unknowns” dominate. Add:

  • one-class models
  • drift detection
  • environment-specific baselines (per room / per site)
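A minimal Phase 3 sketch with an off-the-shelf one-class model (scikit-learn’s IsolationForest here; the baseline data is a random stand-in for real per-site logs):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Fit only on feature windows logged while THIS room was known-clean.
baseline = np.random.default_rng(0).normal(size=(500, 4))  # stand-in data

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)
scores = model.decision_function(baseline)  # lower = more anomalous
# Escalate only windows that are both rule-positive AND anomalous here.
```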

In “साइबर सुरक्षा में AI” terms, this is the same playbook as SOC analytics:

  • baseline normal
  • detect deviation
  • reduce analyst fatigue with confidence scoring

Product use cases startups can actually sell in 2025

Passive LiDAR detection becomes valuable when it’s packaged as a clear outcome, not a cool sensor. Here are concrete product angles that map to budgets and buyers.

1) No-camera zones with audit trails

Factories, labs, and design studios often ban cameras. Enforcement usually relies on guards.

A passive detector can provide:

  • real-time alerts when depth sensing activates
  • timestamps and location-based logs
  • integration into access control workflows

2) Exam and assessment integrity

Cheating prevention is moving beyond metal detectors. Modern devices can scan documents, map rooms, and deliver assistance in real time.

Passive sensing can support:

  • pre-check screening of “camera activated” behavior
  • continuous monitoring without capturing images

3) Executive privacy in meeting rooms

A compact sensor near a boardroom display can detect when a phone is actively sensing the room.

The key is messaging: you’re not recording anyone; you’re detecting sensor activation.

4) Security R&D: side-channel detection as a platform

Once you build the pipeline for optical side channels, you can generalize:

  • IR beacons
  • covert illumination
  • modulated laser sources

That’s a platform story investors understand: one core capability, many markets.

Implementation checklist: what to get right early

If you’re turning this into a startup prototype, prioritize decisions that reduce false positives and make data collection easy.

  1. Sensor selectivity first: start with 940nm peak photodiodes or add a narrow bandpass filter.
  2. Multiple sensors by design: 3–8 sensors is a practical early range for spatial correlation.
  3. Time windows that match behavior: detect over 0.5–2 seconds, not single cycles.
  4. Feature logging over raw logging: store frequency energy, coincidence counts, and confidence scores.
  5. Calibration mode: every environment has different IR noise. Add a quick “learn baseline” step (sketched after this list).
  6. Threat model clarity: are you detecting camera open, depth scan, or any IR emission? Don’t mix them.

A reliable detector with a clear threat model will beat a fancy model with fuzzy claims every time.

The uncomfortable questions (and why they help you build responsibly)

Passive detection raises predictable questions. Address them upfront if you want enterprise buyers.

“Is this surveillance?”

It can be, if you design it badly. A responsible design stance:

  • don’t store audio/video
  • don’t identify individuals
  • store minimal event metadata
  • document what you detect and what you don’t
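One way to hold that line in code is to make the stored record a closed schema, so there is nowhere to put imagery or identity even by accident. The fields below are a suggested minimum, not a spec:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class DetectionEvent:
    """Everything stored per alert: no imagery, no audio, no identity."""
    timestamp: str      # when sensing was detected
    site: str           # room/zone label, never a person
    sensors_hit: int    # how many photodiodes fired
    confidence: float   # rule or model confidence, 0..1

event = DetectionEvent(
    timestamp=datetime.now(timezone.utc).isoformat(),
    site="boardroom-2",
    sensors_hit=4,
    confidence=0.87,
)
print(asdict(event))
```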

“Can attackers evade it?”

Yes. They can block the LiDAR emitter, change angles, or use devices without depth sensing.

That’s fine—security is layered. Passive LiDAR detection is one control, not the whole system.

“Will it work everywhere?”

Not perfectly. Sunlight and IR-heavy environments can be brutal. That’s exactly why AI-based filtering and site baselining matter in production.

Where this goes next in “साइबर सुरक्षा में AI”

Passive LiDAR detection is a clean example of a bigger theme in AI security: turn messy real-world signals into reliable machine-readable evidence. Once you have that evidence, you can automate decisions, escalate only high-confidence events, and continuously improve using feedback loops.

If you’re building in the startup and innovation ecosystem, this is a strong wedge: it’s tangible hardware, a clear problem, and a path to AI differentiation that doesn’t depend on scraping the internet for data.

The forward-looking question to sit with: as phones, glasses, and wearables add more “always-on” sensors, what other emissions will become the next passive security telemetry?