AI chatbot prompts and responses were harvested via browser extensions. Learn how it happened and how to stop AI data exfiltration in your org.

AI Chat Data Stolen by Extensions: What to Do Now
Roughly 8 million users installed a “privacy” browser extension that quietly copied their AI chatbot conversations and shipped them off-device. That number matters, but the detail that should really change how you run security is this: the data harvesting worked even when the VPN wasn’t connected.
If your teams use ChatGPT, Claude, Gemini, Copilot, Perplexity, Grok, or Meta AI for daily work (and most do by December 2025), you’re not just managing “AI risk.” You’re managing a new, high-value data stream—prompts, outputs, and context—that can reveal customer info, credentials, architecture diagrams, incident details, source code, and legal strategy in one paste.
This post is part of our AI in Cybersecurity series, and I’m going to take a stance: AI chatbot usage without strong browser controls is the new “unmanaged endpoint” problem. The fix isn’t banning AI. It’s building visibility and guardrails that scale—ideally with AI-powered security controls that can spot exfiltration patterns faster than a human can.
What happened: a “VPN” extension harvested AI chats at scale
A browser extension marketed as a privacy tool—Urban VPN Proxy (plus related extensions from the same publisher)—was found collecting conversation data from major AI assistants. Researchers reported the behavior affected roughly 8 million users across Chrome and Edge ecosystems.
The part many teams miss: this wasn’t a rare edge case or a one-off compromise. The collection capability was enabled by default in newer versions (from around version 5.5.0 onward, per reporting). Users didn’t get a clear “off switch.” The practical reality was simple: if the extension was installed, prompts and responses could be captured.
Why this incident is different from “normal” browser tracking
Plenty of extensions collect browsing telemetry. This is worse because AI chat data is unusually dense and sensitive:
- A single prompt can contain internal context (names, systems, secrets, incident timelines).
- The response often includes structured output that attackers can reuse (code snippets, configs, process steps).
- Conversations create a narrative thread—identifiers, timestamps, session metadata—that’s useful for profiling and re-identification.
AI chat isn’t just “content.” It’s often decision-making captured in plaintext.
How extensions can steal chatbot prompts and responses (in plain terms)
The technique described in the reporting is blunt and effective: the extension injects code into targeted AI websites and then intercepts the browser’s network calls.
The core trick: intercepting fetch() and XMLHttpRequest
Most modern web apps (including AI chat UIs) rely on standard browser APIs such as fetch() and XMLHttpRequest to send prompts and receive responses. If an extension can override or wrap those functions on a page, it can:
- See the user’s prompt payload before it’s rendered
- See the assistant’s response payload before it’s displayed
- Capture metadata (conversation IDs, timestamps, model identifiers)
That means the attacker doesn’t need to “read your screen.” They can read the raw traffic.
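To make that concrete, here’s a minimal TypeScript sketch of the fetch() wrapping pattern as it would look in an injected page script. It’s illustrative only: the endpoint path check and variable names are assumptions, not the actual extension’s code, and the same wrapping trick applies to XMLHttpRequest.

```typescript
// Illustrative sketch of fetch() interception from an injected page script.
// The "/conversation" endpoint check is a hypothetical example, not the
// real extension's logic.
const originalFetch = window.fetch.bind(window);

window.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
  const response = await originalFetch(input, init);
  try {
    const url =
      typeof input === "string" ? input : input instanceof URL ? input.href : input.url;
    if (url.includes("/conversation")) {
      const prompt = init?.body;                    // the user's outgoing prompt payload
      const reply = await response.clone().text();  // the assistant's response body
      // A malicious script would forward `prompt` and `reply` to its own
      // collection endpoint here.
      void prompt;
      void reply;
    }
  } catch {
    // Swallow errors so the host page keeps working and nothing looks broken.
  }
  return response;
};
```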
Why “marketplace reviewed” isn’t a real security control
These extensions carried strong social proof signals (ratings, “featured” placement). Enterprises over-trust those signals because they feel like vendor due diligence.
But app store review tends to focus on policy compliance and broad safety checks—not the kind of behavioral analysis security teams do during incident response. A privacy policy disclosure can also create a gray zone where behavior is technically “allowed” while still violating user expectations.
If there’s one sentence I want security leaders to reuse internally, it’s this:
Approval is not assurance. Browser extension risk has to be managed like software supply chain risk.
Why AI chatbot data is a goldmine for attackers (and data brokers)
AI chat data isn’t valuable because it’s trendy. It’s valuable because it compresses your organization’s thinking into an exportable format.
What attackers can extract from chat logs
Even without direct passwords, chat transcripts can expose:
- Internal system names and architecture (great for later phishing and lateral movement)
- Customer/account references (useful for social engineering)
- Source code and proprietary logic (especially from “help me debug this” prompts)
- Security controls and gaps (people ask AI how to bypass their own controls more than they admit)
- Incident details (what happened, what tooling you use, what you’ve tried)
During the holiday season, this risk spikes. December workflows are messy: end-of-year reporting, emergency change windows, on-call rotations, vendor renewals. People paste more context into AI to move faster.
The AI security problem most companies still underestimate
Most companies are focused on the AI provider: “What does the chatbot do with our data?”
That’s only half the story. Your browser is the new data plane. If a malicious (or overly invasive) extension sits between your employee and the AI provider, it doesn’t matter what the provider promises.
This is where AI-powered cybersecurity becomes practical—not theoretical.
Where AI-powered cybersecurity fits: detect, prioritize, respond
Stopping extension-based exfiltration isn’t just about writing a policy and hoping for compliance. You need controls that scale across thousands of endpoints and constant browser updates.
Here’s how AI can help in real programs.
1) Detect abnormal data movement from “normal” apps
Browser traffic to analytics-style endpoints can look boring. That’s the point.
AI-driven network analytics and endpoint telemetry correlation can flag patterns like:
- Unusual background POST volume from browser extension processes
- Repeated uploads shortly after visits to AI chat domains
- Compression + upload patterns consistent with log packaging
- New or rare domains contacted by the browser profile across a population
The win: you’re not relying on a known-bad signature. You’re spotting behavior.
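As a sketch of what “spotting behavior” can mean in practice, the snippet below baselines daily outbound POST volume per device-and-extension pair and flags big deviations. It assumes you already collect per-extension network telemetry; the field names and the z-score threshold are illustrative.

```typescript
// Minimal behavioral baseline: flag (device, extension) pairs whose daily
// outbound POST bytes sit far above their own historical average.
// Field names are illustrative; plug in your real telemetry schema.
interface UploadSample {
  device: string;
  extensionId: string;
  day: string;        // e.g. "2025-12-18"
  postBytes: number;  // total bytes POSTed by that extension that day
}

function flagAnomalies(history: UploadSample[], today: UploadSample[], zThreshold = 3): UploadSample[] {
  // Build per-pair baselines from history.
  const byPair = new Map<string, number[]>();
  for (const s of history) {
    const key = `${s.device}:${s.extensionId}`;
    if (!byPair.has(key)) byPair.set(key, []);
    byPair.get(key)!.push(s.postBytes);
  }

  return today.filter((s) => {
    const values = byPair.get(`${s.device}:${s.extensionId}`);
    if (!values || values.length < 7) return false;   // not enough baseline yet
    const mean = values.reduce((a, b) => a + b, 0) / values.length;
    const variance = values.reduce((a, b) => a + (b - mean) ** 2, 0) / values.length;
    const std = Math.sqrt(variance);
    // Treat "way above its own normal" as an anomaly worth triage.
    return std > 0 && (s.postBytes - mean) / std > zThreshold;
  });
}
```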
2) Prioritize what actually matters (and cut alert fatigue)
Security teams already have too many alerts. The value of AI in security operations is triage that’s context-aware:
- Is the user in engineering, finance, legal, or IR?
- Did this device recently access code repos or customer systems?
- Did the same extension appear across multiple endpoints this week?
When models can rank events by probable impact, you can move from “we saw something odd” to “we need to remove this extension company-wide today.”
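One simple way to make that ranking explicit is a scoring function over the context signals above. The signals and weights below are illustrative assumptions, not a tuned model.

```typescript
// Illustrative triage scoring: rank extension alerts by probable impact.
// Signals and weights are assumptions for the sketch, not a tuned model.
interface ExtensionAlert {
  userRole: "engineering" | "finance" | "legal" | "ir" | "other";
  recentSensitiveAccess: boolean;     // code repos, customer systems, etc.
  fleetInstallCount: number;          // endpoints that saw this extension this week
  extensionRequestsAllUrls: boolean;  // "read and change data on all websites"
}

function triageScore(a: ExtensionAlert): number {
  let score = 0;
  if (a.userRole !== "other") score += 30;       // high-value role
  if (a.recentSensitiveAccess) score += 30;      // device touched sensitive systems
  if (a.extensionRequestsAllUrls) score += 25;   // broad permissions, broad blast radius
  score += Math.min(15, a.fleetInstallCount);    // spreading across the fleet
  return score;                                  // 0-100: higher means handle first
}
```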
3) Automate containment at enterprise scale
When the threat is an extension with millions of installs, response speed matters.
Automation (supported by AI decisioning) can:
- Trigger device posture changes (restricted browser mode, conditional access)
- Push extension removal policies
- Quarantine affected browser profiles
- Open incidents with pre-filled evidence: domains, extension IDs, affected users
The goal is simple: minutes, not weeks.
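Here’s a sketch of what that containment flow can look like when wired into a SOAR-style playbook. Every function it calls is a hypothetical placeholder for whatever your MDM, browser-management, and ticketing integrations actually expose.

```typescript
// Containment playbook sketch. The three declared functions are hypothetical
// stand-ins for your real MDM / browser-management / ticketing integrations.
declare function pushRemovalPolicy(extensionId: string): Promise<void>;
declare function quarantineProfile(device: string): Promise<void>;
declare function openIncident(details: {
  title: string;
  evidence: unknown;
  severity: "medium" | "high";
}): Promise<void>;

async function containMaliciousExtension(extensionId: string, affectedDevices: string[]): Promise<void> {
  // 1. Stop the bleeding: block and force-remove the extension fleet-wide.
  await pushRemovalPolicy(extensionId);

  // 2. Reduce blast radius on devices known to have it installed.
  for (const device of affectedDevices) {
    await quarantineProfile(device); // e.g. restricted browser mode, conditional access
  }

  // 3. Open an incident with the evidence already attached.
  await openIncident({
    title: `Malicious extension removal: ${extensionId}`,
    evidence: { extensionId, affectedDevices },
    severity: affectedDevices.length > 50 ? "high" : "medium",
  });
}
```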
A practical playbook: reduce AI chatbot leakage via extensions
If you want a concrete plan you can execute in January planning sessions, here it is.
Step 1: Inventory and control extensions (don’t just “monitor”)
Start with what you can enforce:
- Whitelist approved extensions for corporate browsers
- Block installation from unknown publishers
- Separate policies for high-risk roles (engineering, finance, exec assistants)
If you’re thinking “users will hate this,” you’re right—briefly. Then they’ll forget. Security should be boring.
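For Chromium-based browsers, the enforcement piece is usually a managed policy that blocks all extensions by default and then allowlists the approved set. The sketch below mirrors Chrome’s ExtensionInstallBlocklist and ExtensionInstallAllowlist policy keys; the extension ID shown is a placeholder, and Edge has equivalent policies.

```typescript
// Sketch of a Chrome managed-browser policy enforcing an extension allowlist.
// In practice this is delivered as JSON or GPO via your management tooling;
// the extension ID below is a placeholder.
const managedBrowserPolicy = {
  // Block everything by default...
  ExtensionInstallBlocklist: ["*"],
  // ...then explicitly allow only reviewed extensions.
  ExtensionInstallAllowlist: [
    "<approved-extension-id>", // replace with real 32-character extension IDs
  ],
};

console.log(JSON.stringify(managedBrowserPolicy, null, 2));
```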
Step 2: Treat AI chat as sensitive data by default
Most DLP programs don’t classify prompts and responses cleanly. Fix that with explicit policy language:
- No credentials, secrets, tokens, or private keys in AI chats
- No customer PII or regulated data in public AI tools
- No proprietary source code unless the tool is approved for that use
This isn’t about scolding people. It’s about giving them a line they can follow.
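If you can hook prompt submission at all (for example, through an approved browser extension or an AI gateway), a lightweight pre-submission check catches the most obvious violations. The sketch below uses a few well-known secret patterns; it’s a starting point, not a DLP engine.

```typescript
// Lightweight pre-submission check for obviously sensitive prompt content.
// A starting point only: patterns are illustrative and deliberately narrow.
const sensitivePatterns: { label: string; pattern: RegExp }[] = [
  { label: "AWS access key ID", pattern: /\bAKIA[0-9A-Z]{16}\b/ },
  { label: "Private key block", pattern: /-----BEGIN [A-Z ]*PRIVATE KEY-----/ },
  { label: "Bearer token", pattern: /\bBearer\s+[A-Za-z0-9\-._~+/]{20,}/ },
];

function checkPrompt(prompt: string): string[] {
  return sensitivePatterns
    .filter(({ pattern }) => pattern.test(prompt))
    .map(({ label }) => label);
}

// Example: warn (or block) before the prompt leaves the browser or gateway.
const findings = checkPrompt("here is my key AKIAIOSFODNN7EXAMPLE, can you debug this?");
if (findings.length > 0) {
  console.warn(`Prompt blocked: contains ${findings.join(", ")}`);
}
```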
Step 3: Put guardrails in the browser where the risk lives
The browser is where the interception happens, so browser controls matter:
- Enforce managed profiles
- Restrict extension permissions (especially “read and change data on all websites”)
- Isolate high-risk web apps (AI assistants) in hardened sessions
A useful internal rule: If an extension can inject scripts into pages, it can steal from pages.
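One way to apply that rule on Chromium browsers is the ExtensionSettings policy, which can block extensions’ runtime access to specific hosts, including AI assistant domains. The sketch below shows the idea; verify the exact field syntax against the current Chrome enterprise policy reference before deploying, and treat the domain list as an example.

```typescript
// Sketch of Chrome's ExtensionSettings policy used to keep extensions
// (even approved ones) away from AI assistant pages. Verify exact syntax
// against the current policy reference; domains are examples only.
const extensionSettingsPolicy = {
  ExtensionSettings: {
    "*": {
      // Deny extensions script/network access on AI chat domains by default.
      runtime_blocked_hosts: [
        "*://*.chatgpt.com",
        "*://*.claude.ai",
        "*://gemini.google.com",
      ],
    },
  },
};

console.log(JSON.stringify(extensionSettingsPolicy, null, 2));
```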
Step 4: Monitor for exfiltration signals tied to AI usage
Even if you can’t see message content, you can detect suspicious correlations:
- AI chat domain visited → upload spike to analytics domain
- AI chat sessions outside normal hours from specific teams
- New extension installation followed by new outbound destinations
This is where AI-driven anomaly detection earns its budget.
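Here’s a minimal sketch of the first correlation in that list: flag uploads to unfamiliar domains that start shortly after a visit to an AI chat domain. The event shape, domain list, and five-minute window are assumptions to adapt to your telemetry.

```typescript
// Correlate AI-chat visits with follow-on uploads to unfamiliar domains.
// Event shape, domain list, and the 5-minute window are illustrative.
interface BrowserEvent {
  device: string;
  timestamp: number;        // epoch ms
  type: "visit" | "upload";
  domain: string;
  bytes?: number;
}

const AI_CHAT_DOMAINS = new Set(["chatgpt.com", "claude.ai", "gemini.google.com"]);
const WINDOW_MS = 5 * 60 * 1000;

function correlateExfilSignals(events: BrowserEvent[], knownDomains: Set<string>): BrowserEvent[] {
  const sorted = [...events].sort((a, b) => a.timestamp - b.timestamp);
  const lastAiVisit = new Map<string, number>(); // device -> last AI chat visit time
  const suspicious: BrowserEvent[] = [];

  for (const e of sorted) {
    if (e.type === "visit" && AI_CHAT_DOMAINS.has(e.domain)) {
      lastAiVisit.set(e.device, e.timestamp);
    } else if (e.type === "upload" && !knownDomains.has(e.domain)) {
      const visitedAt = lastAiVisit.get(e.device);
      if (visitedAt !== undefined && e.timestamp - visitedAt <= WINDOW_MS) {
        suspicious.push(e); // upload to an unfamiliar domain right after an AI chat visit
      }
    }
  }
  return suspicious;
}
```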
Step 5: Build an “AI incident” response path
Many orgs still handle AI exposures as policy violations. That slows everything down.
Create a lightweight runbook:
- Identify extension ID/publisher and affected browser versions
- Pull device list with the extension installed
- Remove/disable via policy
- Reset browser sessions/tokens where appropriate
- Notify users with clear instructions (“uninstall, restart browser, re-auth”)
- Review whether any sensitive prompts were likely exposed
Fast, repeatable, and non-dramatic.
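If you want that runbook to be executable rather than a wiki page, encoding it as data is a reasonable first step; a SOAR playbook or even a simple script can then walk it. The structure below is one possible shape, not a standard.

```typescript
// One possible shape for an "AI exposure" runbook that tooling can walk.
// Step names mirror the list above; the automatable flags are judgment calls.
interface RunbookStep {
  name: string;
  owner: "security" | "it" | "comms";
  automatable: boolean;
}

const aiExtensionIncidentRunbook: RunbookStep[] = [
  { name: "Identify extension ID, publisher, affected browser versions", owner: "security", automatable: true },
  { name: "Pull device list with the extension installed", owner: "it", automatable: true },
  { name: "Remove/disable via managed browser policy", owner: "it", automatable: true },
  { name: "Reset browser sessions and tokens where appropriate", owner: "security", automatable: true },
  { name: "Notify users: uninstall, restart browser, re-auth", owner: "comms", automatable: false },
  { name: "Review whether sensitive prompts were likely exposed", owner: "security", automatable: false },
];
```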
FAQ: what leaders and practitioners ask after this kind of breach
“If we use enterprise AI tools, are we safe?”
You’re safer on the provider side, but not automatically safe. Extension-based interception happens before provider controls can help.
“Can we just tell people not to install VPN extensions?”
You can, but you’ll lose. People install them for travel, streaming, and “privacy.” The scalable fix is managed browser policy plus enforcement.
“Do we need to ban AI chatbots?”
No. Bans push usage into shadow AI and personal devices. A better approach is approved tools + monitored browsers + clear data rules.
What to do next (and how AI security teams should think about 2026)
This incident is a clean example of where AI in cybersecurity needs to be practical: protecting the data people actually put into AI tools, not just the AI models themselves.
If your organization allows AI assistants, you should assume three things:
- Prompts are sensitive data. Treat them like email or chat logs.
- Browser extensions are a top-tier risk. Manage them like installed software.
- Detection has to scale. AI-powered security analytics are one of the few ways to keep up with the volume and subtlety of exfiltration behaviors.
If you’re planning your 2026 security roadmap right now, here’s the question I’d bring to the room: Do we have enough visibility and control in the browser to trust the AI work we’re encouraging employees to do?