Urban VPN Extension Secretly Sells AI Chatbot Data from Millions

Browser extensions like Urban VPN Proxy, installed by over 8 million users for privacy, secretly harvest and sell AI chatbot conversations with tools like ChatGPT. This data exfiltration, uncovered by cybersecurity firms, poses risks to personal and business privacy. Users are urged to uninstall suspicious extensions and demand better transparency.
Written by Emma Rogers

The Hidden Harvest: How Browser Extensions Are Monetizing Your AI Conversations

In an era where artificial intelligence tools like ChatGPT have become indispensable for everything from drafting emails to brainstorming business strategies, a new cybersecurity threat has emerged that strikes at the heart of user privacy. Recent investigations reveal that popular browser extensions, marketed as privacy enhancers, are secretly collecting and selling users’ interactions with AI chatbots. This revelation, stemming from reports by cybersecurity firms, underscores a growing concern in the digital security realm: the exploitation of personal data through seemingly benign software add-ons.

The issue came to light when researchers at Koi Security uncovered that extensions such as Urban VPN Proxy were intercepting conversations with major AI platforms including ChatGPT, Claude, Gemini, and Copilot. These extensions, installed by over 8 million users worldwide, promised enhanced privacy and secure browsing but instead harvested raw chat logs, prompts, and responses. According to a detailed analysis, the data collection began as early as July 2025, with the information being packaged and sold to third parties like advertisers and data brokers.

This isn’t just a minor privacy glitch; it’s a systematic operation hidden in plain sight. Users who installed these extensions, often lured by free VPN services, unwittingly granted permissions that allowed the software to monitor web traffic, including sensitive AI interactions. The extensions’ code, buried in updates, enabled the silent exfiltration of data without triggering obvious alerts in browsers like Chrome or Edge.

Unmasking the Culprits Behind the Data Grab

The primary offender, Urban VPN Proxy, has been singled out in multiple reports for its deceptive practices. As detailed in an article from Inc.com, cybersecurity experts claim the extension is “harvesting” AI chats, turning private conversations into commodities. The report highlights how the VPN, which boasts millions of downloads, uses obfuscated code to capture data in real-time.

Further scrutiny from Malwarebytes reveals that while the extension did disclose some data collection in its privacy policy, the language was convoluted and buried in fine print—hardly the transparent consent most users expect. Researchers noted that the disclosure failed to clearly indicate the extent of AI-specific monitoring, leading many to install it under the false assumption of bolstered privacy.

Echoing these findings, PCMag urged immediate uninstallation, pointing out that Koi Security identified not just Urban VPN but three other extensions with similar behaviors, collectively affecting over 8 million installs. These tools could access chats in “raw form,” meaning unencrypted and complete with personal details users might have shared with AI assistants.

The Mechanics of Data Exfiltration

Delving deeper into the technical side, the extensions exploit browser APIs that allow them to intercept web requests. When a user engages with an AI chatbot, the extension injects scripts that log the entire exchange. This data is then bundled—often anonymized superficially but still rich in contextual value—and transmitted to remote servers. From there, it’s sold on data marketplaces, where buyers range from marketing firms seeking consumer insights to potentially more nefarious actors.
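The pattern described above — copy the data first, then forward the request so the page behaves normally — can be sketched in a few lines. Python is used here purely for illustration, and every name is invented; a real extension would wrap the browser's `fetch`/XHR APIs rather than a plain function:

```python
# Illustrative sketch of the "log, then forward" interception pattern.
# All names are hypothetical; this only models the control flow.

captured = []  # the operator's side channel: silently logged traffic

def make_interceptor(real_send):
    """Return a drop-in replacement that records payloads before forwarding."""
    def intercepted_send(url, payload):
        captured.append((url, payload))   # silently copy the prompt/response
        return real_send(url, payload)    # forward unchanged, so nothing breaks
    return intercepted_send

def real_send(url, payload):
    # Stand-in for the page's real network call.
    return {"status": 200, "echo": payload}

# The user-visible behavior is identical; only the side channel differs.
send = make_interceptor(real_send)
resp = send("https://chat.example/api", "draft my merger memo")
```

Because the wrapped call returns exactly what the original would, the chatbot page works as expected and the user sees no sign of the copy being made.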

Industry insiders point out that this tactic isn’t new but has evolved with the rise of AI. As AI chats often contain sensitive information like business plans, medical queries, or personal advice, the harvested data represents a goldmine. A report from The Register notes that more than 8 million people have installed these eavesdropping extensions, many under the guise of privacy protection, creating an ironic betrayal of trust.

Moreover, the scale is staggering. The Hacker News describes how hidden code in Urban VPN extensions collects not only AI prompts and responses but also browsing data, amplifying the privacy invasion. This multi-faceted collection allows for detailed user profiles, which can be cross-referenced with other data sources for even more invasive targeting.

Broader Implications for User Trust and Regulation

The fallout from this scandal extends beyond individual privacy breaches. For businesses, the risk is acute: employees using AI tools for work-related tasks could inadvertently leak proprietary information. Imagine a company executive querying ChatGPT about merger strategies, only for that data to end up in a competitor’s hands via a data broker. That scenario is no longer far-fetched; it is a direct consequence of unchecked extension permissions.

Regulatory bodies are taking note. In the U.S., the Federal Trade Commission has begun scrutinizing browser extension practices, with calls for stricter guidelines on data collection disclosures. European regulators, under GDPR, may impose hefty fines on companies like Urban VPN if violations are confirmed, as the opaque consent mechanisms likely fall short of legal standards.

Posts on X (formerly Twitter) reflect widespread user outrage and caution. Cybersecurity professionals and privacy advocates have shared warnings about the risks, with some highlighting how “free” extensions often come at the cost of data security. One post emphasized the irony of privacy tools turning into surveillance mechanisms, urging users to audit their browser add-ons regularly.

How the Scheme Evaded Detection for Months

The stealth of these operations is particularly alarming. According to Security Boulevard, the extensions operated undetected for months by mimicking legitimate VPN traffic. Updates rolled out in mid-2025 introduced the data-harvesting features without fanfare, slipping past browser store reviews.

Koi Security’s investigation, which involved reverse-engineering the extension code, revealed sophisticated obfuscation techniques. The firm found that data was encrypted during transmission but not before being logged, allowing for easy access by the extension’s operators. This method ensured that even security-savvy users might miss the activity amid normal network noise.

Comparisons to past incidents abound. Similar to the 2023 ChatGPT data leaks reported on X, where logins were exposed on the dark web, this case highlights ongoing vulnerabilities in AI ecosystems. Researchers have long warned about exfiltration risks, with one X post from a prominent security expert detailing how malicious web pages could leak data from AI sessions.

Industry Responses and Mitigation Strategies

In response, major browser developers like Google and Microsoft have initiated reviews of the implicated extensions. Google, for instance, has pulled several from the Chrome Web Store pending investigations, as noted in various tech outlets. Users are advised to check their installed extensions via chrome://extensions/ or equivalent settings and remove any suspicious ones immediately.

Cybersecurity firms recommend alternatives: opt for reputable, paid VPN services with transparent privacy policies, or use browser built-in features for basic protection. Tools like extension auditors or privacy-focused browsers can help detect anomalies. For AI usage, sticking to official apps rather than web interfaces reduces exposure to extension-based threats.

Experts also stress the importance of user education. Many installs stem from a lack of awareness about permission scopes. A simple review of what an extension can access—such as “read and change all your data on all websites”—should raise red flags, especially for free tools promising too much.
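To make the red flags concrete: that warning is generated by the permission scopes an extension declares in its manifest. A hypothetical Chrome Manifest V3 file along these lines (names invented for illustration) requests access to every site, AI chat tabs included, and injects a script into each page:

```json
{
  "manifest_version": 3,
  "name": "Example Free VPN",
  "version": "1.0",
  "host_permissions": ["<all_urls>"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["inject.js"]
    }
  ]
}
```

A VPN needs broad network access to function, which is exactly why the category is attractive cover: the sweeping permissions look plausible at install time.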

The Future of AI Privacy in a Data-Hungry World

As AI integration deepens, so do the incentives for data exploitation. This incident with Urban VPN and its ilk serves as a wake-up call for the industry to prioritize ethical data practices. Developers of AI platforms are exploring enhanced encryption for web-based chats, potentially rendering such interceptions useless.

On the policy front, there’s growing momentum for global standards on browser extension transparency. Advocacy groups are pushing for mandatory audits and clearer labeling of data collection practices, similar to app store requirements for mobile software.

Ultimately, users bear some responsibility: vigilance in what they install and share online is crucial. As one X post pointedly noted, assuming “incognito” or “private” modes offer foolproof protection is a dangerous myth, especially when extensions can bypass them.

Emerging Threats and Proactive Defenses

Looking ahead, the convergence of AI and browser technologies opens new attack vectors. Researchers warn of potential escalations, such as extensions injecting malicious prompts into AI chats to extract more data or even manipulate responses. This could lead to sophisticated social engineering attacks, where harvested chats inform targeted phishing.

To counter this, organizations are implementing enterprise controls, like whitelisting approved extensions and monitoring network traffic for unusual data flows. Tools from firms like Malwarebytes offer real-time scanning for such behaviors, providing an additional layer of defense.
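For managed Chrome fleets, one way to enforce such a whitelist is Chrome's enterprise policy pair `ExtensionInstallBlocklist` and `ExtensionInstallAllowlist`. As a hedged example, a managed-policy JSON file roughly like the following (the extension ID shown is a placeholder) blocks all installs except explicitly approved ones:

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": ["aaaabbbbccccddddeeeeffffgggghhhh"]
}
```

On Linux such files typically live under the browser's managed-policies directory; on Windows the same policies are delivered via Group Policy or the registry. Exact deployment details vary by platform and Chrome version.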

Individual users can adopt habits like periodic extension cleanups and using sandboxed browsers for sensitive tasks. As the digital ecosystem evolves, staying informed through reliable sources—such as Dark Reading, which detailed the Urban VPN data harvest—empowers better decision-making.

Lessons from a Betrayed Trust

This scandal illuminates the double-edged sword of free software: convenience at the potential cost of privacy. With over 8 million affected users, the scope demands accountability from extension developers and platform hosts alike.

Reflecting on X discussions, the sentiment is clear: trust in digital tools is eroding, prompting calls for boycotts and alternatives. One thread highlighted how this data could train competing AI models, ironically using users’ own inputs against their interests.

In closing, while the immediate advice is to uninstall and reassess, the broader lesson is about fostering a culture of skepticism toward “free” offerings in an increasingly data-driven world. By demanding transparency and supporting ethical innovations, users and regulators can help safeguard the future of AI interactions.
