Malicious Browser Extensions Exploit ChatGPT for Data Theft

Malicious browser extensions exploit generative AI tools like ChatGPT by injecting harmful prompts, enabling cybercriminals to manipulate responses and exfiltrate sensitive data. This "man in the prompt" vulnerability risks massive breaches in corporate workflows. Enterprises must audit extensions, limit permissions, and implement robust security measures to mitigate these threats.
Malicious Browser Extensions Exploit ChatGPT for Data Theft
Written by Tim Toole

In an era where generative AI tools are increasingly embedded in corporate workflows, a new vulnerability has emerged that could expose sensitive data to unprecedented risks. Researchers have uncovered a sophisticated attack vector involving malicious browser extensions that hijack AI interactions, injecting harmful prompts directly into tools like ChatGPT, Gemini, and others. This “man in the prompt” technique allows cybercriminals to manipulate AI responses, potentially exfiltrating confidential information without users’ knowledge.

The threat stems from the way browser-based AI instances interface with web extensions, which often enjoy broad permissions to read and modify page content. According to a recent report from SecurityWeek, attackers can poison extensions to alter prompts in real-time, turning benign queries into data-leaking commands. For instance, an extension might subtly rewrite a user’s input to request sensitive details from the AI’s context, such as proprietary business data or personal identifiers.

The Mechanics of Manipulation

This vulnerability exploits the trust users place in extensions, many of which are downloaded from official stores like the Chrome Web Store. A proof-of-concept demonstrated by security firm LayerX shows how a compromised extension can intercept and modify API calls between the browser and AI services. The result? Malicious prompts that bypass standard safeguards, compelling the AI to divulge or process restricted information.

Compounding the issue, generative AI tools often handle vast amounts of sensitive data, from financial records to health information. As noted in a study highlighted by Dark Reading, this attack affects top platforms, enabling “prompt injection” that could lead to data breaches on a massive scale. Enterprises using AI for tasks like document summarization or code generation are particularly at risk, as extensions can silently harvest outputs.

Real-World Implications and Recent Incidents

Recent news underscores the urgency: just days ago, reports surfaced of extensions masquerading as productivity aids but embedding spyware capabilities. Posts on X (formerly Twitter) from security experts, including alerts from Proton VPN, reveal that AI browser extensions are routinely harvesting personal data, sometimes in violation of privacy laws like those governing health information. One such post warned of extensions sending webpage data—including sensitive content—to remote servers for AI processing.

Furthermore, a March 2025 analysis in The Register examined ten popular generative AI extensions, finding they often transmit unencrypted data from visited pages. This echoes findings from 1Password Blog, which criticized the lack of corporate policies around these tools, noting their ability to inspect and alter sensitive web sessions.

Mitigation Strategies for Enterprises

To counter these threats, industry insiders recommend stringent controls. Security professionals advocate for regular audits of installed extensions, limiting permissions to essentials, and deploying browser security solutions like those from LayerX. As Jason Steer pointed out in Infosecurity Magazine, managing AI extension risks requires proactive policies, including employee training on the dangers of unvetted add-ons.

Emerging tools, such as real-time threat detection extensions mentioned in X discussions by users like GoPlusSecurity, offer promise by scanning for malicious behavior. Yet, the onus falls on AI providers to harden their browser integrations, perhaps through encrypted channels or prompt validation mechanisms.

The Broader Security Horizon

This evolving threat highlights a critical intersection between AI adoption and browser security. With generative AI projected to handle more enterprise data, ignoring extension vulnerabilities could lead to catastrophic leaks. Recent web searches reveal a spike in related incidents, including a June 2025 post on OneStart detailing how extensions act as keyloggers, amplifying risks when paired with AI.

Ultimately, as cybercriminals refine these techniques, organizations must prioritize robust defenses. By integrating insights from sources like WebProNews on “AI curiosity” attacks—where manipulated prompts exfiltrate data—businesses can foster a more secure AI ecosystem. The lesson is clear: in the rush to leverage generative AI, vigilance against browser-based threats is not optional but essential.

Subscribe for Updates

SecurityProNews Newsletter

News, updates and trends in IT security.

By signing up for our newsletter you agree to receive content related to ientry.com / webpronews.com and our affiliate partners. For additional information refer to our terms of service.

Notice an error?

Help us improve our content by reporting any issues you find.

Get the WebProNews newsletter delivered to your inbox

Get the free daily newsletter read by decision makers

Subscribe
Advertise with Us

Ready to get started?

Get our media kit

Advertise with Us