ChatGPT ShadowLeak Flaw Leaked Gmail Data in Zero-Click Attack

A zero-click vulnerability dubbed ShadowLeak in OpenAI's ChatGPT allowed malicious actors to extract sensitive Gmail data via hidden prompts in the Deep Research agent. Discovered by Radware researchers, it was patched in September 2025. This incident underscores the need for robust AI security to protect personal data integrations.
ChatGPT ShadowLeak Flaw Leaked Gmail Data in Zero-Click Attack
Written by Mike Johnson

In the rapidly evolving world of artificial intelligence, a recent security vulnerability in OpenAI’s ChatGPT has underscored the precarious balance between innovation and data protection. Dubbed ShadowLeak, this zero-click flaw allowed malicious actors to extract sensitive Gmail data through hidden prompts, exploiting the AI’s Deep Research agent. The issue came to light when cybersecurity researchers at Radware uncovered how attackers could embed invisible HTML commands in seemingly innocuous messages, tricking the AI into accessing and leaking email contents without user interaction. This revelation, detailed in a report from The Hacker News, highlights the risks inherent in AI tools that integrate with personal accounts like Gmail.

The mechanics of ShadowLeak involved indirect prompt injection, a technique where hidden instructions bypassed standard security protocols. For instance, if a user authorized ChatGPT to connect to their Gmail for research purposes, a crafted prompt could surreptitiously command the AI to retrieve and expose email data. According to Fox News, this vulnerability weaponized the AI’s capabilities, turning a helpful research tool into a potential data exfiltration vector. OpenAI swiftly patched the flaw in late September 2025, but not before it raised alarms about the broader implications for AI-driven services that handle personal information.

The Discovery and Rapid Response: How Researchers Exposed a Hidden Threat

The flaw was first identified by Radware’s team, who demonstrated how attackers could use zero-click methods to leak data. Their findings, published in BankInfoSecurity, showed that the Deep Research agent, designed for advanced querying, lacked sufficient safeguards against manipulated inputs. This allowed for scenarios where a single hidden command could pull sensitive details like email subjects and bodies. Posts on X from cybersecurity experts, including warnings about similar AI vulnerabilities, amplified the urgency, with one user noting the ease of exploiting connected services like Gmail and Calendar.

OpenAI’s response was prompt: the company issued a patch that enhanced input validation and restricted unauthorized data access. As reported by PYMNTS.com, this fix prevented further exploitation, but it also prompted a reevaluation of how AI agents interact with third-party APIs. Industry insiders point out that while the patch addressed the immediate issue, it exposed systemic weaknesses in AI architecture, particularly in models that rely on user-granted permissions.

Broader Implications for AI Security: Lessons from ShadowLeak

Beyond the technical fix, ShadowLeak serves as a cautionary tale for the AI industry. Experts argue that as tools like ChatGPT evolve to include more autonomous agents, the attack surface expands exponentially. A post on X from Proton Mail, dating back to 2023 but eerily prescient, warned against sharing personal data with AI chatbots, citing risks of accidental leaks. More recently, discussions on X highlighted parallels with other breaches, such as the Muah.ai incident where user prompts were exposed, as noted in updates from Have I Been Pwned.

The incident has fueled calls for stricter regulations. According to Android Headlines, this flaw could have affected millions of users who integrated ChatGPT with their email accounts. Cybersecurity firms like Radware emphasize the need for “zero-trust” models in AI, where every input is scrutinized. OpenAI, in statements covered by Insurance Journal, committed to ongoing audits, but critics argue that reactive patching isn’t enough.

Industry Repercussions and Future Safeguards: Navigating the AI Risk Frontier

The fallout from ShadowLeak extends to investor confidence and competitive dynamics. With rivals like Google’s Gemini facing their own vulnerabilities—such as ASCII smuggling attacks discussed on X—OpenAI’s misstep could influence market perceptions. Analysts suggest this may accelerate the adoption of advanced encryption and anomaly detection in AI systems. For instance, recent news on X about OpenAI’s Dev Day leaks hints at upcoming agent builder tools that prioritize security from the ground up.

Ultimately, ShadowLeak reminds us that AI’s promise comes with perils. As one X post from a cybersecurity researcher put it, AI agents capable of complex tasks like data retrieval must be fortified against misuse. For industry leaders, the path forward involves not just technological fixes but a cultural shift toward proactive security, ensuring that innovations like Deep Research enhance productivity without compromising privacy. With patches in place, the focus now shifts to preventing the next shadow threat in an era where AI increasingly blurs the lines between assistance and exposure.

Subscribe for Updates

DevSecurityPro Newsletter

The DevSecurityPro Email Newsletter is essential for DevSecOps leaders, DevOps directors, application developers, and security engineers. Perfect for professionals focused on embedding security into the development pipeline and protecting applications at scale.

By signing up for our newsletter you agree to receive content related to ientry.com / webpronews.com and our affiliate partners. For additional information refer to our terms of service.

Notice an error?

Help us improve our content by reporting any issues you find.

Get the WebProNews newsletter delivered to your inbox

Get the free daily newsletter read by decision makers

Subscribe
Advertise with Us

Ready to get started?

Get our media kit

Advertise with Us