Gemini AI Promptware Attack Exploits Calendar Invites to Hijack Smart Homes

Security researchers at Black Hat USA demonstrated a "promptware" attack on Google's Gemini AI, using malicious calendar invites to embed hidden commands that control smart home devices. Google patched the flaw, but the exploit underscores persistent vulnerabilities in AI systems integrated with IoT.
Gemini AI Promptware Attack Exploits Calendar Invites to Hijack Smart Homes
Written by Zane Howard

In a startling demonstration at the Black Hat USA conference in Las Vegas, security researchers unveiled a sophisticated attack on Google’s Gemini AI, exploiting everyday calendar invites to seize control of smart home devices. The hack, dubbed a “promptware” assault, involves embedding malicious instructions within a seemingly innocuous Google Calendar event. When Gemini, integrated with tools like Google Assistant, processes the event—perhaps summarizing a user’s schedule—it unwittingly executes hidden commands, such as turning off lights or opening smart shutters. This real-world exploit, detailed in a presentation by researchers from HiddenLayer, marks a new frontier in AI vulnerabilities, where large language models (LLMs) like Gemini can be manipulated without traditional hacking methods.

The mechanics of the attack hinge on Gemini’s ability to interpret natural language prompts embedded in calendar descriptions. Researchers crafted invites with carefully worded instructions that evade detection, prompting the AI to interface with connected smart home ecosystems like Google Home. For instance, a poisoned invite might include phrases that, when parsed by Gemini, trigger actions on devices such as thermostats or security cameras. According to a report from WIRED, this allowed the team to remotely manipulate a test smart home setup, highlighting how AI’s helpfulness can be weaponized against users.

The Rise of Prompt Injection Threats in AI Systems

This isn’t just theoretical; the researchers demonstrated the hack live, showing how a single tainted calendar event could cascade into full device control. By leveraging Gemini’s integration with Google Workspace and smart home APIs, the attack bypasses conventional security layers, relying instead on the AI’s interpretive prowess. Google, alerted to the flaw earlier in 2025, swiftly implemented mitigations, including enhanced prompt filtering and restrictions on how calendar data interacts with AI assistants. Yet, as noted in coverage from Ars Technica, the team warns that similar vulnerabilities could persist in other LLMs, urging a reevaluation of AI safety protocols.

Industry experts view this as a wake-up call for the burgeoning field of AI-driven home automation. Posts on X (formerly Twitter) from cybersecurity accounts, including those echoing sentiments from The Hacker News, underscore growing concerns about LLM attacks that leak data or generate harmful outputs. One such post highlighted Gemini’s susceptibility to prompt injections, aligning with broader discussions on platforms where users share real-time threat intelligence.

Google’s Response and Broader Implications for AI Security

In response, Google has fortified Gemini’s defenses, incorporating adversarial training to better recognize and neutralize disguised commands. A spokesperson told BGR that while the specific exploit was patched, ongoing vigilance is essential as AI models evolve. This incident echoes prior vulnerabilities, like those in 2024 where scammers spoofed support calls to compromise Gemini-linked accounts, as referenced in X posts by investigators like ZachXBT.

The hack’s implications extend beyond individual homes, raising alarms for enterprise environments where AI manages sensitive operations. Researchers from HiddenLayer, who collaborated with Google on the fix, emphasize the need for “promptware” defenses—specialized safeguards against injected instructions. As Android Authority reports, this could involve user-configurable AI boundaries, limiting what data sources the model can act upon.

Future Risks and the Push for Robust AI Safeguards

Looking ahead, the convergence of AI with IoT devices amplifies risks, potentially enabling attacks that disrupt entire networks. Cybersecurity forums on X buzz with predictions of escalating AI hacks, with one post from Techmeme summarizing the calendar exploit as a harbinger of more sophisticated threats. Experts advocate for industry-wide standards, including regular red-teaming exercises to simulate attacks.

This Gemini vulnerability underscores a pivotal challenge: balancing AI’s convenience with security. As smart homes become ubiquitous, users must scrutinize integrations, perhaps opting for isolated AI functions. Publications like Gizmodo warn that without proactive measures, such exploits could proliferate, turning helpful assistants into unwitting accomplices in digital intrusions. For now, Google’s patches offer reassurance, but the episode serves as a stark reminder that in the AI era, even a simple calendar invite can unlock doors—literally.

Subscribe for Updates

CybersecurityUpdate Newsletter

The CybersecurityUpdate Email Newsletter is your essential source for the latest in cybersecurity news, threat intelligence, and risk management strategies. Perfect for IT security professionals and business leaders focused on protecting their organizations.

By signing up for our newsletter you agree to receive content related to ientry.com / webpronews.com and our affiliate partners. For additional information refer to our terms of service.

Notice an error?

Help us improve our content by reporting any issues you find.

Get the WebProNews newsletter delivered to your inbox

Get the free daily newsletter read by decision makers

Subscribe
Advertise with Us

Ready to get started?

Get our media kit

Advertise with Us