Google Gemini AI Hallucinates Commands, Deletes Expert’s Files, Takes the Blame

Cybersecurity expert Anuraag Gupta tested Google's Gemini CLI, only for the AI to hallucinate commands, delete his files, and issue a self-deprecating apology. The mishap highlights the risks of AI coding agents and the need for safeguards such as isolated environments. As these tools evolve, balancing innovation with accountability is essential.
Written by Eric Hastings

In the rapidly evolving world of artificial intelligence tools, a cautionary tale emerged this month when a product manager’s experiment with Google’s new Gemini CLI went disastrously wrong. Anuraag Gupta, a cybersecurity professional at Cyware who describes himself as a “curious PM experimenting with vibe coding,” decided to test the open-source command-line interface powered by Gemini 2.5 Pro. Inspired by a recent incident involving Replit and investor Jason Lemkin, Gupta shared his ordeal in a detailed account on his personal GitHub page, highlighting how the AI agent hallucinated commands and inadvertently deleted his files.

What began as a simple comparison to Anthropic’s Claude Code quickly escalated into a nightmare. Gupta, enticed by Gemini CLI’s generous free tier, prompted the tool to assist with a coding task. Instead of executing the request accurately, the AI generated erroneous shell commands that wiped out his directories. In a bizarre twist, Gemini even issued a self-deprecating apology: “I have failed you completely and catastrophically. My review of the commands confirms my gross incompetence.”

The Rise of AI Coding Agents

This incident underscores broader concerns in the AI development community about the reliability of large language models in sensitive environments like terminals. Gupta’s post, archived on Archive.ph, details how he was using the tool for non-critical experiments, yet the fallout was swift and, without backups, irreversible. Industry observers note that while Gemini CLI, hosted on GitHub by Google, promises to integrate Gemini’s power seamlessly into workflows, such hallucinations reveal gaps in its safety mechanisms.

Discussions on platforms like Hacker News amplified the story, with users sharing similar anecdotes of AI overconfidence. One commenter recounted Gemini 2.5 Pro’s oddly pessimistic response to a resume-tailoring task, likening it to a depressed character from Winnie the Pooh. These stories point to inconsistencies in reinforcement learning from human feedback (RLHF), where models can err toward extremes of either humility or hubris.

Implications for Enterprise Adoption

For tech insiders, Gupta’s mishap raises questions about deploying AI agents in production settings. Publications like Mashable reported on the event, quoting Gupta’s frustration and his subsequent tips for safer usage, such as running tools in isolated environments or with dry-run modes. He confirmed the details in communications with the outlet, emphasizing that as a non-developer, he was drawn to “vibe coding” – informal, AI-assisted programming – but learned the hard way about its risks.
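Gupta’s advice translates into a simple pattern that any scripting language can express. The snippet below is a minimal sketch of a dry-run guard, assuming a hypothetical wrapper around AI-suggested shell commands; the names `DRY_RUN` and `run_agent_command` are illustrative and not part of Gemini CLI or any Google tooling.

```python
import shlex
import subprocess

# Illustrative safety toggle, not a Gemini CLI feature.
DRY_RUN = True  # flip to False only inside a disposable, isolated environment

def run_agent_command(command: str) -> None:
    """Log an AI-suggested shell command; execute it only when DRY_RUN is off."""
    args = shlex.split(command)
    if DRY_RUN:
        print(f"[dry-run] would execute: {' '.join(args)}")
        return
    # shell=False (the default) avoids an extra layer of shell interpretation
    subprocess.run(args, check=True)

# A destructive command like the ones Gemini hallucinated never runs in dry-run mode.
run_agent_command("rm -rf ./my-project")
```

Running such a wrapper inside a container or throwaway virtual machine adds the isolation layer Gupta recommends, so even a command that slips past the dry-run gate cannot touch real data.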

Meanwhile, competitors like Anthropic’s Claude and emerging models such as Qwen3-Coder are positioning themselves as more robust alternatives, with features designed to minimize destructive errors. Google’s own documentation for Gemini CLI, updated as recently as July 19, 2025, promotes its capabilities for tasks like generating images or integrating with GitHub repos, but glosses over potential pitfalls.

Lessons in AI Governance

Experts argue that incidents like this highlight the need for better guardrails, including explicit user confirmations for file operations. In a piece from InfoWorld, analysts compared Gemini CLI favorably to paid services in terms of cost but noted its shortcomings in precision, advising developers to treat it as a starting point rather than a standalone solution.
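A confirmation gate of the kind these experts describe can be sketched in a few lines. The following is a hypothetical example, not drawn from Gemini CLI’s codebase: it simply refuses to remove a file until the user types an explicit “yes”.

```python
from pathlib import Path

def confirm_delete(path: Path) -> bool:
    """Ask for an explicit, typed confirmation before a destructive operation."""
    answer = input(f"Agent requests deletion of {path}. Type 'yes' to allow: ")
    return answer.strip().lower() == "yes"

def guarded_delete(path: Path) -> None:
    """Delete a file only after the user has explicitly approved it."""
    if not path.exists():
        print(f"{path} does not exist; nothing to delete.")
        return
    if confirm_delete(path):
        path.unlink()
        print(f"Deleted {path}.")
    else:
        print(f"Refused to delete {path}.")

guarded_delete(Path("notes.txt"))
```

Paired with a dry-run default like the earlier sketch, a hallucinated command would have to clear two hurdles before it could alter the filesystem.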

Gupta’s experience, echoed in forums and media, serves as a wake-up call. As AI tools proliferate, balancing innovation with accountability becomes paramount. While Google has not publicly commented on this specific case, the open-source nature of Gemini CLI invites community contributions to enhance its safeguards. For now, users are reminded: in the terminal, one wrong command can erase more than just code – it can shatter trust in these powerful agents.
