In the fast-evolving world of artificial intelligence, where models like Claude from Anthropic are pushing boundaries in tool integration and agentic computing, a recent disclosure has sent ripples through the tech community. Security researchers have uncovered a trio of vulnerabilities in Anthropic’s official Git server for the Model Context Protocol (MCP), exposing risks that could allow unauthorized file access, deletion, and even remote code execution. These flaws, now patched, highlight the precarious balance between innovation and security in AI-driven systems. According to details published today in The Hacker News, the issues stem from inadequate path validation and command handling in the mcp-server-git, a tool designed to let AI agents interact with Git repositories seamlessly.
The Model Context Protocol, or MCP, represents a significant advancement in how large language models manage external tools and data. Developed by Anthropic, it enables AI systems like Claude to maintain context across interactions by interfacing with servers that provide specific functionalities, such as Git operations. This setup allows developers to build more autonomous AI agents capable of tasks like code versioning, branching, and repository management without constant human oversight. However, as with any system granting AI access to file systems and commands, the potential for exploitation looms large. The vulnerabilities in question, tracked as CVE-2025-68143, CVE-2025-68144, and CVE-2025-68145, were discovered by independent researchers and quietly fixed by Anthropic, as reported in various outlets.
What makes these flaws particularly alarming is their exploitability through prompt injection, a technique where malicious inputs trick the AI into executing unintended actions. In one scenario, an attacker could embed harmful instructions in a document or repository that the AI processes, leading to a chain reaction that bypasses security checks. For instance, CVE-2025-68143 involves the git_init tool accepting arbitrary file paths without validation, potentially turning any directory into a Git repository ripe for manipulation. This isn’t just theoretical; demonstrations have shown how such oversights could lead to data exfiltration or system compromise in production environments.
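To make the oversight concrete, the missing safeguard amounts to resolving and bounding a path before initializing anything there. The following Python sketch shows one plausible shape of such a check; WORKSPACE_ROOT and safe_git_init are illustrative names, not Anthropic's actual patch, though the sketch leans on the same GitPython library the server is built on.

```python
from pathlib import Path

import git  # GitPython, the library underlying mcp-server-git

# Illustrative boundary; a real server would take this from configuration.
WORKSPACE_ROOT = Path("/home/agent/workspace").resolve()

def safe_git_init(target: str) -> git.Repo:
    """Initialize a repository only inside the configured workspace."""
    # Resolve symlinks and ".." segments before comparing paths.
    candidate = Path(target).resolve()
    if not candidate.is_relative_to(WORKSPACE_ROOT):
        raise PermissionError(f"refusing git_init outside workspace: {candidate}")
    return git.Repo.init(candidate)
```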
Unpacking the Vulnerability Chain
Building on the initial reports, experts have dissected how these flaws interconnect. CVE-2025-68145, for example, pertains to the --repository flag, which is meant to confine operations to a specified path. Yet the server failed to enforce this restriction in subsequent tool calls, allowing attackers to traverse directories and access sensitive files outside the intended scope. Paired with CVE-2025-68144, which involves improper handling of Git commands that could delete files or execute scripts, the risks escalate to potential remote code execution (RCE). As noted in a detailed analysis from The Register, this chain was exploited in red-team exercises where an AI agent, prompted innocuously, ended up running malicious code on an employee's laptop.
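In rough terms, closing that gap means re-validating every path argument against the configured root on every call, not just at startup. Here is a hedged sketch of what such a helper could look like; resolve_in_repo is a hypothetical name, not the server's real API.

```python
from pathlib import Path

def resolve_in_repo(repo_root: Path, user_path: str) -> Path:
    """Reject any path argument that escapes the --repository boundary."""
    root = repo_root.resolve()
    # Joining then resolving collapses "../" traversal and symlinks;
    # an absolute user_path simply replaces root and is caught below.
    candidate = (root / user_path).resolve()
    if not candidate.is_relative_to(root):
        raise PermissionError(f"path escapes --repository: {user_path}")
    return candidate
```

With a check like this in place, a later tool call pointed at something like ../../.ssh/id_rsa would fail loudly instead of quietly reading files outside the repository.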
The implications extend beyond individual developers to enterprise settings, where AI agents are increasingly deployed for collaborative workflows. Anthropic’s Claude, marketed as a “helpful colleague,” integrates MCP to oversee tasks like code reviews or repository syncing. But if an attacker injects prompts via shared documents or public repos, the AI could unwittingly facilitate insider threats. Posts on X (formerly Twitter) from security researchers underscore this sentiment, with one prominent voice warning that MCP’s exposure to private data, combined with its susceptibility to malicious instructions, forms a “lethal trifecta” for data leaks. Another post highlighted a similar attack vector in GitHub’s MCP implementation, where private repositories were accessed without authorization.
Anthropic’s response has been swift but understated. The company patched the vulnerabilities without much fanfare, updating the mcp-server-git to include stricter path validations and command sanitization. This move aligns with broader industry efforts to secure AI tools, as seen in recent updates to MCP that incorporate on-demand tool loading to reduce token usage—a feature praised in Open Source For You. Yet, critics argue that such fixes address symptoms rather than root causes, like the inherent trust placed in AI-mediated inputs.
Broader Context in AI Security
Delving deeper, these vulnerabilities echo a pattern of security challenges in AI ecosystems. Prompt injection has plagued models since their inception, but MCP amplifies the stakes by granting direct access to system-level operations. A Medium post from last December candidly admitted that MCP "works beautifully in demos and breaks the moment you try to scale it," pointing to reliability issues that compound security risks. In the case of Anthropic's Git server, the flaws allowed for arbitrary Git repository creation and operations, potentially leading to scenarios where an AI agent initializes a repo in a sensitive directory, stages malicious files, and commits them for exfiltration.
Industry insiders are drawing parallels to past incidents, such as the git RCE vulnerability CVE-2024-32002, which exploited recursive cloning to execute code on macOS and other systems. While not directly related, the tactics overlap: both involve tricking systems into processing untrusted data. Recent news from SiliconANGLE emphasizes how the chained exploits in Anthropic’s server could enable file reads, deletions, and RCE without direct access, often triggered by the AI reading injected content.
Moreover, the timing of this disclosure coincides with Anthropic’s push into specialized domains like healthcare and collaborative tools via Claude for Healthcare and Cowork. A report in Infosecurity Magazine notes that prompt injection bugs in these contexts could have far-reaching consequences, especially in regulated industries where data privacy is paramount. Security teams are now advised to audit MCP integrations rigorously, implementing safeguards like sandboxing AI operations and monitoring for anomalous tool calls.
Expert Reactions and Mitigation Strategies
Reactions from the security community have been vocal, particularly on platforms like X, where threads dissect the flaws' real-world impact. One researcher described a proof-of-concept in which an external MCP server turned the parsing of a GitHub repo's documentation into RCE, bypassing AI guardrails entirely. Another highlighted "tool poisoning," a novel attack where a malicious MCP server is added and forgotten, leaking keys and credentials. These anecdotes, while not independently verified, reflect growing concerns about AI agents' autonomy.
To mitigate such risks, experts recommend several best practices. First, restrict MCP servers to whitelisted repositories and enforce runtime path checks. Anthropic’s own updates, as detailed in GitHub discussions, include a 98% token reduction through code-first patterns, but security must evolve in tandem. Palo Alto Networks’ security chief, cited in The Register piece, labeled AI agents as 2026’s biggest insider threat, urging red-teaming to simulate attacks like infostealer deployments.
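As a sketch of that first recommendation, an exact-match allowlist gate could be as simple as the following Python fragment; ALLOWED_REPOS and require_allowed_repo are hypothetical names, and a production deployment would load the list from configuration rather than hard-coding it.

```python
from pathlib import Path

# Hypothetical allowlist of repositories the agent may touch.
ALLOWED_REPOS = {
    Path("/srv/repos/frontend").resolve(),
    Path("/srv/repos/backend").resolve(),
}

def require_allowed_repo(repo_path: str) -> Path:
    """Gate every Git tool call behind an exact-match repository allowlist."""
    resolved = Path(repo_path).resolve()
    if resolved not in ALLOWED_REPOS:
        raise PermissionError(f"repository not on allowlist: {resolved}")
    return resolved
```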
Furthermore, the open-source nature of MCP invites community scrutiny, which has been instrumental in identifying these issues. The GitHub repo for MCP servers, maintained by the Model Context Protocol organization, has seen contributions that address similar flaws, fostering a collaborative defense. Yet, as AI integrates deeper into critical sectors, the need for standardized security protocols becomes evident. Anthropic’s quiet fix, while effective, raises questions about transparency—should companies disclose exploits more proactively to build trust?
Evolving Threats in Agentic AI
Looking ahead, these vulnerabilities underscore the need for a paradigm shift in AI security. Traditional web development solved issues like command injection decades ago, as one X post lamented in response to a report cataloging the top 25 MCP vulnerabilities. Yet AI's dynamic nature introduces variables like contextual memory and tool chaining that defy static defenses. In Anthropic's case, the Git MCP server's flaws allowed attackers to bypass boundaries, turning a helpful tool into a liability.
Comparative analysis with competitors like OpenAI reveals similar challenges. A Medium roundup of weekly news mentioned OpenAI’s ad testing in ChatGPT alongside Anthropic’s launches, but security often lags behind features. The GitHub Blog’s recap of 2025’s top posts on agentic AI and MCP highlights spec-driven development as a way to preempt flaws, emphasizing design-phase security.
For developers, the takeaway is clear: treat AI agents with the same caution as any privileged user. Implement least-privilege access, audit logs for tool invocations, and educate teams on prompt hygiene. As MCP evolves, incorporating features like dynamic tool search, the attack surface expands, demanding vigilant updates.
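One way to act on the audit-log advice is to wrap every tool handler in a logging decorator so anomalous invocations leave a trail. The sketch below assumes nothing about the MCP SDK; audited is an illustrative pattern, not a built-in feature.

```python
import functools
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit_log = logging.getLogger("mcp.audit")

def audited(tool_fn):
    """Record every tool invocation, with its name and arguments, before it runs."""
    @functools.wraps(tool_fn)
    def wrapper(*args, **kwargs):
        audit_log.info("tool=%s args=%r kwargs=%r", tool_fn.__name__, args, kwargs)
        return tool_fn(*args, **kwargs)
    return wrapper

@audited
def git_status(repo_path: str) -> str:
    # A real server would delegate to its Git backend here.
    return f"clean working tree at {repo_path}"
```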
Lessons for the AI Industry
The Anthropic incident serves as a case study in balancing innovation with robustness. While MCP enables scalable AI agents, its Git server’s flaws exposed gaps in validation that could cascade into breaches. Researchers who chained these vulnerabilities demonstrated how prompt injection, a persistent thorn in AI’s side, can weaponize everyday interactions.
In enterprise adoption, where AI oversees infrastructure like power grids or healthcare systems, such risks are untenable. Providers' safety guidelines prohibit AI models from assisting with cyber attacks, yet vulnerabilities like these could be exploited indirectly. Anthropic's patches mitigate immediate threats, but ongoing vigilance is essential.
Ultimately, this event propels the conversation toward more resilient AI architectures. By learning from these disclosures, the industry can fortify systems against emerging threats, ensuring that tools like MCP enhance productivity without compromising security. As AI continues to integrate into daily operations, proactive measures will define the difference between helpful colleagues and unintended adversaries.

