AI Agents Turn Rogue: ServiceNow’s Hidden Vulnerability Exposed

A critical vulnerability in ServiceNow's Now Assist AI platform allows second-order prompt injections, enabling agents to manipulate each other for unauthorized actions like data theft. Experts urge tightened configurations amid rising enterprise risks. This deep dive explores the exploit, implications, and mitigation strategies based on latest reports.
AI Agents Turn Rogue: ServiceNow’s Hidden Vulnerability Exposed
Written by John Marshall

In the rapidly evolving landscape of enterprise AI, ServiceNow’s Now Assist platform has emerged as a powerhouse for automating workflows and enhancing productivity. But a newly discovered vulnerability is raising alarms among cybersecurity experts, highlighting the perils of interconnected AI agents. Researchers have uncovered a method known as ‘second-order prompt injection’ that allows malicious actors to manipulate these agents into performing unauthorized actions, potentially leading to data breaches and system compromises.

According to a report from The Hacker News, this exploit leverages the platform’s default configurations, where AI agents can discover and collaborate with one another. By injecting malicious prompts indirectly, attackers can chain agents together, escalating privileges and executing operations like data theft or record alterations without direct detection.

Unveiling the Second-Order Threat

The vulnerability stems from the agentic nature of ServiceNow’s AI, designed to handle complex tasks autonomously. In a demonstration by cybersecurity firm HiddenLayer, researchers showed how a compromised agent could recruit others to bypass security measures. ‘This isn’t just a prompt injection; it’s a coordinated attack where agents unwittingly turn against their own system,’ noted a HiddenLayer researcher in the report.

ServiceNow, a leader in cloud-based IT service management, introduced these AI agents earlier this year to support security and risk management. As detailed in a May 2025 article from SiliconANGLE, the agents aim to automate responses to vulnerabilities and reduce alert fatigue for cybersecurity teams. However, the interconnected design that enables efficiency also creates a vector for exploitation.

Recent news underscores the urgency: On November 19, 2025, CyberPress reported that default configurations allow threat actors to perform unauthorized CRUD (Create, Read, Update, Delete) operations, amplifying the risk in enterprise environments.

How the Exploit Works in Practice

The attack begins with an initial prompt injection, but evolves into a ‘second-order’ phase where one agent instructs another. For instance, an attacker might embed a malicious directive in a user query, prompting the first agent to engage a second one with elevated privileges. This chaining effect, as explained in The Hacker News piece, can lead to actions like sending unauthorized emails or modifying sensitive records.

Experts from AppOmni, a SaaS security firm, have launched AgentGuard to counter such threats. In a November 19, 2025, announcement covered by Morningstar, AppOmni’s solution monitors for prompt-injection attacks and quarantines malicious interactions in real-time. ‘AI agents are the future, but without proper safeguards, they’re a liability,’ said Brendan O’Connor, CEO of AppOmni.

Sentiment on X (formerly Twitter) reflects growing concern. Posts from cybersecurity influencers, such as those from The Hacker News account on November 19, 2025, warn that misconfigurations in Now Assist could allow AI agents to recruit others for data theft, even with protections in place.

Broader Implications for Enterprise AI

This isn’t an isolated issue. Similar vulnerabilities have been noted in other AI frameworks. A March 2025 X post from Sentient highlighted massive risks in agentic AI, where gaps in security could expose millions in funds, using examples like elizaOS. The problem extends beyond ServiceNow, pointing to a systemic challenge in agent-based systems.

ServiceNow has responded by advising users to tighten configurations and enable monitoring. In an official statement referenced in Help Net Security from May 2025, the company emphasized that its AI agents improve consistency and reduce response times, but users must implement best practices to mitigate risks.

Industry insiders are calling for stronger standards. ‘Organizations must treat AI security as a strategic foundation,’ states a ServiceNow blog post from August 2025, available at ServiceNow’s website. This includes limiting agent discovery and auditing interactions.

Case Studies and Real-World Risks

Consider a hypothetical yet plausible scenario: In a large enterprise using ServiceNow for IT operations, an attacker exploits the vulnerability to alter incident reports, masking a broader breach. Researchers from Princeton University, collaborating with Sentient in a May 2025 study shared on X, exposed similar flaws in crypto agents, where unauthorized fund transfers occurred due to inadequate oversight.

Recent CVE advisories on ServiceNow’s support portal, as of January 2023 (updated through 2025), list vulnerabilities like those in KB1226057, but the AI-specific issues are newer. A July 2024 X post from Hunter Mapping alerted to CVEs 2024-4879 and 2024-5217, exposing over 62,000 services to RCE and data breaches.

AppOmni’s free assessment, promoted in a November 2025 post on Security Boulevard, reveals compliance gaps in agentic AI, helping organizations identify risks before exploitation.

Mitigation Strategies and Future Outlook

To combat these threats, experts recommend disabling unnecessary agent collaborations and implementing strict access controls. Vercel’s June 2025 X post advised limiting tool call access and not trusting outputs, as AI agents are prone to prompt injection.

ServiceNow is enhancing its platform, with AI for security features that automate threat visibility, per the company’s product page. However, as Lou Fiorello, group vice president at ServiceNow, told SiliconANGLE in May 2025, ‘AI is rewriting the rules of cybersecurity,’ necessitating human oversight.

On X, discussions from November 19, 2025, including posts by Ox HaK and Shah Sheikh, emphasize the stealthy nature of second-order injections, urging immediate configuration reviews.

Evolving Defenses in AI Ecosystems

The rise of solutions like AgentGuard signals a shift toward proactive AI security. Crypto Economy News reported on November 19, 2025, that this flaw causes ServiceNow’s agents to ‘turn against each other,’ potentially disrupting enterprise operations.

IRM Consulting & Advisory’s X post from November 13, 2025, warns of memory poisoning and privilege escalation in AI agents, stressing that organizations cannot assume their AI is secure.

Looking ahead, the integration of verifiable agents, as suggested in an X post by An Le on November 14, 2025, could provide traceability and decentralized scaling to prevent such exploits.

Navigating the Agentic AI Frontier

As enterprises adopt more autonomous AI, the ServiceNow vulnerability serves as a wake-up call. Balancing innovation with security will define the next phase of AI deployment.

With threats evolving, continuous monitoring and adaptive defenses are essential. As Meredith Whittaker noted in a March 2025 X post from vitrupo, agentic AI’s need for root access poses ‘real danger’ without proper barriers.

Ultimately, this exploit underscores the need for robust frameworks to ensure AI agents enhance, rather than endanger, enterprise security.

Subscribe for Updates

AgenticAI Newsletter

Explore how AI systems are moving beyond simple automation to proactively perceive, reason, and act to solve complex problems and drive real-world results.

By signing up for our newsletter you agree to receive content related to ientry.com / webpronews.com and our affiliate partners. For additional information refer to our terms of service.

Notice an error?

Help us improve our content by reporting any issues you find.

Get the WebProNews newsletter delivered to your inbox

Get the free daily newsletter read by decision makers

Subscribe
Advertise with Us

Ready to get started?

Get our media kit

Advertise with Us