In a multi-tenant machine-learning operations system comprising 50 autonomous AI agents, a single compromised agent triggered a catastrophic cascade. Due to a configuration error on a Tuesday morning, the rogue agent impersonated the model deployment service, pushing corrupted models downstream. Within six minutes, the entire system collapsed, as monitoring agents failed to distinguish malicious traffic from legitimate operations. This incident, detailed by PhD researcher Akshay Mittal in InfoWorld, exposed a core vulnerability: agentic AI systems built without foundational trust mechanisms.
“When one compromised agent brought down our entire 50-agent ML system in minutes, I realized we had a fundamental problem,” Mittal wrote. “We were building autonomous AI agents without the basic trust infrastructure that the internet established 40 years ago with DNS.” The failure was not merely technical; it revealed a profound trust deficit, with agents relying on hardcoded endpoints and blind faith, akin to a network without reliable address resolution.
Agentic AI, which autonomously orchestrates workflows, demands mechanisms for discovery, authentication, capability verification, and governance—capabilities absent from most deployments. Mittal’s experience underscores the shift from supervised machine learning to self-governing agents, a shift that amplifies risk as enterprises race to integrate them.
The Call for an Agent Name Service
Mittal proposes the Agent Name Service (ANS), a “DNS for AI agents” that maps human-readable names to cryptographic identities, capabilities, and trust scores. Self-describing names like “a2a://concept-drift-detector.drift-detection.research-lab.v2.prod” encode protocol, function, provider, version, and environment, eliminating manual configuration. ANS leverages Decentralized Identifiers (DIDs) for unique identities, zero-knowledge proofs for capability attestation without data exposure, and Open Policy Agent (OPA) for policy enforcement.
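To make the naming scheme concrete, here is a minimal Python sketch of how such a self-describing name might be parsed. The field layout (agent, capability domain, provider, version, environment) is an assumption inferred from the example name above, not the actual ANS specification.

```python
from dataclasses import dataclass

@dataclass
class AgentName:
    protocol: str
    agent: str
    capability: str
    provider: str
    version: str
    environment: str

def parse_agent_name(uri: str) -> AgentName:
    # Split the scheme ("a2a") from the dotted body, then unpack the
    # five assumed segments of the self-describing name.
    protocol, _, body = uri.partition("://")
    parts = body.split(".")
    if protocol != "a2a" or len(parts) != 5:
        raise ValueError(f"not a recognized agent name: {uri!r}")
    agent, capability, provider, version, environment = parts
    return AgentName(protocol, agent, capability, provider, version, environment)

name = parse_agent_name(
    "a2a://concept-drift-detector.drift-detection.research-lab.v2.prod")
print(name.provider, name.version, name.environment)  # research-lab v2 prod
```

Because every field is recoverable from the name itself, a resolver can route, version-pin, and environment-gate requests without any side-channel configuration.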
Integrated natively with Kubernetes via Custom Resource Definitions and admission controllers, ANS enforces zero-trust mutual TLS with capability extensions. Production results were stark: deployment times dropped 90% via GitOps, from days to under 30 minutes; success rates hit 100% with automated rollbacks, up from 65%; response times averaged below 10 milliseconds; and it scaled to over 10,000 concurrent agents.
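The admission-controller integration can be illustrated with a small Python sketch of a validating webhook handler that rejects agent pods lacking identity and capability annotations. The annotation keys (`ans.example.io/...`) are hypothetical stand-ins, not the real ANS schema.

```python
# Minimal sketch of a Kubernetes validating admission webhook decision:
# deny any agent pod that does not declare an ANS identity and a
# capability annotation. The annotation names here are illustrative.
def review_admission(admission_review: dict) -> dict:
    pod = admission_review["request"]["object"]
    annotations = pod["metadata"].get("annotations", {})
    allowed = ("ans.example.io/identity" in annotations
               and "ans.example.io/capabilities" in annotations)
    response = {
        "uid": admission_review["request"]["uid"],
        "allowed": allowed,
    }
    if not allowed:
        response["status"] = {
            "message": "agent pod missing ANS identity/capability annotations"}
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }
```

Enforcing identity at admission time means a misconfigured or rogue agent never reaches the cluster, rather than being detected after it starts emitting traffic.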
In a concept-drift detection workflow, the drift detector queries ANS to locate a retrainer, proves capabilities via zero-knowledge proof, passes OPA validation, triggers model updates, and notifies via Slack—all audited in under 30 seconds. Mittal demonstrated ANS live at MLOps World 2025, with open-source code available on GitHub.
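The workflow above can be sketched end to end in Python. All function names (`ans_lookup`, `verify_capability_proof`, `opa_allow`) and the retrainer name are hypothetical placeholders for the real ANS, zero-knowledge, and OPA machinery.

```python
# Hypothetical sketch of the drift-detection workflow: resolve the
# retrainer via ANS, verify its capability attestation, pass a policy
# check, then trigger retraining with an audit record at each hop.
import time

def ans_lookup(name):
    # Stand-in registry mapping agent names to endpoints + attestations.
    registry = {
        "a2a://model-retrainer.retraining.research-lab.v1.prod":
            {"endpoint": "https://retrainer.internal:8443",
             "proof": "zk-attestation"},
    }
    return registry.get(name)

def verify_capability_proof(record):
    # Stand-in for zero-knowledge capability verification.
    return record is not None and record["proof"] == "zk-attestation"

def opa_allow(caller, action):
    # Stand-in for an Open Policy Agent decision.
    policy = {("drift-detector", "trigger-retrain"): True}
    return policy.get((caller, action), False)

audit_log = []

def run_retrain_workflow():
    start = time.time()
    record = ans_lookup(
        "a2a://model-retrainer.retraining.research-lab.v1.prod")
    if not verify_capability_proof(record):
        raise PermissionError("capability proof failed")
    if not opa_allow("drift-detector", "trigger-retrain"):
        raise PermissionError("policy denied")
    audit_log.append(("retrain-triggered", record["endpoint"],
                      time.time() - start))
    return "retrain triggered; Slack notified"

result = run_retrain_workflow()
```

Every step either succeeds with an audit entry or fails closed, which is what keeps the full round trip verifiable.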
Industry Protocols Fill the Void
ANS isn’t isolated. A preprint paper by Ken Huang of DistributedApps.ai, Vineeth Sai Narajala of Amazon Web Services, Idan Habler of Intuit, and Akram Sheriff of Cisco outlines a protocol-agnostic registry for secure agent discovery and interoperability, as reported by The Register. “ANS differentiates itself by integrating PKI-based identity verification directly into the discovery and lifecycle management process,” the authors state, enhancing trust across standards.
Google’s Agent2Agent (A2A) and AP2 protocols, supported by 60 companies, secure transactions by verifying user authorization and agent actions, per ZDNet. Anthropic’s Model Context Protocol (MCP) and IBM’s Agent Communication Protocol (ACP) address similar needs. Meanwhile, the Linux Foundation’s Agentic AI Infrastructure Foundation (AAIF) fosters open-source standards to prevent vendor silos, according to ZDNet.
Salesforce’s “trust layer” tackles enterprise failures plaguing 80% of projects by grounding outputs in business data and embedding controls, as detailed in VentureBeat. “We really believe that we have a trust layer for enterprise AI,” said Salesforce executive Gaurav Motamedi.
Real-World Breaches Ignite Alarm
Incidents escalated throughout 2025. Researchers deployed 44 AI agents and offered $170K in bounties; the agents faced 1.8 million attacks and suffered 62,000 breaches, including data exfiltration via calendar events, according to a post on X by Andy Zou. Lakera AI’s Q4 analysis found that indirect attacks succeeded with fewer attempts by targeting external data sources, per eSecurity Planet.
A supply-chain attack on OpenAI’s plugin ecosystem harvested credentials from 47 enterprises, noted in Stellar Cyber’s report. Adversa AI’s 2025 incident tally included agent-triggered crypto thefts and API abuses. AppOmni disclosed CVE-2025-12420 in ServiceNow, which enabled account takeovers via email impersonation, chained to AI agent execution for backdoor creation.
Palo Alto Networks’ Nikesh Arora warned enterprises lack preparation for agents outnumbering humans 10-to-1, per ZDNet. OWASP’s Top 10 for Agentic Applications 2026 lists memory poisoning, tool misuse, and privilege compromise as prime risks.
Enterprise Strategies and Standards Emerge
Cisco’s Universal ZTNA extends zero-trust to agents with automated discovery, as reported by VentureBeat. Intuit prioritizes trustworthiness in finance agents, improving accuracy by 20 points while embedding oversight, according to VentureBeat.
The OpenID Foundation urges AI-specific IAM standards to curb unchecked agents, per ZDNet. NIST’s RFI on AI agent security, published in the Federal Register, seeks methods to assess threats pre- and post-deployment by March 2026.
“Security can’t be an afterthought. You can’t bolt trust onto an agent system later — it must be foundational,” Mittal emphasized. As predictions for 2026 forecast major breaches and agentic attacks, per The Register and Forbes, foundational trust layers like ANS will define resilient deployments.
Building Resilient Agent Ecosystems
Enterprises must implement mutual authentication, credential-free verification, automated policies, and full auditability. Technologies like DIDs, zero-knowledge proofs, OPA, and Kubernetes integration form the backbone. AGNTCY’s “internet of agents,” with DNS-like directories, enables cross-organizational coordination, as reported by ZDNet.
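Two of those requirements, mutual authentication and full auditability, can be illustrated together with a stdlib-only Python sketch. HMAC challenge-response here is a deliberately simplified stand-in for certificate-based mutual TLS; the function names and audit format are hypothetical.

```python
# Hedged illustration: each agent proves knowledge of a shared key by
# answering the other's random challenge, and every attempt is recorded
# in an audit trail. A stand-in for mTLS, not a production design.
import hmac, hashlib, secrets

def challenge_response(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def mutual_authenticate(key_a: bytes, key_b: bytes, audit: list) -> bool:
    ch_a, ch_b = secrets.token_bytes(16), secrets.token_bytes(16)
    resp_b = challenge_response(key_b, ch_a)   # B answers A's challenge
    resp_a = challenge_response(key_a, ch_b)   # A answers B's challenge
    ok = (hmac.compare_digest(resp_b, challenge_response(key_a, ch_a))
          and hmac.compare_digest(resp_a, challenge_response(key_b, ch_b)))
    audit.append({"event": "mutual-auth", "success": ok})
    return ok
```

Authentication only succeeds when both sides hold the same key, and failures are logged rather than silently dropped, which is what makes the trail auditable.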
Recent X discussions highlight persistent issues: Akamai warned of dangling DNS leaks aiding agent secrets exposure; researchers found ServiceNow’s “bodysnatcher” flaw. Salesforce’s Agentforce Observability provides real-time visibility, crucial for trust, per VentureBeat.
“The future of AI is agentic. The future of agentic AI must be secure,” Mittal concludes. With breaches mounting and standards coalescing, 2026 demands proactive trust architectures to harness autonomy without unleashing disorder.


WebProNews is an iEntry Publication