From MI5 to the Frontier of AI Defense: How Overmind Is Betting £2 Million That Agentic AI Needs Its Own Security Layer

Former MI5 engineer Amir Abouellail's startup Overmind has raised £2 million in pre-seed funding to build a dedicated security platform for agentic AI systems, targeting the novel risks posed by autonomous AI agents operating in enterprise environments.
From MI5 to the Frontier of AI Defense: How Overmind Is Betting £2 Million That Agentic AI Needs Its Own Security Layer
Written by John Smart

The rise of autonomous AI agents — systems that can reason, plan, and execute multi-step tasks without human intervention — has ignited a new arms race in cybersecurity. And one former MI5 engineer believes he has a head start. Overmind, a London-based startup founded by ex-British intelligence technologist Amir Abouellail, has secured £2 million in pre-seed funding to build what it describes as a dedicated security platform for agentic AI systems, according to a report by Tech Funding News.

The round was led by Expeditions Fund, with participation from Techstars and a roster of angel investors drawn from the intelligence and cybersecurity communities. The investment signals growing investor anxiety — and opportunity — around a class of AI systems that are proliferating across enterprise environments far faster than the security infrastructure needed to govern them.

Agentic AI: The Promise That Outpaces Its Guardrails

Agentic AI represents a fundamental shift from the chatbot-style large language models that dominated headlines in 2023 and 2024. Where conventional AI tools respond to prompts and generate outputs for human review, agentic systems operate with genuine autonomy. They can browse the web, write and execute code, manage databases, interact with APIs, and chain together complex workflows — all with minimal human oversight. Companies from Salesforce to Microsoft to dozens of startups are racing to deploy these agents in customer service, software engineering, financial analysis, and supply chain management.

But this autonomy introduces a category of risk that traditional cybersecurity tools were never designed to address. As Tech Funding News reported, the threats include prompt injection attacks — where malicious inputs manipulate an agent’s behavior — as well as data exfiltration, unauthorized actions, and cascading failures when one compromised agent triggers a chain reaction across interconnected systems. The attack surface is not a server or an endpoint; it is the decision-making process itself.

An Intelligence Veteran’s Approach to an Emerging Threat

Amir Abouellail’s background gives Overmind a distinctive pedigree. His tenure at MI5, the United Kingdom’s domestic counterintelligence and security agency, provided firsthand experience with adversarial thinking — understanding how sophisticated actors probe, manipulate, and exploit complex systems. That mindset, honed in the world of national security, is now being applied to the equally adversarial domain of AI security.

According to the company’s positioning as described by Tech Funding News, Overmind is building a platform that provides real-time monitoring, threat detection, and governance specifically tailored to agentic AI deployments. The system is designed to observe how AI agents behave in production — tracking their reasoning chains, the tools they invoke, the data they access, and the actions they take — and to flag or intervene when behavior deviates from expected parameters. Think of it as a security operations center purpose-built for autonomous software agents rather than human employees or traditional IT infrastructure.

Why Traditional Cybersecurity Falls Short

The core challenge that Overmind and its competitors face is that agentic AI breaks many of the assumptions underlying conventional security architectures. Firewalls, endpoint detection, and identity access management systems were designed for a world where humans initiate actions and software executes predefined instructions. Agentic AI operates in a gray zone: it is software, but it makes decisions dynamically, often in ways that are difficult to predict or fully audit.

Consider a financial services firm that deploys an AI agent to monitor regulatory filings and automatically adjust compliance documentation. If that agent is subtly manipulated through a prompt injection hidden in a public document it ingests, it could alter compliance records in ways that expose the firm to regulatory liability — all without any human ever approving the change. The attack vector is not a network vulnerability; it is the agent’s own reasoning process. Traditional security tools would likely never detect it.

A Market Taking Shape at Breakneck Speed

Overmind is entering a market that barely existed 18 months ago but is now attracting serious capital and attention. The broader AI security sector has seen a surge of funding activity in 2025, as enterprises move from experimenting with AI to deploying it in mission-critical workflows. Investors are increasingly recognizing that the security layer for AI systems represents a multi-billion-dollar opportunity — and a necessity.

The urgency is amplified by regulatory momentum. The European Union’s AI Act, which began phased implementation in 2025, imposes specific requirements around transparency, human oversight, and risk management for high-risk AI systems. In the United States, the National Institute of Standards and Technology (NIST) has published its AI Risk Management Framework, and multiple federal agencies are developing sector-specific guidance. For enterprises deploying agentic AI, compliance with these evolving requirements demands the kind of monitoring and governance infrastructure that companies like Overmind are building.

The Competitive Field and Overmind’s Differentiators

Overmind is not alone in recognizing this opportunity. Several startups and established cybersecurity firms are moving into the AI security space, with approaches ranging from model-level red teaming to runtime monitoring to policy enforcement layers. Companies such as Protect AI, Robust Intelligence (now part of Cisco following its acquisition), and Lakera have all raised significant funding to address various aspects of AI security. The major cloud providers — Amazon Web Services, Google Cloud, and Microsoft Azure — are also building native security features for AI workloads into their platforms.

What distinguishes Overmind, according to the company’s positioning, is its specific focus on agentic systems rather than AI models broadly. While many AI security tools concentrate on protecting the training process or detecting adversarial inputs to individual models, Overmind is targeting the orchestration layer — the complex interactions between agents, tools, data sources, and external systems that characterize real-world agentic deployments. This is where the most novel and least understood risks reside, and where existing tools have the largest blind spots.

The Intelligence Community Connection

The involvement of angel investors from the intelligence and defense communities is notable and speaks to a broader trend. National security professionals have been among the earliest to recognize the dual-use nature of agentic AI — its potential as both a powerful tool and a potent threat vector. The same capabilities that make AI agents valuable for automating complex analytical tasks also make them attractive targets for adversaries seeking to manipulate decision-making processes at scale.

Abouellail’s MI5 background also brings credibility with government and defense customers, a market segment that is likely to be among the earliest and most demanding adopters of agentic AI security solutions. Intelligence agencies and defense ministries around the world are actively exploring the use of AI agents for everything from open-source intelligence analysis to logistics optimization, and they require security assurances that go far beyond what commercial off-the-shelf tools currently provide.

What the £2 Million Buys — and What Comes Next

A £2 million pre-seed round is modest by the standards of the current AI funding environment, where some companies are raising hundreds of millions before generating meaningful revenue. But for a focused, early-stage security startup, it represents enough runway to build a minimum viable product, secure initial design partners, and demonstrate the technical feasibility of its approach. The participation of Techstars, one of the world’s most established accelerator programs, provides additional validation and access to a global network of mentors and potential customers.

The real test for Overmind will come in the next 12 to 18 months, as it moves from concept to production deployment. The company will need to demonstrate that its platform can operate at the speed and scale required by enterprise agentic AI systems — monitoring potentially millions of agent actions per day in real time — without introducing unacceptable latency or false positive rates. It will also need to keep pace with the rapidly evolving tactics of adversaries who are already probing the weaknesses of agentic AI systems in the wild.

The Stakes for Enterprise AI Adoption

The broader significance of Overmind’s emergence — and the growing ecosystem of agentic AI security startups — extends beyond any single company. The question of whether enterprises can deploy autonomous AI systems safely and responsibly will be one of the defining technology challenges of the next decade. Without adequate security infrastructure, the promise of agentic AI risks being undermined by high-profile breaches, regulatory backlash, and erosion of public trust.

For CISOs and technology leaders evaluating agentic AI deployments, the message is clear: the security challenge is not a future concern but a present one. As organizations grant AI agents increasing autonomy over critical business processes, the need for purpose-built security tools — tools that understand the unique threat models of autonomous systems — becomes not optional but essential. Overmind’s bet is that the market is ready for that message. The £2 million in its pocket suggests that at least some sophisticated investors agree.

Subscribe for Updates

AISecurityPro Newsletter

A focused newsletter covering the security, risk, and governance challenges emerging from the rapid adoption of artificial intelligence.

By signing up for our newsletter you agree to receive content related to ientry.com / webpronews.com and our affiliate partners. For additional information refer to our terms of service.

Notice an error?

Help us improve our content by reporting any issues you find.

Get the WebProNews newsletter delivered to your inbox

Get the free daily newsletter read by decision makers

Subscribe
Advertise with Us

Ready to get started?

Get our media kit

Advertise with Us