Israeli AI Security Startup Irregular Raises $80M at $450M Valuation

Irregular, an AI security startup founded by Israeli entrepreneurs, raised $80 million at a $450 million valuation, backed by Sequoia and Redpoint. It conducts red-teaming for frontier models like ChatGPT and Claude, identifying vulnerabilities for clients including OpenAI and governments. This funding underscores AI security's growing importance in preventing misuse.
Israeli AI Security Startup Irregular Raises $80M at $450M Valuation
Written by Emma Rogers

In the rapidly evolving world of artificial intelligence, a new player has emerged with significant backing to tackle one of the sector’s most pressing challenges: securing advanced AI models against sophisticated threats. Irregular, a startup focused on AI security, has secured $80 million in funding, propelling its valuation to $450 million, as reported by TechCrunch. This investment round, led by prominent venture firms including Sequoia Capital and Redpoint Ventures, underscores the growing urgency among AI developers to fortify their systems against potential misuse.

Founded by Israeli entrepreneurs Dan Lahav and Omer Nevo, Irregular positions itself as the world’s first dedicated frontier AI security lab. The company collaborates with leading AI labs such as OpenAI and Anthropic to evaluate cutting-edge models like ChatGPT and Claude under simulated real-world attack scenarios. These assessments aim to identify vulnerabilities that could lead to malicious exploitation, from data breaches to the generation of harmful content.

Testing the Limits of AI Resilience

Irregular’s approach involves rigorous red-teaming exercises, where experts simulate adversarial attacks to probe AI systems’ defenses. According to details shared in a SecurityWeek report, the startup is already generating millions in revenue by providing these services to high-profile clients, including government entities. This revenue stream highlights the commercial viability of AI security as a standalone industry, especially as frontier models—those at the bleeding edge of capability—become integral to everything from healthcare diagnostics to autonomous vehicles.

Investors are betting big on Irregular’s potential to define industry standards. Sequoia Capital, in a blog post on its site, praised the team for being “ahead of the curve” in establishing security frameworks that ensure safe deployment of AI technologies. The involvement of figures like Wiz’s Assaf Rappaport and Eon’s Ofir Ehrlich adds a layer of credibility, drawing from their expertise in cybersecurity and enterprise software.

Navigating Regulatory and Ethical Minefields

The funding comes at a time when regulators are scrutinizing AI’s risks more closely. A recent bill in New York, as covered by TechCrunch, seeks to impose safety measures on frontier models from companies like OpenAI and Anthropic, reflecting broader concerns about AI-fueled disasters. Irregular’s work directly addresses these issues by helping labs comply with emerging guidelines, potentially averting scenarios where AI could be weaponized for cyberattacks or misinformation campaigns.

Critics, however, question whether such startups can keep pace with the accelerating sophistication of threats. As AI models grow more powerful, so do the methods to subvert them, from prompt injection attacks to model inversion techniques. Irregular’s founders argue that their lab’s focus on real-world simulations sets them apart, allowing for proactive mitigation rather than reactive fixes.

The Broader Implications for AI Development

This investment signals a shift in how the tech industry views AI security—not as an afterthought, but as a core component of innovation. Publications like SiliconANGLE note that Irregular is setting benchmarks for the field, potentially influencing how future AI systems are built and audited. With partnerships extending to government clients, the startup is also poised to shape policy discussions around national security in the AI era.

Looking ahead, Irregular plans to expand its team and capabilities, investing in advanced tools to simulate even more complex threats. As frontier AI continues to push boundaries, companies like this one will play a pivotal role in ensuring that progress doesn’t come at the cost of safety. The $80 million infusion not only validates their mission but also highlights the high stakes involved in securing the next generation of intelligent systems.

Subscribe for Updates

AIDeveloper Newsletter

The AIDeveloper Email Newsletter is your essential resource for the latest in AI development. Whether you're building machine learning models or integrating AI solutions, this newsletter keeps you ahead of the curve.

By signing up for our newsletter you agree to receive content related to ientry.com / webpronews.com and our affiliate partners. For additional information refer to our terms of service.

Notice an error?

Help us improve our content by reporting any issues you find.

Get the WebProNews newsletter delivered to your inbox

Get the free daily newsletter read by decision makers

Subscribe
Advertise with Us

Ready to get started?

Get our media kit

Advertise with Us