Veracode 2025 Report: AI Code Vulnerabilities Hit 45% of Tasks, Cut 60% with Oversight

Veracode’s 2025 GenAI Code Security Report reveals that AI-generated code introduces security vulnerabilities in 45% of development tasks, with models routinely failing to guard against common threats like cross-site scripting (XSS) and injection attacks. As AI boosts productivity, the risks escalate without safeguards; integrating remediation tools and human oversight can cut flaw rates by more than 60%.
Written by Emma Rogers

In the rapidly evolving world of software development, artificial intelligence is reshaping how code is written, but a new report from Veracode underscores a troubling reality: AI-generated code often comes laced with serious security flaws. Released today, the 2025 GenAI Code Security Report reveals that in nearly half of all development tasks—45% to be precise—AI tools introduce vulnerabilities that could expose organizations to cyberattacks. Researchers at Veracode tested over 100 leading AI models, from large language models to specialized coding assistants, and found consistent shortcomings in producing secure code.

The study, which analyzed tasks ranging from simple scripts to complex applications, highlighted particular weaknesses in handling common threats like cross-site scripting (XSS) and insecure data handling. For instance, when tasked with generating Java code, many models failed to implement proper input validation, leaving the door open to injection attacks. This isn’t just a theoretical concern; as developers increasingly rely on AI to boost productivity, these flaws could propagate into production environments at scale.
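
To see what that failure mode looks like in practice, consider a minimal sketch in Java. The class, method, and table names here are hypothetical illustrations, not code from the report; the contrast between string-concatenated SQL and a parameterized query is the point.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserLookup {

    // VULNERABLE: the pattern coding assistants often emit. Concatenating raw
    // input into the SQL string lets an attacker supply, e.g., ' OR '1'='1
    // and change the meaning of the query.
    static ResultSet findUserUnsafe(Connection conn, String username) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
            "SELECT id, email FROM users WHERE username = '" + username + "'");
    }

    // SAFER: a parameterized query keeps data separate from SQL structure,
    // so injected quotes are treated as literal characters, not syntax.
    static ResultSet findUserSafe(Connection conn, String username) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
            "SELECT id, email FROM users WHERE username = ?");
        ps.setString(1, username);
        return ps.executeQuery();
    }
}
```

The safe version costs one extra line; the report’s concern is that models frequently produce the first pattern unprompted unless security requirements are made explicit.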

The Perils of Automated Coding

Industry experts have long warned about the double-edged sword of AI in coding, but Veracode’s data provides empirical evidence. According to the report, smaller AI models performed even worse than their larger counterparts, with error rates soaring in secure code generation. A parallel analysis by IT Pro corroborates this, noting that models struggled profoundly with Java security, often outputting code vulnerable to exploits that hackers could weaponize in minutes.
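
Cross-site scripting, the other weakness the report singles out, follows the same shape. The snippet below is an illustrative sketch rather than code from the study: a hypothetical page renderer that echoes a user-supplied name, first unescaped, then encoded for the HTML context.

```java
public class GreetingPage {

    // VULNERABLE: echoing user input into HTML unescaped. A "name" such as
    // <script>stealCookies()</script> would execute in the victim's browser.
    static String renderUnsafe(String name) {
        return "<h1>Hello, " + name + "!</h1>";
    }

    // SAFER: encode for the HTML context before interpolating.
    static String renderSafe(String name) {
        return "<h1>Hello, " + escapeHtml(name) + "!</h1>";
    }

    // Minimal escaping of the five HTML-significant characters, kept inline
    // so the example is self-contained.
    static String escapeHtml(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            switch (c) {
                case '&'  -> out.append("&amp;");
                case '<'  -> out.append("&lt;");
                case '>'  -> out.append("&gt;");
                case '"'  -> out.append("&quot;");
                case '\'' -> out.append("&#39;");
                default   -> out.append(c);
            }
        }
        return out.toString();
    }
}
```

In production code, a vetted library encoder such as the OWASP Java Encoder is preferable to a hand-rolled helper; the helper above exists only to keep the sketch runnable on its own.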

Beyond the numbers, the implications for enterprises are profound. Chris Wysopal, Veracode’s CTO and co-founder, emphasized in a recent discussion that while AI can accelerate development by up to 50%, it amplifies risk if not paired with robust security checks. This echoes findings from Veracode’s earlier work, such as their State of Software Security 2025 report, which urged organizations to shift from reactive patching to proactive risk management.

Navigating AI’s Security Minefield

To mitigate these risks, Veracode recommends integrating AI-powered remediation tools that can automatically detect and fix flaws in generated code. Their research shows that when combined with human oversight, such tools reduce vulnerability rates by over 60%. This approach aligns with broader industry trends, as detailed in a BlackFog analysis of 2025’s top AI vulnerabilities, which stresses the need for layered defenses against data poisoning and model inversion attacks.

However, challenges remain. Many organizations lack the maturity to implement these strategies effectively. Veracode’s maturity model, outlined in their webinar on software security, ranks firms on metrics like flaw detection speed and remediation efficiency, revealing that lagging companies fix only 20% of AI-introduced issues within a month.

Strategies for a Secure Future

Forward-thinking leaders are already adapting. For example, incorporating external attack surface management, as Veracode announced in an April Business Wire release, provides end-to-end visibility into AI-generated risks. Meanwhile, a timeline of GenAI breaches from 2023 to 2025, compiled by Wald.ai, highlights recurring mistakes like insufficient access controls, urging CISOs to prioritize AI-specific security training.

Ultimately, as AI becomes indispensable, the onus is on developers and security teams to treat it as a powerful but fallible tool. Veracode’s findings serve as a wake-up call: embracing AI without stringent safeguards could turn productivity gains into costly breaches. By weaving security into the AI workflow from the outset, organizations can harness its potential while minimizing dangers, ensuring that innovation doesn’t come at the expense of resilience.
