45% of AI-Generated Code Vulnerable: Veracode 2025 Report

Veracode's 2025 GenAI Code Security Report reveals that 45% of AI-generated code contains vulnerabilities such as cross-site scripting (XSS) and injection attacks. While AI boosts developer productivity and accelerates workflows, it often overlooks security best practices; human oversight and automated tools can reduce these risks by more than 60%.
Written by Juan Vasquez

In the rapidly evolving world of software development, artificial intelligence is transforming how code is written, but a new report underscores the hidden perils lurking within AI-generated scripts. Veracode, a leader in application security, has released its 2025 GenAI Code Security Report, revealing that nearly half of all code produced by generative AI tools contains significant security vulnerabilities. This finding comes from an exhaustive analysis where researchers prompted over 100 AI models to generate code for common development tasks, only to discover flaws in 45% of instances.

These vulnerabilities often stem from fundamental issues like cross-site scripting (XSS) and injection attacks, where AI fails to implement proper safeguards. For industry insiders, this isn’t just a statistic—it’s a wake-up call about the trade-offs between speed and security in an era when developers increasingly rely on tools like GitHub Copilot or similar large language models to accelerate workflows.
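
To see the failure mode concretely, consider the kind of request handler these tools frequently produce. The snippet below is a hypothetical illustration, not a sample from Veracode's test set; the commented-out line is the vulnerable pattern, and `markupsafe.escape` is the safeguard the report says AI output tends to omit.

```python
# Hypothetical illustration of a reflected XSS flaw and its fix.
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/greet")
def greet():
    name = request.args.get("name", "")
    # Vulnerable pattern: untrusted input interpolated straight into HTML,
    # so ?name=<script>alert(1)</script> executes in the victim's browser.
    # return f"<h1>Hello, {name}!</h1>"

    # Safer pattern: escape untrusted input before embedding it in markup.
    return f"<h1>Hello, {escape(name)}!</h1>"
```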

AI’s Productivity Boost Comes at a Cost

The report highlights how AI excels at producing functional code quickly, boosting developer productivity by automating routine tasks. However, this efficiency masks deeper risks: in tests involving languages such as Java, Python, and JavaScript, Java showed the highest failure rate, with insecure code appearing in over 60% of scenarios. As noted in coverage from BusinessWire, Veracode’s research emphasizes that while AI can cut development time, it also introduces exploitable weaknesses that attackers can find and weaponize faster than ever.

Moreover, the study points to a lack of inherent security awareness in these models. Many AI systems prioritize functionality over best practices, such as input validation or encryption, leading to persistent threats in production environments. This is particularly alarming for enterprises handling sensitive data, where a single flaw could cascade into major breaches.
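
A minimal sketch of that input-validation gap, using a hypothetical user-profile update in Python: the first version is functional, which is all many models optimize for, while the second rejects anything outside an allowlist.

```python
# Hypothetical sketch: functional-but-unvalidated code vs. a hardened version.
import re

def set_username_unsafe(profile: dict, username: str) -> None:
    # Works, but accepts anything: control characters, markup,
    # or a 10 MB string all pass straight into the data layer.
    profile["username"] = username

USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,32}")

def set_username_safe(profile: dict, username: str) -> None:
    # Validate against an allowlist pattern before accepting the value.
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("username must be 3-32 characters: letters, digits, underscore")
    profile["username"] = username
```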

Human Oversight as a Critical Safeguard

Encouragingly, the report doesn’t paint a purely dire picture. Veracode’s findings, echoed in analysis from WebProNews, show that integrating human oversight and automated remediation tools can slash these vulnerabilities by more than 60%. Developers who review and refine AI outputs, perhaps using static code analysis, transform risky code into robust applications.
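
That review step can be partly automated. The toy scanner below, a sketch rather than a production tool, walks the abstract syntax tree of AI-generated Python and flags calls a reviewer should inspect; real teams would run a full static analyzer, but the principle is the same.

```python
# Toy static-analysis pass: flag dangerous calls in generated code.
import ast

FLAGGED_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list[tuple[int, str]]:
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in FLAGGED_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

snippet = "result = eval(user_input)  # a typical AI shortcut"
for lineno, name in find_risky_calls(snippet):
    print(f"line {lineno}: call to {name}() - review before merging")
```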

This hybrid approach—combining AI’s speed with human expertise—emerges as a best practice. For chief information security officers, it means rethinking training programs to include AI literacy, ensuring teams can spot and fix issues like SQL injection that models often overlook.
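
The SQL injection fix a reviewer applies is often a one-line change from string-built queries to parameterized ones. A sketch using Python's standard `sqlite3` module, with a hypothetical `users` table:

```python
# Parameterized queries: the driver binds the value, never the SQL text.
import sqlite3

def find_user(conn: sqlite3.Connection, email: str):
    # Pattern AI models often emit (vulnerable):
    #   conn.execute(f"SELECT * FROM users WHERE email = '{email}'")
    # An input like "' OR '1'='1" would match every row.
    return conn.execute(
        "SELECT * FROM users WHERE email = ?", (email,)
    ).fetchone()
```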

The Broader Implications for Software Supply Chains

Beyond individual tasks, the report warns of systemic risks to software supply chains. As AI-generated code proliferates in open-source repositories and enterprise systems, unaddressed vulnerabilities could amplify threats across ecosystems. Insights from Security Magazine reinforce this, noting that AI also helps attackers identify flaws faster, which makes proactive measures like continuous scanning essential.

Industry leaders must now balance innovation with vigilance. Veracode recommends embedding security prompts in AI queries and adopting platforms that flag issues in real-time, potentially reducing the average remediation time from months to days.
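
What embedding security prompts might look like in practice is a thin wrapper that prepends explicit requirements to every code-generation request. In this sketch the `send_to_model` call is a placeholder, not a real API; substitute whatever LLM client your team uses.

```python
# Hypothetical wrapper that bakes security requirements into every prompt.
SECURITY_PREAMBLE = (
    "When generating code: validate all external input, use parameterized "
    "queries, escape any output rendered as HTML, and never hard-code secrets."
)

def secure_codegen_prompt(task: str) -> str:
    return f"{SECURITY_PREAMBLE}\n\nTask: {task}"

prompt = secure_codegen_prompt("Write a Flask endpoint that looks up a user by email.")
# send_to_model(prompt)  # placeholder: swap in your actual LLM client call
print(prompt)
```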

Charting a Secure Path Forward

Ultimately, the 2025 report serves as a blueprint for safer AI adoption. With public sector organizations facing an average of 315 days to fix flaws, as detailed in Veracode’s related State of Software Security snapshot, the stakes are high. Forward-thinking firms will invest in AI governance frameworks, blending technology with policy to mitigate risks while harnessing generative tools’ potential. As development accelerates, securing the code that powers our digital world demands nothing less than this integrated strategy.
