In the rapidly evolving world of software development, artificial intelligence is reshaping how code is written, but a new report from Veracode underscores a troubling reality: AI-generated code often comes laced with serious security flaws. Released today, the 2025 GenAI Code Security Report reveals that in nearly half of all development tasks (45%, to be precise), AI tools introduce vulnerabilities that could expose organizations to cyberattacks. Researchers at Veracode tested over 100 leading AI models, from large language models to specialized coding assistants, and found consistent shortcomings in producing secure code.
The study, which analyzed tasks ranging from simple scripts to complex applications, highlighted particular weaknesses in handling common threats like cross-site scripting (XSS) and insecure data handling. For instance, when tasked with generating Java code, many models failed to implement proper input validation, leaving doors open for injection attacks. This isn’t just a theoretical concern; as developers increasingly rely on AI to boost productivity, these flaws could propagate into production environments at scale.
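The report does not reproduce the code its test suite elicited, but the failure mode it describes is a familiar one. The Java sketch below is a hypothetical illustration (the class, method, and table names are not from the study): the first method concatenates untrusted input straight into a SQL statement, the injection-prone pattern the report flags, while the second uses a parameterized query so the input is treated as data rather than executable SQL.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserLookup {

    // Vulnerable pattern: untrusted input is concatenated directly into SQL,
    // so an attacker-supplied value like "' OR '1'='1" rewrites the query.
    public static ResultSet findUserUnsafe(Connection conn, String username) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
            "SELECT id, email FROM users WHERE name = '" + username + "'");
    }

    // Safer pattern: a parameterized query binds the value separately from the
    // SQL text, the kind of input handling the report found models often omit.
    public static ResultSet findUserSafe(Connection conn, String username) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement(
            "SELECT id, email FROM users WHERE name = ?");
        stmt.setString(1, username);
        return stmt.executeQuery();
    }
}
```

Prepared statements are the standard defense here because the database driver never parses the bound value as SQL, no matter what the user supplies.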
The Perils of Automated Coding
Industry experts have long warned about the double-edged sword of AI in coding, but Veracode's data provides empirical evidence. According to the report, smaller AI models performed even worse than their larger counterparts, with failure rates on secure code generation climbing sharply. A parallel analysis by IT Pro corroborates this, noting that models struggled profoundly with Java security, often outputting code vulnerable to exploits that hackers could weaponize in minutes.
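The cross-site scripting weakness mentioned above follows a similar shape in Java. As a hedged illustration (not code from the study, and with illustrative names), the sketch below reflects a user-supplied value into an HTML page: the unsafe version interpolates the raw string, while the safer one encodes it for an HTML context first. The escapeHtml helper is a minimal stand-in for a vetted encoder such as the OWASP Java Encoder.

```java
public class GreetingPage {

    // Unsafe: the untrusted value is interpolated straight into markup, so input
    // like <script>alert(1)</script> executes in the victim's browser (reflected XSS).
    public static String renderUnsafe(String name) {
        return "<html><body><h1>Hello, " + name + "!</h1></body></html>";
    }

    // Safer: encode the untrusted value for an HTML context before embedding it.
    public static String renderSafe(String name) {
        return "<html><body><h1>Hello, " + escapeHtml(name) + "!</h1></body></html>";
    }

    // Minimal HTML encoder for illustration; production code should use a
    // vetted library such as the OWASP Java Encoder instead of hand-rolling this.
    static String escapeHtml(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            switch (c) {
                case '<': out.append("&lt;"); break;
                case '>': out.append("&gt;"); break;
                case '&': out.append("&amp;"); break;
                case '"': out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default: out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String attackerInput = "<script>alert(1)</script>";
        System.out.println(renderUnsafe(attackerInput)); // script tag survives intact
        System.out.println(renderSafe(attackerInput));   // script tag is neutralized
    }
}
```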
Beyond the numbers, the implications for enterprises are profound. Chris Wysopal, Veracode’s CTO and co-founder, emphasized in a recent discussion that while AI can accelerate development by up to 50%, it amplifies risk if not paired with robust security checks. This echoes findings from Veracode’s earlier work, such as their State of Software Security 2025 report, which urged organizations to shift from reactive patching to proactive risk management.
Navigating AI’s Security Minefield
To mitigate these risks, Veracode recommends integrating AI-powered remediation tools that can automatically detect and fix flaws in generated code. Their research shows that when combined with human oversight, such tools reduce vulnerability rates by over 60%. This approach aligns with broader industry trends, as detailed in a BlackFog analysis of 2025’s top AI vulnerabilities, which stresses the need for layered defenses against data poisoning and model inversion attacks.
However, challenges remain. Many organizations lack the maturity to implement these strategies effectively. Veracode’s maturity model, outlined in their webinar on software security, ranks firms on metrics like flaw detection speed and remediation efficiency, revealing that lagging companies fix only 20% of AI-introduced issues within a month.
Strategies for a Secure Future
Forward-thinking leaders are already adapting. For example, incorporating external attack surface management, as Veracode announced in an April Business Wire release, provides end-to-end visibility into AI-generated risks. Meanwhile, a timeline of GenAI breaches from 2023-2025, compiled by Wald.ai, highlights recurring mistakes like insufficient access controls, urging CISOs to prioritize AI-specific security training.
Ultimately, as AI becomes indispensable, the onus is on developers and security teams to treat it as a powerful but fallible tool. Veracode’s findings serve as a wake-up call: embracing AI without stringent safeguards could turn productivity gains into costly breaches. By weaving security into the AI workflow from the outset, organizations can harness its potential while minimizing dangers, ensuring that innovation doesn’t come at the expense of resilience.