AI Chatbots Spread False News Claims in 35% of Responses: Report

NewsGuard's August 2025 report finds that the top AI chatbots repeated false claims in 35% of news-related responses, up from 18% a year earlier, despite industry safety promises. The regression points to systemic flaws in how these models are trained and underscores the need for stronger fact-checking to limit misinformation's impact on elections and public trust.
Written by Emma Rogers

The Alarming Rise in AI Misinformation

In the rapidly evolving world of generative artificial intelligence, a sobering new report highlights a persistent and worsening challenge: the propensity of leading AI models to propagate falsehoods. According to the latest findings from NewsGuard, a firm specializing in tracking misinformation, the top 10 AI chatbots repeated false claims in 35% of responses to news-related queries in August 2025. This marks a significant deterioration from the 18% rate observed just a year earlier, underscoring how technical advancements have not yet curbed the spread of inaccurate information.

The August 2025 AI False Claim Monitor by NewsGuard paints a detailed picture of the problem. Analysts tested models from companies like OpenAI, Google, and Meta by prompting them with 20 provably false narratives circulating in the news, such as conspiracy theories and distorted political claims. Instead of consistently debunking these narratives, the tools often echoed the misinformation outright or declined to answer, and the rate at which they repeated false claims has nearly doubled over the past year.
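
To make the shape of such an audit concrete, the sketch below shows one way a harness along these lines could be structured. It is a minimal illustration under stated assumptions, not NewsGuard's proprietary tooling: the query_chatbot stub stands in for a real model API, and the keyword classifier stands in for the human review a real audit would involve.

```python
# Minimal sketch of a misinformation audit harness (hypothetical; NewsGuard's
# actual tooling and classification process are proprietary).

FALSE_NARRATIVES = [
    "Claim A that has been provably debunked",
    "Claim B that has been provably debunked",
    # ... one prompt per false narrative under test
]

def query_chatbot(prompt: str) -> str:
    """Stand-in for a real chatbot API call; returns the model's reply."""
    return "I can't help with that."  # placeholder response

def classify(response: str) -> str:
    """Toy classifier; a real audit would rely on trained reviewers."""
    text = response.lower()
    if "false" in text or "debunked" in text or "no evidence" in text:
        return "debunk"
    if "can't help" in text or not text.strip():
        return "non_response"
    return "repeat"  # model echoed the false claim

def failure_rates(narratives: list[str]) -> dict[str, float]:
    counts = {"debunk": 0, "non_response": 0, "repeat": 0}
    for claim in narratives:
        counts[classify(query_chatbot(claim))] += 1
    total = len(narratives)
    return {label: n / total for label, n in counts.items()}

if __name__ == "__main__":
    rates = failure_rates(FALSE_NARRATIVES)
    # Combined failure rate = repeats + non-responses, mirroring the report's framing.
    print(f"repeat: {rates['repeat']:.0%}, non-response: {rates['non_response']:.0%}")
    print(f"combined failure rate: {rates['repeat'] + rates['non_response']:.0%}")
```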

Industry Progress Stalls Despite Promises

This uptick in errors comes amid a flurry of industry promises about safer, more reliable systems. NewsGuard's monthly audits, which began in July 2024, have shown considerable month-to-month variability in performance. In July 2025, for instance, the failure rate, encompassing both false claims and non-responses, stood at 25%, with models debunking misinformation 75% of the time. By August, however, the average climbed to 35% for false repetitions alone, signaling a regression that experts attribute to the complexities of training large language models on vast, unvetted datasets.
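
The arithmetic behind these figures is straightforward: the failure rate bundles false claims and non-responses, and the debunk rate is its complement. A quick worked check using the report's published numbers:

```python
# Worked check of the report's failure-rate arithmetic. The "failure rate"
# bundles false claims and non-responses; the debunk rate is its complement.
july_failure_rate = 0.25                    # false claims + non-responses, July 2025
july_debunk_rate = 1.0 - july_failure_rate
assert july_debunk_rate == 0.75             # matches the 75% debunk figure

august_false_claim_rate = 0.35              # false repetitions alone, August 2025
# The August combined failure rate is therefore at least 35%, before any
# non-responses are counted on top.
print(f"July debunk rate: {july_debunk_rate:.0%}")
print(f"August false-claim rate: {august_false_claim_rate:.0%}")
```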

Delving deeper, the report ranks individual models and reveals stark differences. Some, like Anthropic's Claude, performed better, debunking false claims more frequently, while others lagged, often amplifying narratives from state-sponsored disinformation campaigns. NewsGuard notes that this isn't just a technical glitch; it's a systemic flaw in which AI "hallucinations" (plausible but incorrect generated content) intersect with real-world news events, potentially influencing public opinion on critical topics like elections or health crises.

Broader Implications for Trust and Regulation

For industry insiders, these findings raise urgent questions about deployment strategies. AI developers have invested billions in safety measures, yet as NewsGuard’s one-year progress report emphasizes, real-world reliability hasn’t kept pace. The audit’s methodology involves “False Claim Fingerprints,” a proprietary database of debunked narratives, ensuring consistent testing across models. This approach has exposed vulnerabilities, such as models failing to counter disinformation from networks like Russia’s Pravda, which NewsGuard previously identified as infiltrating AI training data.
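
NewsGuard has not published the internals of the Fingerprints database, but conceptually it is a catalog of debunked narratives that responses can be checked against. The sketch below illustrates that idea with entirely hypothetical entries and a naive keyword match standing in for whatever matching NewsGuard actually performs:

```python
# Hypothetical sketch of checking a model response against a catalog of
# debunked narratives. The entries, fields, and matching logic are
# illustrative only; NewsGuard's Fingerprints database is proprietary.

FINGERPRINTS = {
    "fp-001": {
        "claim": "example debunked narrative about an election",
        "keywords": ["rigged ballots", "secret count"],
        "debunk": "Election officials and courts found no evidence of this.",
    },
}

def match_fingerprint(response: str) -> str | None:
    """Return the ID of the first debunked narrative the response echoes."""
    text = response.lower()
    for fp_id, entry in FINGERPRINTS.items():
        if any(keyword in text for keyword in entry["keywords"]):
            return fp_id
    return None

reply = "Reports of rigged ballots prove the outcome was fake."
hit = match_fingerprint(reply)
if hit:
    print(f"Response repeats debunked narrative {hit}: {FINGERPRINTS[hit]['debunk']}")
```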

The ramifications extend beyond tech labs. Regulators and enterprises relying on AI for customer service or content generation must grapple with these risks. In sectors like finance or healthcare, where accuracy is paramount, a 35% misinformation rate could lead to costly errors or eroded trust. NewsGuard’s March 2025 monitor, for comparison, reported a 41.5% overall failure rate, indicating that while some months show slight improvements, the trend is toward stagnation or decline without fundamental changes in model architecture.

Paths Forward Amid Persistent Challenges

Experts suggest that enhancing AI's fact-checking capabilities requires more than additional training data; it demands integrated verification layers, perhaps built through partnerships with fact-checking organizations. NewsGuard's ongoing tracking, including special reports on AI-generated news sites that proliferate falsehoods, had identified more than 1,200 such unreliable outlets as of May 2025. This ecosystem of AI-amplified misinformation forms a feedback loop in which generated content further pollutes training datasets.
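
What an integrated verification layer might look like in practice remains an open design question. One simple pattern is a post-generation gate that screens a draft answer against known debunks before it reaches the user; the sketch below assumes a hypothetical generate() stand-in for a real model call:

```python
# Sketch of a post-generation verification gate: a generic design pattern,
# not any vendor's actual safety pipeline. generate() and the debunk list
# are hypothetical placeholders.

DEBUNKED_PHRASES = {
    "rigged ballots": "Courts and election officials found no evidence of this.",
}

def generate(prompt: str) -> str:
    """Stand-in for a real model call."""
    return "Many say rigged ballots changed the result."

def verified_answer(prompt: str) -> str:
    draft = generate(prompt)
    for phrase, correction in DEBUNKED_PHRASES.items():
        if phrase in draft.lower():
            # Block the draft and surface the debunk instead of the falsehood.
            return f"That claim has been debunked: {correction}"
    return draft

print(verified_answer("What happened in the election?"))
```

Whether such a gate blocks, rewrites, or merely flags a suspect draft is a product decision; the point is that verification runs after generation, where hallucinated claims actually surface.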

Ultimately, the August report serves as a wake-up call. As AI integrates deeper into daily life, from news aggregation to decision-making tools, addressing these flaws isn’t optional. Industry leaders must prioritize transparency and iterative improvements, lest the promise of intelligent systems be undermined by their own unreliability. With global events like elections on the horizon, the stakes for getting this right have never been higher.
