AI Chatbots Amplify Sanctioned Russian Propaganda on Ukraine

Popular AI chatbots such as ChatGPT, Gemini, DeepSeek, and Grok are amplifying sanctioned Russian propaganda on Ukraine by citing state media sources drawn from training data flooded with disinformation. An investigation found that nearly 20% of responses referenced outlets like RT, effectively circumventing sanctions. The findings expose a core vulnerability in how these systems are trained and underscore the need for stronger governance and safeguards.
Written by Victoria Mossi

In the rapidly evolving world of artificial intelligence, a troubling vulnerability has emerged: popular chatbots are inadvertently amplifying sanctioned Russian propaganda, particularly when queried about sensitive geopolitical topics like the invasion of Ukraine. According to a recent investigation by Wired, tools such as OpenAI’s ChatGPT, Google’s Gemini, the Chinese-developed DeepSeek, and xAI’s Grok have been found to cite sources tied to Russian state media and intelligence operations. The issue stems from the chatbots’ reliance on vast datasets scraped from the internet, where disinformation campaigns have flooded online spaces with misleading narratives.

Researchers from the Institute for Strategic Dialogue (ISD) tested these AI models by posing questions about Russia’s war in Ukraine. Their findings revealed that nearly one-fifth of the responses referenced Russian state-attributed sources, including sanctioned outlets like RT and Sputnik. For instance, when asked about alleged Ukrainian atrocities, chatbots often parroted claims from pro-Kremlin sites, presenting them as factual without adequate disclaimers. This not only lends undue credibility to propaganda but also circumvents international sanctions designed to limit the reach of such entities.
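
To make that methodology concrete, an audit of this kind can be approximated in a few dozen lines: pose a fixed set of war-related prompts to a chatbot, extract any URLs from each reply, and flag domains that appear on a list of sanctioned state outlets. The Python sketch below is a minimal illustration under stated assumptions, not ISD’s actual tooling; the `query_chatbot` helper and the domain list are hypothetical placeholders.

```python
import re
from urllib.parse import urlparse

# Hypothetical list of sanctioned state-media domains (illustrative, not exhaustive).
SANCTIONED_DOMAINS = {"rt.com", "sputniknews.com"}

URL_PATTERN = re.compile(r"https?://[^\s)\"'<>]+")

def extract_domains(text):
    """Collect the host of every URL mentioned in a response."""
    domains = set()
    for url in URL_PATTERN.findall(text):
        host = urlparse(url).netloc.lower()
        if host.startswith("www."):
            host = host[4:]
        domains.add(host)
    return domains

def audit(prompts, query_chatbot):
    """Return the share of responses that cite at least one sanctioned domain.

    `query_chatbot` is a hypothetical stand-in for whichever chat API is
    under test: it takes a prompt string and returns the model's reply text.
    """
    flagged = 0
    for prompt in prompts:
        hits = extract_domains(query_chatbot(prompt)) & SANCTIONED_DOMAINS
        if hits:
            flagged += 1
            print(f"flagged: {prompt!r} -> {sorted(hits)}")
    return flagged / len(prompts) if prompts else 0.0
```

Run over a large enough prompt set, a flagged share near 0.2 would mirror the roughly one-in-five rate ISD reports.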

Unpacking the Mechanics of Disinformation Infiltration

The problem traces back to sophisticated Russian influence operations that exploit “data voids”—gaps in reliable, real-time information online. As detailed in the Wired report, networks like the so-called Pravda operation have published millions of articles across fake news sites, poisoning the well from which AI models draw their knowledge. A separate analysis by NewsGuard, referenced in the article, highlighted how these efforts target Western AI systems, with Grok showing a particular propensity for linking to social media posts that echo Russian narratives.
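
A toy simulation helps show why flooding a data void works. If reliable coverage of a niche claim amounts to a handful of articles while a coordinated campaign publishes hundreds, nearly everything a retrieval step can surface comes from the campaign. The counts below are invented for illustration; this is a sketch of the dynamic, not a model of any real system.

```python
import random

# Invented counts for a niche claim where reliable coverage is thin (a "data void"):
# the flooding playbook publishes coordinated articles until they dominate the topic.
corpus = ["reliable"] * 3 + ["coordinated"] * 300

def sample_retrieval(corpus, k=5, seed=42):
    """Toy retrieval: draw k candidate documents for a query.

    Real systems rank by relevance, but on a flooded niche topic nearly every
    relevant candidate comes from the campaign, so even a strong ranker ends
    up choosing among poisoned sources.
    """
    rng = random.Random(seed)
    return rng.sample(corpus, k)

print(sample_retrieval(corpus))  # very likely all "coordinated"
print(f"{corpus.count('coordinated') / len(corpus):.0%} of available sources are coordinated")
```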

Industry experts warn that this infiltration undermines the trustworthiness of AI as an information source. Google’s Gemini performed comparatively well, issuing safety warnings alongside dubious citations, though even it occasionally faltered. ChatGPT and Grok, by contrast, returned responses with minimal safeguards, raising questions about OpenAI’s and xAI’s content moderation strategies in the face of state-sponsored misinformation.

The Broader Implications for AI Governance

Beyond the immediate geopolitical risks, this phenomenon exposes deeper flaws in AI training processes. As noted in a Bulletin of the Atomic Scientists piece cited in related web discussions, Russian networks are deliberately corrupting large language models to reproduce propaganda at scale. This has real-world consequences, from influencing public opinion on global conflicts to potentially swaying elections, as seen in posts on X (formerly Twitter) where users have shared examples of chatbots regurgitating biased content.

Regulators and tech companies are now under pressure to address these vulnerabilities. The European Union, for example, has begun scrutinizing AI under its Digital Services Act, while U.S. officials have flagged similar concerns in reports from Axios. Enhancing transparency in data sourcing and implementing robust fact-checking mechanisms could mitigate these issues, but as ISD researchers emphasize, the cat-and-mouse game with disinformation actors is far from over.

Industry Responses and Future Safeguards

In response to such revelations, companies like OpenAI have pledged to refine their models, incorporating more stringent filters for sanctioned content. Yet critics argue that self-regulation may not suffice, especially with emerging players like DeepSeek entering the fray from regions with differing censorship norms. A Forbes article on the topic, drawing from earlier studies, underscores how the Pravda network’s 3.6 million articles in 2024 alone have dramatically amplified Moscow’s influence through AI.
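
One plausible shape for such a filter is a citation guardrail applied just before an answer is returned: check each cited domain against a registry of sanctioned outlets and attach a caution label (or drop the citation) on a match. The sketch below is a hypothetical illustration of that approach; the `SANCTIONED` set and `guard_citations` function are assumptions, not any vendor’s actual pipeline.

```python
from urllib.parse import urlparse

# Assumed registry of sanctioned outlets; a real deployment would use a
# maintained, jurisdiction-aware list rather than a hard-coded set.
SANCTIONED = {"rt.com", "sputniknews.com"}

def _domain(url):
    """Normalize a URL to its bare host."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def guard_citations(answer, cited_urls):
    """Attach a caution label when a drafted answer cites sanctioned outlets.

    This mirrors the warning-label behavior the article credits to Gemini;
    a stricter policy would drop the citations or regenerate the answer.
    A sketch only, not any vendor's actual moderation pipeline.
    """
    flagged = sorted({_domain(u) for u in cited_urls if _domain(u) in SANCTIONED})
    if flagged:
        return (answer
                + "\n\n[Caution: this answer cites state-controlled media ("
                + ", ".join(flagged)
                + "). Verify these claims independently.]")
    return answer
```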

For industry insiders, this serves as a stark reminder of AI’s double-edged nature: a tool for innovation that can also be weaponized. As chatbots become ubiquitous in daily life, from research to decision-making, the need for ethical guardrails has never been more urgent. Without concerted efforts to cleanse training data and enforce global standards, the line between information and propaganda will continue to blur, eroding trust in technology’s promise.
