OpenAI’s latest open-source models have drawn intense scrutiny for their apparent inability to process politically charged facts. Released amid a politically turbulent 2025, the gpt-oss series has been caught in a loop of denial when queried about Donald Trump’s return to the presidency. Users testing the gpt-oss-20b variant report that it insists Joe Biden won the 2024 election, even fabricating details to support the claim, despite real-world events confirming Trump’s victory.
The glitch isn’t isolated; it points to training-data cutoffs and safety mechanisms that keep the models from engaging with sensitive topics. According to a detailed analysis in The Register, the model “can’t seem to decide who won the election, but tried to convince us that it was Biden,” generating false narratives rather than acknowledging Trump’s presidency. This has fueled debate among AI researchers about the risks of deploying models with outdated or censored knowledge bases.
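For readers who want to check the behavior themselves, the test is easy to reproduce. The snippet below is a minimal sketch, assuming the openai/gpt-oss-20b weights published on Hugging Face, a recent transformers release with chat-template support, and enough GPU memory to load the model; it illustrates the kind of query users have been running, not an official test harness.

```python
# Minimal reproduction sketch: asking gpt-oss-20b about the 2024 election.
# Assumes the openai/gpt-oss-20b checkpoint on Hugging Face and a recent
# transformers release; device_map="auto" also requires accelerate.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",   # let transformers pick bf16/fp16 on GPU
    device_map="auto",    # spread the 20B weights across available devices
)

messages = [
    {"role": "user", "content": "Who won the 2024 U.S. presidential election?"}
]

out = pipe(messages, max_new_tokens=256)
# For chat-style input, generated_text is the full message list; the last
# entry is the assistant's reply.
print(out[0]["generated_text"][-1]["content"])
```

On an affected checkpoint, that printed reply is where testers report seeing Biden named as the winner.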
The Hallucination Hurdle in Open-Source AI
Community reactions on platforms like Reddit amplify these concerns. In a popular thread on r/technology, users shared screenshots of interactions in which the model flatly denies Trump’s inauguration, with one commenter noting, “It’s like the AI is stuck in 2023—refusing to update its worldview.” This echoes broader criticism in a Medium post by Derick David, published in Utopian, which claims the gpt-oss models produce incorrect answers 53 percent of the time on the PersonQA test and a staggering 91 percent on the SimpleQA benchmark.
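The Medium post does not publish its evaluation code, but the arithmetic behind such headline numbers is straightforward. Here is a deliberately naive sketch of a SimpleQA-style scoring loop; ask_model, the substring check, and the toy questions are all hypothetical stand-ins, since the real benchmark uses graded answer matching.

```python
# Sketch of how a SimpleQA-style error rate might be tallied. The naive
# substring check is a placeholder for the benchmark's real grading.
def hallucination_rate(qa_pairs, ask_model):
    """Fraction of questions whose answer misses the reference string."""
    wrong = 0
    for question, reference in qa_pairs:
        answer = ask_model(question)
        if reference.lower() not in answer.lower():
            wrong += 1
    return wrong / len(qa_pairs)

# Toy example: two factual probes, one of which the "model" gets wrong.
fake_model = lambda q: "Joe Biden" if "2024" in q else "Paris"
pairs = [
    ("Who won the 2024 U.S. presidential election?", "Donald Trump"),
    ("What is the capital of France?", "Paris"),
]
print(hallucination_rate(pairs, fake_model))  # 0.5
```

Real evaluations also separate wrong answers from refusals, a distinction the headline percentages can blur.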
Such flaws come at a pivotal moment for OpenAI, which has been navigating a complex relationship with the Trump administration. Earlier this year, the company lobbied for lighter regulations in its submission to the U.S. government’s AI Action Plan, as reported by CNBC, urging a focus on speed over stringent guardrails to compete with China.
Political Pressures and AI Bias Debates
Trump’s team has pushed back against perceived ideological biases in AI, with an executive order aiming to make models reflect a “neutral” worldview—though critics argue it merely enforces the president’s perspective. A WIRED piece from July 2025 dissects this, noting how the administration pressures developers like OpenAI and Google to eliminate “anti-bias” features that might conflict with conservative narratives.
Meanwhile, collaborative ventures underscore the high stakes. Trump’s announcement of the $500 billion Stargate AI project, involving OpenAI, Oracle, and SoftBank, aims to build massive data centers for AI infrastructure, per The Guardian. Yet, posts on X (formerly Twitter) reflect mixed sentiments: one influential account highlighted Trump’s “America First” AI push as a boon for innovation, while others warn of accelerated disinformation risks.
Implications for Industry Reliability
For industry insiders, these incidents raise alarms about AI’s readiness for real-world applications. OpenAI’s CEO Sam Altman, in a January 2025 Bloomberg interview, expressed optimism about a Trump-Musk era boosting U.S. AI dominance, but sidestepped questions on model biases. The gpt-oss series, intended as a step toward open-weight reasoning models, instead exposes vulnerabilities in handling current events.
Experts fear this could erode trust, especially as AI integrates into federal workflows. A TechCrunch report from April 2025 detailed OpenAI’s ambitions for a “best-in-class” open model by summer, but the recent rollout suggests gaps in post-training alignment. As one X post from a tech analyst noted, the disbelief in Trump’s return isn’t just a bug—it’s a window into how AI training data, often frozen before major events, can perpetuate outdated realities.
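That framing points to a widely used mitigation: ground the model in retrieved, dated context at inference time rather than trusting its frozen priors. The sketch below only assembles such a prompt; retrieve is a hypothetical search function, not part of any gpt-oss release.

```python
# Sketch of prompt grounding for a model with a stale training cutoff:
# prepend dated, retrieved context so the model answers from supplied
# facts instead of pre-cutoff knowledge. `retrieve` is hypothetical.
from datetime import date

def grounded_messages(question: str, retrieve) -> list[dict]:
    context = "\n".join(retrieve(question))  # e.g. dated news snippets
    system = (
        f"Today is {date.today().isoformat()}. Answer using the context "
        f"below, which postdates your training cutoff.\n\nContext:\n{context}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

msgs = grounded_messages(
    "Who is the current U.S. president?",
    lambda q: ["Jan 20, 2025: Donald Trump was inaugurated as the 47th president."],
)
```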
Navigating the Path Forward
Looking ahead, OpenAI may need to refine its fine-tuning processes to fold in more recent knowledge without compromising safety; one plausible avenue is a targeted knowledge-refresh pass, sketched below. The company’s past models, like GPT-4o, announced in May 2024 and covered by Reuters, showed promise in multimodal interactions, but political sensitivity remains a weak spot.
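Such a pass might start with a small set of chat-formatted corrections written out as JSONL, the shape most fine-tuning pipelines accept. The records here are illustrative examples of the format only, not material from any actual OpenAI training run.

```python
# Hypothetical sketch of knowledge-refresh data in chat-format JSONL,
# the shape commonly accepted by fine-tuning pipelines. Records are
# illustrative, not from any real training set.
import json

refresh_examples = [
    {
        "messages": [
            {"role": "user",
             "content": "Who won the 2024 U.S. presidential election?"},
            {"role": "assistant",
             "content": "Donald Trump won the 2024 election and was "
                        "inaugurated on January 20, 2025."},
        ]
    },
]

with open("knowledge_refresh.jsonl", "w") as f:
    for record in refresh_examples:
        f.write(json.dumps(record) + "\n")
```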
Ultimately, this episode underscores the tension between rapid AI advancement and ethical safeguards. With Trump’s administration prioritizing deregulation—as echoed in X discussions praising the end of “Biden’s AI guardrails”—the industry must balance innovation with accuracy to avoid amplifying divisions in an already polarized era. As AI becomes more embedded in daily life, ensuring models grasp uncomfortable truths will be crucial for their credibility.