DeepMind CEO Warns AI Could Amplify Societal Divisions Like Social Media

Demis Hassabis, CEO of Google DeepMind, warns that unchecked AI development could amplify societal divisions in the way social media has, prioritizing engagement over ethics. He urges safeguards against misinformation and polarization, points to AI's "jagged intelligence," and estimates AGI could arrive within 5-10 years, arguing that ethical innovation is essential if AI is to benefit society.
Written by Elizabeth Morrison

In a recent interview, Demis Hassabis, the CEO of Google DeepMind, issued a stark warning about the trajectory of artificial intelligence, drawing parallels to the pitfalls that have plagued social media platforms. Hassabis emphasized that without careful stewardship, AI could exacerbate societal divisions much like social networks have, prioritizing engagement over ethical considerations. “We have to make sure that AI is built in a way that benefits society,” he told Business Insider in an article published on September 15, 2025. This comes amid growing concerns in the tech industry about AI’s rapid deployment, where profit motives might overshadow long-term societal impacts.

Hassabis, a Nobel laureate in Chemistry for his AI-driven work on protein structure prediction, highlighted how social media's algorithms amplified misinformation and polarization. He argued that AI developers must learn from these errors, integrating safeguards from the outset to prevent similar outcomes. Recent advancements at DeepMind, such as the DolphinGemma model aimed at decoding animal communication, underscore the potential for positive applications, but Hassabis stressed the need for responsible innovation.

The Perils of Unchecked AI Development

Industry insiders are increasingly echoing Hassabis’s concerns, pointing to AI’s potential to manipulate information flows on a scale far beyond social media. For instance, posts on X (formerly Twitter) from users like Tsarathustra have discussed Hassabis’s predictions on AI evolving into “agentic systems” that combine planning with multimodal understanding, potentially within 2-4 years. These systems, if not regulated, could autonomously propagate divisive content, amplifying biases inherent in training data.

Moreover, Hassabis has repeatedly addressed AI's inconsistencies, a theme covered in a Times of India article from August 2025. He noted that while AI excels at complex tasks like winning math Olympiads, it falters on basic high school problems, a phenomenon he labels "jagged intelligence" and sees as a barrier to artificial general intelligence (AGI). This inconsistency could produce unreliable AI tools that, like social media echo chambers, reinforce flawed narratives.

AGI Timelines and Societal Preparedness

Looking ahead, Hassabis estimates AGI—AI surpassing human cognitive abilities across tasks—could arrive in 5-10 years, a timeline he shared in interviews reported by India Today. Yet, he warns society is unprepared, advocating for global collaboration to mitigate risks. X posts from accounts like Chubby highlight this sentiment, quoting Hassabis on the need for proactive measures to ensure AGI’s benefits outweigh harms, such as job displacement or ethical dilemmas.

DeepMind’s initiatives, including the Scalable Instructable Multiworld Agent (SIMA) for virtual environments, demonstrate progress toward more adaptable AI. However, as detailed in a Wikipedia entry updated in August 2025, these tools rely on natural language instructions, raising questions about misuse in real-world scenarios akin to social media’s algorithmic manipulations.

Balancing Innovation with Ethical Imperatives

Hassabis’s views contrast with optimistic outlooks from peers like OpenAI’s Sam Altman, who predicts AGI within five years but focuses on compute scaling, as noted in X discussions from AI Notkilleveryoneism Memes. Hassabis, however, prioritizes consistency and societal good, dismissing claims of current AI possessing “PhD-level intelligence” as nonsense, per a recent Prudent AI post on X. He argues true AGI must reason continuously without trivial errors.

In a TIME article from August 2025, a summit involving DeepMind and OpenAI staff underscored economic risks like inequality exacerbated by AI. Hassabis advocates for AI to tackle grand challenges, such as curing diseases, but insists on avoiding social media’s profit-driven missteps.

Industry Responses and Future Directions

Tech giants are responding in different ways. Meta's aggressive recruitment of AI talent, with compensation reportedly running into the millions as detailed in a Times of India piece, highlights competitive pressures that could sideline ethics. Hassabis's blunt retort emphasizes DeepMind's edge in retaining talent through mission-driven work.

Ultimately, as AI integrates deeper into daily life, Hassabis’s warning serves as a call to action. Drawing from social media’s lessons, the industry must prioritize transparency and equity to harness AI’s potential without repeating past mistakes. Recent news on X, including from SingularityNET, reinforces that scaling alone won’t achieve AGI—ethical frameworks are essential for a beneficial future.
