In a recent interview, Geoffrey Hinton, often dubbed the “Godfather of AI,” expressed frustration that many technology executives are minimizing the dangers posed by artificial intelligence, while highlighting one notable exception: Demis Hassabis, the CEO of Google DeepMind. Hinton, who resigned from Google in 2023 to speak more freely about AI risks, pointed to Hassabis as a leader genuinely committed to tackling these challenges. His commentary comes amid growing debate over AI’s trajectory, with Hinton warning that unchecked development could lead to catastrophic outcomes.
Hinton’s remarks, detailed in a Business Insider article published on July 27, 2025, underscore a divide in the industry. He accused figures like OpenAI’s Sam Altman and Meta’s Mark Zuckerberg of downplaying risks to maintain investor enthusiasm and competitive edges. In contrast, Hinton praised Hassabis for his balanced approach, noting that the DeepMind co-founder has consistently advocated for safety measures even as he pushes boundaries in AI research.
Hassabis’s Track Record in AI Safety
This isn’t the first time Hinton has singled out Hassabis. Their shared history dates back to DeepMind’s early days, when ethical considerations were baked into the company’s mission. Hassabis, a former chess prodigy turned neuroscientist, has led initiatives like the development of AlphaFold, which revolutionized protein structure prediction and earned him a share of the 2024 Nobel Prize in Chemistry. Yet, as posts on X (formerly Twitter) discussing recent AI forums highlight, Hassabis has also been vocal about the need for global collaboration to mitigate risks, emphasizing that AI’s benefits must not overshadow its potential harms.
Pandaily recently reported on Hinton’s appearance at the World Artificial Intelligence Conference (WAIC) 2025 in Shanghai, where he called for international cooperation on AI safety. There, he echoed his concerns about AI surpassing human intelligence, estimating a 10% to 20% chance of human extinction within three decades, a figure he previously cited in a December 2024 Guardian article. Hassabis, attending similar events, has aligned with this view by prioritizing “mission-driven” AI work over aggressive talent poaching, as covered in a July 27, 2025, Business Standard piece.
Broader Industry Implications
The praise for Hassabis arrives at a pivotal moment. Industry insiders note that Google DeepMind under his leadership has invested heavily in safety research, including projects to align AI with human values. This contrasts with competitors like Meta, whose hiring tactics Hassabis himself described as “rational” but potentially shortsighted, per the same Business Standard report. Hinton’s warnings build on his earlier statements, such as a June 2025 CNBC interview in which he discussed AI’s potential sentience and its displacement of humans.
Sentiment on X reflects a mix of alarm and admiration, with July 2025 posts referencing Hinton’s Nobel-related warnings and stressing the urgency of regulating superintelligent AI. One thread highlighted Hassabis’s 2024 knighthood and his role in steering Google’s AGI ambitions, positioning him as a counterweight to more reckless pursuits.
Path Forward: Regulation and Collaboration
Hinton proposes concrete steps, including mandating that AI companies allocate one-third of their computing resources to safety, an idea he has floated in X posts dating back to 2024. He argues this would let researchers experiment with weaker general-intelligence systems before scaling up, reducing existential risks. A Forbes article from December 2024 elaborates on this, advocating global regulation and education as bulwarks against catastrophe.
Hassabis’s influence extends to policy, as evidenced by his recognition as the UK’s most influential tech figure in a July 2025 Computer Weekly feature. Insiders suggest his approach could inspire broader industry shifts, especially as a 2023 statement on AI’s extinction risks, covered by The New York Times and signed by leaders including Hassabis, gains renewed attention.
Balancing Innovation and Caution
Ultimately, Hinton’s endorsement of Hassabis signals hope amid the pessimism. While Hinton has put the odds of an existential threat above 50% absent intervention, as shared in older X discussions, collaborative efforts could still steer AI toward safer paths. For industry leaders, that means prioritizing long-term safeguards over short-term gains, a lesson Hassabis embodies.
As AI advances accelerate, the dialogue between pioneers like Hinton and Hassabis will shape not just technology, but humanity’s future. Their shared emphasis on caution offers a blueprint for responsible innovation in an era of unprecedented change.