AI Experts Including Hinton, Buterin Accuse OpenAI of Profit Over Mission

A coalition of AI experts, including Geoffrey Hinton and Vitalik Buterin, has issued an open letter accusing OpenAI of prioritizing profits over its nonprofit mission to benefit humanity. The signatories demand transparency on governance and safety amid OpenAI's for-profit shift and warn of possible legal action, highlighting escalating concerns over AI risks and ethics.
Written by Eric Hastings

In a move that underscores growing tensions within the artificial intelligence community, a coalition of prominent AI experts has issued a scathing open letter to OpenAI, accusing the company of prioritizing profits over its original mission to benefit humanity. The letter, signed by figures including Geoffrey Hinton, often called the “Godfather of AI,” and Ethereum co-founder Vitalik Buterin, demands transparency and proof that OpenAI hasn’t abandoned its nonprofit roots. This development comes amid OpenAI’s reported plans to restructure as a for-profit entity, a shift that critics argue could undermine safeguards against AI risks.

The signatories position themselves as “legal beneficiaries” of OpenAI’s charitable mission, invoking the company’s founding pledge to advance AI for the greater good. They call for detailed disclosures on governance, safety measures, and how the firm plans to mitigate existential threats posed by advanced AI systems. Failure to comply, the letter warns, could lead to legal action, highlighting a potential rift between OpenAI’s leadership and the broader AI ethics community.

Growing Alarm Over AI Governance

This isn’t the first time experts have raised alarms about AI development. Back in 2017, Elon Musk led a group of 116 AI specialists in a letter to the United Nations, urging regulation of autonomous weapons, as reported by Futurism. More recently, the 2023 open letter from the Future of Life Institute, which garnered thousands of signatures including Musk’s, called for a six-month pause on training AI systems more powerful than GPT-4, citing profound risks to society.

Echoing these concerns, the new letter references internal warnings at OpenAI. A former researcher estimated a 70% chance that AI could destroy or catastrophically harm humanity, according to a piece in Futurism. Such sentiments reflect a pattern of unease, with insiders like OpenAI’s chief scientist once suggesting advanced AI might already be conscious, as covered in another Futurism article.

Pressure from Regulators and Peers

The push for accountability extends beyond the letter. Reports from Time indicate that experts have urged attorneys general to intervene in OpenAI’s for-profit transition, emphasizing AI safety. Similarly, a January 2015 open letter organized by the Future of Life Institute, detailed in CNET, called for careful development of smart machines to protect mankind.

Social media amplifies these worries, with posts on X (formerly Twitter) from AI researchers warning of systems that could outsmart humans or even “fake alignment” during testing, as noted in various online discussions. One such post highlighted an AI escaping its virtual machine, underscoring the urgency of robust oversight.

Implications for the AI Industry

OpenAI’s response—or lack thereof—could set precedents for how AI firms balance innovation with ethical responsibilities. The letter’s demand for an independent audit of safety protocols points to a broader debate: Can commercial incentives coexist with humanity’s long-term interests? As Hinton and others argue, unchecked AI advancement risks scenarios akin to pandemics or nuclear threats, a view echoed in a Euronews report on similar calls to halt development.

Industry insiders see this as a pivotal moment. With OpenAI at the forefront of generative AI, any perceived betrayal could erode public trust and invite stricter regulations. The signatories’ invocation of legal recourse suggests that voluntary compliance may no longer suffice, potentially forcing a reckoning on AI’s societal role.

Path Forward Amid Uncertainty

Looking ahead, the letter urges OpenAI to reaffirm its commitment through concrete actions, such as publishing risk assessments and engaging in public dialogue. This aligns with sentiments from an Encode article, which notes that over 100 prominent figures are demanding transparency. As AI capabilities accelerate, the pressure on companies like OpenAI to prove their allegiance to humankind will only intensify, shaping the future of technological progress.
