Current/Former OpenAI & Google DeepMind Employees Pen Open Letter Warning of AI Risks

Current and former employees of two of the leading AI firms have penned an open letter warning of the risks and calling for more transparency.
Written by Matt Milano

    AI is alternately hailed as the greatest invention in human history and condemned as the greatest existential threat humanity has ever faced. In the midst of the AI revolution, companies are racing to deploy ever more powerful models, leaving many to question whether proper safeguards are being implemented.

    Current and former employees of OpenAI and Google DeepMind have released an open letter emphasizing the risks associated with AI development:

    We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. AI companies themselves have acknowledged these risks, as have governments across the world and other AI experts.

    The letter goes on to highlight the strong financial incentives AI firms have to commercialize their technology while simultaneously avoiding effective oversight. The authors say they don’t believe traditional corporate governance models are up to the task of properly overseeing AI development.

    To make matters worse, the authors say companies “have only weak obligations to share some” of the vast amount of information and data they possess with governments, adding that they “do not think they can all be relied upon to share it voluntarily.”

    The letter then outlines a controversial practice that has come to light, in which AI companies—specifically OpenAI—have resorted to overly broad confidentiality agreements, even tying a person’s equity to their post-exit silence:

    So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues. Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated. Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues.

    The letter calls on AI companies to commit to four basic principles:

    • That the company will not enter into or enforce any agreement that prohibits “disparagement” or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit;
    • That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise;
    • That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company’s board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected;
    • That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process for anonymously raising concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public.

    The open letter is the latest example of the growing backlash against AI firms.

    Concerns About OpenAI

    OpenAI has been in the news recently for its high-profile mishandling of the launch of its “Sky” voice, a voice that sounded suspiciously like Scarlett Johansson’s. The issue turned into a full-scale debacle when Johansson revealed that she had rebuffed multiple attempts by OpenAI and CEO Sam Altman to strike a deal to use her voice prior to Sky’s release.

    OpenAI also went through an embarrassing boardroom coup that resulted in the ouster and subsequent rehiring of Altman as CEO. Interestingly, one of the concerns voiced by the board members who led the coup was that Altman was rushing to commercialize OpenAI’s tech rather than remaining committed to the company’s original goal of safely developing AI.

    In late May, OpenAI’s safety team, which was tasked with addressing any potential threat AI may pose to humanity, was disbanded, with the team’s co-leads exiting the company. One went on to slam OpenAI, saying its “safety culture and processes have taken a backseat to shiny products.”

    Concerns About Google

    OpenAI is not the only company facing criticism; Google’s own AI efforts have been mired in controversy. The company (in)famously fired Dr. Timnit Gebru and Margaret Mitchell, the co-leads of its AI ethics team. Following Dr. Gebru’s firing, CEO Sundar Pichai made matters worse with a tone-deaf response to the situation.

    Following OpenAI and Microsoft’s agreement that saw ChatGPT power Microsoft Bing and Copilot, Google employees criticized Pichai for a “rushed, botched, and myopic” release of the company’s competing AI efforts.

    Pichai himself has had harsh words about the state of the company’s AI, saying Gemini’s controversial responses were “completely unacceptable and we got it wrong.”

    The Open Letter’s Authors Are Right

    Given just the above examples, and without even considering the legal ramifications, copyright issues, and other questions currently being decided in the courts, it’s hard to argue that the open letter’s authors are wrong. AI is clearly a powerful technology, and one with enormous potential for misuse, or worse.

    Until there are proper legal safeguards in place to regulate AI, the companies developing the technology must do more to ensure its safe development, protect those who raise concerns, and foster an environment where those concerns are heard.
