OpenAI’s Long-Term Existential Safety Team Is No More

OpenAI's "superalignment team" dedicated to studying and preventing potential existential threats posed by AI has completely disbanded....
OpenAI’s Long-Term Existential Safety Team Is No More
Written by Matt Milano
  • OpenAI’s “superalignment team” dedicated to studying and preventing potential existential threats posed by AI has completely disbanded.

    OpenAI was founded on the premise of developing AI responsibly, in a way that would benefit humanity rather than pose a threat to it. Concerns began mounting last year that the company had lost its way in its rush to commercialize its innovations, and those concerns were behind the board’s firing of CEO Sam Altman.

    Although Altman’s firing was reversed just days later, concerns about OpenAI’s commitment to the safe development of AI have persisted. Ilya Sutskever, a co-founder of OpenAI and co-leader of the safety team, announced earlier this week that he was leaving the company. Sutskever was one of the board members who took the lead in firing Altman. Jan Leike, the team’s other co-leader, announced his resignation on X the same day.

    Similarly, Daniel Kokotajlo, a philosophy PhD student who was part of the company’s governance team, left OpenAI “due to losing confidence that it would behave responsibly around the time of AGI,” or artificial general intelligence. Interestingly, Kokotajlo believes AGI will happen by 2029, with a slight chance it could happen as early as this year.

    According to Wired, with the safety team’s co-leads both resigning, the team has essentially been shut down, with the remnants being absorbed into other teams.

    In a lengthy thread on X, Leike provided more insight into the reasons for his resignation and the broader situation within OpenAI.

    I joined because I thought OpenAI would be the best place in the world to do this research.

    However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.

    I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.

    These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.

    Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.

    Building smarter-than-human machines is an inherently dangerous endeavor.

    OpenAI is shouldering an enormous responsibility on behalf of all of humanity.

    But over the past years, safety culture and processes have taken a backseat to shiny products.

    Jan Leike (@janleike) | May 17, 2024

    Leike’s take on the situation within OpenAI, and specifically the company’s focus on “shiny products” over safety, is a damning indictment of the current leader in the AI space. The safety team may have been “sailing against the wind,” but its absence will surely be felt, and will hopefully not have disastrous consequences.
