A departing OpenAI executive focused on safety is raising concerns about the company on his way out the door.
Jan Leike, who resigned from his role leading the company’s “superalignment” team this week, said in a thread on X Friday that he disagreed with OpenAI leadership’s “core priorities” and had “reached a breaking point.”
“Alignment” or “superalignment” is a term used in the artificial intelligence field to refer to work on training AI systems to act in line with human needs and priorities. Leike joined OpenAI in 2021, and last summer the company announced he would co-lead the superalignment team focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.”
However, Leike said Friday that in recent months the team had been under-resourced and “sailing against the wind.”
“Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done,” he said on X, adding that Thursday was his last day at the startup. “Building smarter-than-human machines is an inherently dangerous endeavor … But over the past years, safety culture and processes have taken a backseat to shiny products.”
OpenAI CEO Sam Altman responded to Leike’s post saying the company is committed to AI safety.
“(I)’m super appreciative of @janleike’s contributions to openai’s alignment research and safety culture, and very sad to see him leave,” Altman said on X. “(H)e’s right we have a lot more to do; we are committed to doing it. (I)’ll have a longer post in the next couple of days.”
Leike’s exit, which he announced Wednesday, comes amid a broader leadership shuffle at OpenAI. His resignation followed an announcement on Tuesday by OpenAI co-founder and chief scientist Ilya Sutskever, who also helped lead the superalignment team, that he would leave the company.
Sutskever said he was leaving to work on a “project that is very personally meaningful to me.” But his exit was notable given the central role he played in the dramatic firing — and return — of Altman last year, when he voted with the board to remove Altman as chief executive.
CNN contributor Kara Swisher previously reported that Sutskever had been concerned that Altman was pushing AI technology “too far, too fast.” But days after Altman’s ouster, Sutskever had a change of heart: He signed an employee letter calling for the entire board to resign and for Altman to return.
Still, questions about how — and how quickly — to develop and publicly release AI technology may have continued to cause tension within the company in the months after Altman regained control of the firm. The executive exits come after OpenAI announced this week that it would make its most powerful AI model yet, GPT-4o, available for free to the public through ChatGPT. The technology will make ChatGPT more like a digital personal assistant, capable of real-time spoken conversations.
“I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics,” Leike wrote in his X thread on Friday. “These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”
In the wake of Sutskever’s and Leike’s exits, OpenAI confirmed to CNN that in recent weeks it had begun to dissolve its superalignment team, integrating members of the team across its various research groups instead. A spokesperson for the company said that structure would help OpenAI better achieve its superalignment objectives.
CNN’s Samantha Delouya contributed to this report.
This story has been updated with additional information.