“We are still working towards a resolution and we remain optimistic,” read a memo from OpenAI Chief Strategy Officer Jason Kwon.
“By resolution, we mean bringing back Sam, Greg, Jakub, Szymon, Aleksander and other colleagues (sorry if I missed you!) and remaining the place where people who want to work on AGI [artificial general intelligence] research, safety, products and policy can do their best work.”
OpenAI President Greg Brockman, Director of Research Jakub Pachocki, researcher Szymon Sidor, and AI safety lead Aleksander Madry had all announced their resignations in solidarity with Altman in the hours after his sacking.
Kwon added that he hoped to be able to update staff further on Sunday.
The effort to bring back Altman is reportedly being led by tech giant Microsoft, the largest investor in the company behind the popular ChatGPT software. Venture capital firm Thrive Capital, OpenAI’s second-largest investor, is also pushing for his return.
Altman told US media he is considering a return to OpenAI, but as preconditions he would require the board that ousted him to be replaced and a new governance structure put in place. According to reports, he is alternatively considering founding a new company with former OpenAI colleagues.
In an internal company memo on Saturday, OpenAI Chief Operating Officer Brad Lightcap clarified that Altman’s firing was due to a “breakdown of communications” and not “malfeasance.”
Altman’s sudden sacking on Friday surprised Silicon Valley and OpenAI’s own staff. Speculation emerged that the decision stemmed from internal disagreements over AI safety, and reportedly from concern that OpenAI software such as ChatGPT was being commercialized prematurely.
OpenAI was established in 2015 to research “safe and beneficial” artificial intelligence technology, with the goal of developing “highly autonomous systems that outperform humans at most economically valuable work.” Its flagship ChatGPT software has become popular, but critics question the safety implications of highly sophisticated artificial intelligence.