OpenAI Staff Ready to Follow Sam Altman Out the Door: Mass Exodus Threatened if Board Not Replaced

Hundreds of OpenAI employees threatened to quit the leading artificial intelligence company and join Microsoft on Monday.

They followed OpenAI co-founder Sam Altman, who said he was starting an AI subsidiary at Microsoft after his shock dismissal from the company whose ChatGPT chatbot has driven the rapid rise of AI technology.

In the letter, some of OpenAI’s most senior employees threatened to leave the company if the board was not replaced.

“Your actions have made it clear that you cannot control OpenAI,” said the letter, which was first published by Wired.

On the list of signatories was Ilya Sutskever, the company's chief scientist and one of the four board members who voted for Altman's ouster.

Also included was senior executive Mira Murati, who was named interim CEO when Altman was fired on Friday but was herself replaced over the weekend.

“Microsoft has assured us that this new subsidiary will have positions for all OpenAI employees if we choose to join,” the letter says.

According to reports, as many as 500 of OpenAI’s 770 employees signed the letter.

OpenAI has named Emmett Shear, the former CEO of Amazon-owned streaming platform Twitch, as its interim CEO, despite pressure from Microsoft and other major investors to bring Altman back.

Altman was fired by the startup's board on Friday after U.S. media cited concerns that he had underestimated the dangers of its technology and steered the company away from its stated mission – claims his successor has denied.

Microsoft CEO Satya Nadella wrote on X that Altman "will join Microsoft to lead a new advanced artificial intelligence research group" along with OpenAI co-founder Greg Brockman and other colleagues.

Altman rose to fame last year with the launch of ChatGPT, which ignited a race in AI research and development and drew billions of dollars in investment to the field.

His dismissal prompted several other high-profile departures from the company, as well as reported pressure from investors to bring him back.

"We're going to build something new and it will be incredible. The mission continues," Brockman said, naming former research director Jakub Pachocki, head of AI risk assessment Aleksander Madry and longtime researcher Szymon Sidor among those joining him.

But OpenAI stood by its decision in a memo sent to employees Sunday night, saying that “Sam’s behavior and lack of transparency … undermined the board’s ability to effectively oversee the company,” The New York Times reported.

– Firing “handled very poorly” –

Shear confirmed his appointment as OpenAI’s interim CEO on Monday on X, but denied reports that Altman had been fired over safety concerns surrounding the use of AI technology.

“Today I received a call asking me to consider a once-in-a-lifetime opportunity: to be the interim CEO of @OpenAI. After consulting with my family and thinking about it for just a few hours, I accepted,” he wrote.

“Before I took on the task, I checked the reasons for the change. The board didn’t remove Sam because of any particular disagreement about safety; their reasoning was completely different.”

“It is clear that the process and communication surrounding Sam’s removal has been handled very poorly, which has seriously damaged our trust,” Shear added.

Global technology titan Microsoft has invested more than 10 billion dollars in OpenAI and has deployed the AI pioneer’s technology across its own products.

Microsoft’s Nadella added in a message that “we look forward to getting to know and working with Emmett Shear and the new leadership team at OAI.”

“We remain committed to our partnership with OpenAI and are confident in our product roadmap,” he said.

OpenAI is competing fiercely with tech giants such as Google and Meta, as well as startups such as Anthropic and Stability AI, in developing AI models.

Generative AI platforms like ChatGPT are trained on vast amounts of data to answer questions, even complex ones, in human-like language.

They are also used to create and manipulate images.

But there have been warnings about the dangers of the technology’s misuse – from blackmailing people with “deepfake” images to image manipulation and malicious disinformation.
