Sam Altman, co-founder and CEO of OpenAI, was fired by the company's board of directors, leading to a chaotic weekend of executive and investor backlash. (AFP)

Shock Ouster of Sam Altman from OpenAI: The Doomed Mission Unveiled

When Sam Altman took the stage in San Francisco on Nov. 6, OpenAI could fairly have been described as a healthy company with a competent, commercially successful, and globally beloved founder.

The co-founder and CEO had sparked the global race for AI supremacy, helped OpenAI outpace much larger competitors, and was by then routinely compared to Bill Gates and Steve Jobs. Eleven days later he was fired, setting off a chaotic weekend in which executives and investors loyal to Altman agitated for his return. The board ignored them and instead hired Emmett Shear, the former CEO of Twitch.

At the company’s first developer conference on Nov. 6, Altman’s popularity seemed universal. Attendees applauded raptly as he touted the company’s accomplishments: 2 million customers, including “more than 92% of the Fortune 500.” A big reason for that was Microsoft Corp., which had invested $13 billion in the company and put Altman at the center of a corporate overhaul that has seen it leapfrog rivals such as Google and Amazon in certain categories of cloud services, revamp its Bing search engine, and take a leading position in the hottest category of software. Now Altman called Microsoft CEO Satya Nadella on stage and asked him how Microsoft felt about the partnership. Nadella started to answer and then burst out laughing, as if the answer to the question was absurdly obvious. “We love you guys,” he finally said once he calmed down, crediting Altman with “building something magical.”

But if customers and investors were happy, one constituency remained deeply skeptical of Altman and the idea of a commercial AI company: Altman’s own board. Although the board included Altman and a close ally, OpenAI President Greg Brockman, it was ultimately dominated by researchers who feared the company’s expansion was out of control, perhaps even dangerous.

This put the researchers at odds with Altman and Brockman, who both argued that OpenAI had grown its business out of necessity. Every time a customer asks OpenAI’s ChatGPT chatbot a question, it consumes massive amounts of expensive computing power, so much so that the company has struggled to keep up with the explosion in user demand. It has been forced to limit how many times users can query its most powerful AI models each day. Things got so bad in the days after the developer conference that Altman announced the company was suspending signups for its paid ChatGPT Plus service indefinitely.

From Altman’s point of view, raising more money and finding new sources of revenue were essential. But some board members with ties to the AI-skeptical wing of effective altruism saw that push as being in tension with the risks posed by advanced AI. Many effective altruists, adherents of a quasi-philosophical movement that directs donations toward combating existential risks, have imagined scenarios in which a terrorist group uses a powerful AI system to create, say, a bioweapon. Or, in the worst case, the AI spontaneously turns bad, takes over weapons systems, and tries to destroy human civilization. Not everyone takes these scenarios seriously, and other AI leaders, including Altman, have argued that such concerns can be managed and that the potential benefits of making AI widely available outweigh the risks.

On Friday, however, the skeptics prevailed, and one of the most famous founders alive was suddenly relieved of duty. Adding to the sense of chaos, the board made little effort to ensure a smooth transition. In its explosive statement announcing the decision, the board suggested that Altman had been dishonest, saying he was “not consistently candid in his communications.” It did not specify any misconduct, and OpenAI COO Brad Lightcap later said in a memo to employees that Altman had not been accused of wrongdoing and that his firing was not about safety but rather a “communication breakdown.” The board had also acted without talking to Microsoft, leaving Nadella “on edge” over the abrupt dismissal of a key business partner, according to a person familiar with his thinking. Nadella was “blindsided” by the news, this person said.

According to people familiar with his plans, Altman was planning a rival company even as investors agitated for his reinstatement. According to a person familiar with the discussions, some investors were weighing writing down the value of their OpenAI holdings over the weekend. The potential move, which would make it harder for the company to raise additional funds and free OpenAI investors to back Altman’s theoretical rival, appeared to be aimed at pressuring the board to step down and bring Altman back. Meanwhile, on Saturday night, scores of OpenAI employees, including top executives, began tweeting heart emojis, a gesture of solidarity that seemed equal parts love for Altman and rebuke of the board.

A source familiar with Nadella’s thinking said the Microsoft CEO supported Altman’s potential return and would also be open to backing a new Altman venture. The source predicted that if the board did not reconsider, a large number of OpenAI engineers would likely resign in the coming days. Adding to the sense of uncertainty: OpenAI’s offices are closed all this week. Microsoft and Altman declined to comment. When reached by phone Saturday, Brockman, who resigned shortly after Altman was fired, said: “Super head down now, sorry.” Then he hung up.

A philosophical disagreement wouldn’t normally destroy a company that had negotiated an $86 billion sale of stock to investors, but OpenAI was no ordinary company. Altman had built it as a nonprofit with a for-profit subsidiary, which he ran and for which he aggressively courted venture capitalists and corporate partners. The novel — and, as OpenAI critics see it, flawed — structure put Altman, Microsoft, and all of the company’s customers at the mercy of a board dominated by skeptics of the company’s expansion.

OpenAI’s original goal when it was founded by Altman and Elon Musk, among others, was to “advance digital intelligence in the way that is most likely to benefit humanity as a whole,” as the 2015 announcement put it. The organization would not seek financial gain for its own sake but would instead act as a check on for-profit efforts, ensuring that AI was developed “as an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.” Musk, who had warned of the risks an uncontrolled artificial intelligence system could pose to humanity, provided much of the nonprofit’s initial funding. Other backers included investor Peter Thiel and LinkedIn co-founder Reid Hoffman.

Early on, Musk helped recruit Ilya Sutskever as the company’s chief scientist. The hiring was a coup: Sutskever is an industry legend, known for neural network research that began at the University of Toronto and continued at Google, where he worked in the company’s Google Brain lab.

In a podcast earlier this month, Musk said he had decided to fund OpenAI and personally recruit Sutskever away from Google because he was concerned the search giant was developing AI without regard for safety. Musk’s hope was to slow Google down. He added that hiring Sutskever ended his friendship with Google co-founder Larry Page. Musk himself later became estranged from Altman, however, leaving OpenAI in 2018 and cutting it off from further funding.

Altman needed money, and venture capital firms and big tech companies were eager to back ambitious AI efforts. To tap that capital, he created a new subsidiary of the nonprofit, which he described as a “capped profit” company. The for-profit arm of OpenAI would raise money from investors but promised that once their returns reached a certain level — initially 100 times the investment of early backers — anything beyond that would flow to the nonprofit.

Despite his position as founder and CEO, Altman has said he owns no stock in the company, framing the decision as consistent with its philanthropic mission. But of course this would-be charity had also sold 49 percent of its equity to Microsoft, which even so was not granted a single seat on its board. Altman suggested in an interview earlier this year that Microsoft’s only real means of controlling the company would be to disconnect the servers OpenAI leased. “I think they will honor their agreement,” he said at the time.

The ultimate power at the company rested with the board of directors, which included Altman, Sutskever, and President Greg Brockman. The other members were Quora Inc. CEO Adam D’Angelo, tech entrepreneur Tasha McCauley, and Helen Toner, director of strategy at Georgetown’s Center for Security and Emerging Technology. McCauley and Toner both had ties to the effective altruism movement: Toner previously worked at Open Philanthropy, and McCauley sits on the boards of Effective Ventures and 80,000 Hours.

OpenAI is not the only ambitious technology project housed within a nonprofit. The Mozilla web browser, the Signal messaging app, and the Linux operating system are all developed by nonprofits, and before Twitter was sold to Musk, its co-founder Jack Dorsey complained that the social network was beholden to its investors. But open source projects are notoriously difficult to manage, and OpenAI operated at a larger scale, and with greater ambitions, than any such organization before it. That, along with reports of the company’s extreme financial success, created a backlash that, in retrospect, was almost inevitable.

In February, Musk lamented on X that OpenAI was no longer “a counterweight to Google” but had become “a closed-source, maximum-profit company effectively controlled by Microsoft.” He echoed those complaints during a recent appearance on Lex Fridman’s podcast, adding that the company’s pursuit of profit “wasn’t good karma.”

At the same time, Altman pursued side projects that had the potential to enrich him and his investors but were not under the control of OpenAI’s safety-conscious board. There was Worldcoin, his eyeball-scanning crypto project, which launched in July and was touted as a possible universal basic income system to offset AI-related job losses. Altman also explored starting his own artificial intelligence chipmaker, pitching Middle Eastern sovereign wealth funds for an investment that could reach tens of billions of dollars, according to a person familiar with the plan. And he floated a potential multibillion-dollar investment from SoftBank Group Corp., led by Japanese billionaire and tech investor Masayoshi Son, in a company he planned to set up with former Apple design guru Jony Ive to make AI-oriented hardware.

These efforts, along with the growing success of the for-profit arm, put Altman at odds with Sutskever, who had become increasingly vocal about safety concerns. In July, Sutskever formed a new team at the company focused on controlling future “superintelligent” AI systems. Tensions intensified in October when, according to a source familiar with the relationship, Altman moved to reduce Sutskever’s role at the company, a move that rubbed Sutskever the wrong way, and the friction spread to the company’s board.

At the Nov. 6 event, Altman made several announcements that infuriated Sutskever and people sympathetic to his views, the source said. Among them: custom versions of ChatGPT that let anyone create chatbots to perform specialized tasks. OpenAI said it would eventually allow these user-created custom GPTs to function independently. Competing companies offer similar autonomous agents, but they are a red flag for safety advocates.

In the following days, Sutskever brought his concerns to the board. According to an account Brockman posted to X, Sutskever texted Altman on the evening of Nov. 16, inviting him to join a video call with the board the next day. Brockman was not invited. At noon the next day, Altman joined the call and was told he was fired. Minutes later, the announcement went out and chaos ensued.

The uncertainty that continued over the weekend threatened OpenAI’s rising valuation and Microsoft’s stock price, which fell sharply after the market closed on Friday. “It’s a disruption that could slow the pace of innovation, and it’s not good for Microsoft,” said Rishi Jaluria, an analyst at RBC Capital Markets. “OpenAI progressed at breakneck speed.”

Meanwhile, companies dependent on OpenAI’s software were hastily eyeing competing technologies, such as Meta Platforms Inc.’s large language model, known as Llama. “As a start-up, we are now worried. Do we continue with them or not?” said Amr Awadallah, CEO of Vectara, which builds chatbots for business data.

He said the choice between sticking with OpenAI and seeking out a competitor would depend on assurances from the company and from Microsoft. “We need Microsoft to speak up and say everything is stable and we will continue to focus on our customers and partners,” Awadallah said. “We need to hear something like that to restore our confidence.”

As Altman and his allies tried to stage a comeback Sunday, Musk posted on X that he was “very concerned. Ilya has a good moral compass and is not looking for power. He wouldn’t take such drastic action unless he felt it was absolutely necessary.” Sutskever later told staff that the decision to hire Shear was a clear sign that Altman was not coming back.
