Major AI Breakthrough: OpenAI Researchers Sound Alarm Before Altman’s Departure

(Reuters) – Before OpenAI CEO Sam Altman spent four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board ousted Altman, the poster child of generative AI, the two sources said. Before his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft in solidarity with their fired leader.

The sources cited the letter as one factor in a longer list of board grievances that led to Altman’s firing, among them concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote it did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staff both the project, called Q*, and a letter to the board sent before the weekend’s events, one of the people said. An OpenAI spokesperson said the message, sent by longtime executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s search for so-called artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said, speaking on condition of anonymity because they were not authorized to speak on behalf of the company. Though the model was only performing math at the level of grade-school students, acing such tests made the researchers very optimistic about Q*’s future success, the source said.

Reuters was unable to independently verify the properties of Q* claimed by the researchers.

‘VEIL OF IGNORANCE’

Researchers consider mathematics to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math – where there is only one right answer – would imply that the AI has greater reasoning capabilities resembling human intelligence. This could be applied, for example, to novel scientific research, AI researchers believe.

Unlike a calculator, which can solve a limited number of operations, an AGI can generalize, learn, and understand.

In their letter to the board, the researchers flagged AI’s prowess and potential danger, the sources said, without specifying the exact safety concerns noted in the letter. There has long been debate among computer scientists about the danger posed by highly intelligent machines – for instance, whether they might decide that the destruction of humanity was in their interest.

The researchers also flagged work by an “AI scientist” team, the existence of which multiple sources confirmed. The group, formed by combining the earlier “Code Gen” and “Math Gen” teams, was exploring how to optimize existing AI models to improve their reasoning and ultimately perform scientific work, one of the people said.

Altman led the effort to make ChatGPT one of the fastest-growing software applications in history and drew from Microsoft the investment – and computing resources – necessary to move closer to AGI.

In addition to announcing a slew of new tools at a demonstration this month, Altman teased last week at a summit of world leaders in San Francisco that he believed major advances were on the horizon.

“Four times now in the history of OpenAI, most recently in the last couple of weeks, I’ve been able to be in a room where we’re kind of pushing back the veil of ignorance and pushing forward the frontier of discovery, and doing that is the professional honor of a lifetime,” he said at the Asia-Pacific Economic Cooperation Summit.

A day later, the board fired Altman.

(Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; Editing by Kenneth Li and Lisa Shumaker)