OpenAI Researchers Warned Board of Major AGI Breakthrough in Sam Altman’s Final Days as CEO
A recent report has revealed surprising information in the ongoing OpenAI saga. According to Reuters, prior to Sam Altman’s dismissal by the OpenAI board, a group of researchers within the organization sent a letter to the directors, alerting them to a significant AI advancement that could potentially pose a threat to humanity. This development is being described as a step toward artificial general intelligence (AGI), or superintelligence.
For the uninitiated, AGI or AI superintelligence means that a machine’s cognitive capabilities exceed those of humans. Such a system could solve complex problems faster than people can, especially problems that require creativity or innovation. This is still a long way from sentience, the stage at which an AI achieves consciousness and can operate independently of human input and the knowledge in its training material.
OpenAI researchers made a breakthrough towards AGI
A previously unreported letter and AI algorithm were key developments before the board ousted Altman, the poster boy for generative AI, two sources told Reuters. Before his unexpected return late Tuesday, more than 700 employees had threatened to walk out and join backer Microsoft in solidarity with their fired leader.
Sources cite the letter as one factor in a longer list of complaints against the board that led to Altman’s firing. Reuters could not verify a copy of the letter. The researchers who wrote the letter did not immediately respond to requests for comment.
According to one source, longtime executive Mira Murati mentioned the project, called Q*, to employees on Wednesday and said a letter had been sent to the board ahead of that weekend’s events.
After the story was published, an OpenAI spokesperson said Murati had told employees what the media was about to report, but did not comment on the accuracy of the report.
The maker of ChatGPT had made progress on Q* (pronounced Q-Star), which some insiders believe could be a breakthrough in the startup’s quest for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as artificial intelligence systems that are smarter than humans.
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said, speaking on condition of anonymity because they were not authorized to speak for the company. Although the model was only performing math at an elementary-school level, its success on such tests left the researchers very optimistic about Q*’s future, the source said.
Reuters stressed that it could not independently verify the properties of Q* claimed by the researchers.
The path to artificial general intelligence
Researchers consider mathematics to be a frontier of generative AI development. Currently, generative AI is good at writing and translating languages by statistically predicting the next word, and its answers to the same question can vary greatly. But conquering the ability to do math, where there is only one right answer, would imply that AI has greater reasoning abilities resembling human intelligence. AI researchers believe this could be applied, for example, to novel scientific research.
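The contrast drawn above can be illustrated with a toy sketch. The corpus, function names, and bigram approach below are invented purely for illustration; real generative models use large neural networks, but the core idea of sampling the next word from a learned probability distribution is the same, and the sampling step is why answers to the same prompt can vary:

```python
import random
from collections import defaultdict

# Toy next-word predictor: count bigrams in a tiny corpus, then sample
# the next word in proportion to how often it followed the previous one.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # e.g. counts["the"]["cat"] == 2

def predict_next(word):
    """Sample a follower of `word` weighted by observed frequency."""
    followers = counts[word]
    words = list(followers)
    weights = [followers[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Sampling is probabilistic: repeated calls with the same prompt can
# return different words ("cat", "mat", or "fish" after "the") --
# unlike arithmetic, where there is exactly one right answer.
print(predict_next("the"))
```

Because the prediction is a weighted draw rather than a lookup, the same input can legitimately produce different outputs across runs, which is exactly the behavior the paragraph describes.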
Unlike a calculator, which can solve a limited number of operations, an AGI can generalize, learn, and understand.
In their letter to the board, the researchers pointed out the capabilities and potential dangers of the AI, the sources said, without elaborating on the specific safety issues mentioned in the letter. There has long been debate among computer scientists about the dangers of superintelligent machines, for example whether they might decide it is in their best interest to destroy humanity.
Against this backdrop, Altman led efforts to make ChatGPT one of the fastest-growing software applications in history, drawing from Microsoft the investment — and computing resources — necessary to move closer to superintelligence, or AGI.
In addition to announcing a slew of new tools at a demonstration this month, Altman teased last week at a meeting of world leaders in San Francisco that he believed AGI was on the horizon.
“Four times now in the history of OpenAI, most recently in the last couple of weeks, I’ve been able to be in a room where we’re kind of pushing back the veil of ignorance and pushing forward the frontier of discovery, and doing that is the professional honor of a lifetime,” he said at the Asia-Pacific Economic Cooperation Summit.
A day later, the board fired Altman.
(via Reuters)