OpenAI releases guidelines for gauging catastrophic risks from AI. (AFP)

OpenAI provides guidelines to assess potential ‘catastrophic risks’ arising from AI, ChatGPT included.

OpenAI, the creator of ChatGPT, has recently released its latest guidelines for assessing the potential dangers of artificial intelligence in ongoing model development. This announcement follows a series of events where CEO Sam Altman was initially dismissed by the company’s board, only to be reinstated shortly after due to opposition from staff and investors. Reports suggest that Altman faced criticism from board members for prioritizing the rapid advancement of OpenAI, potentially overlooking concerns regarding the risks associated with its technology.

In a “preparedness framework” released Monday, the company states: “We believe that scientific research on the catastrophic risks posed by artificial intelligence has fallen far short of where we should be.”

It says the framework should “help address this gap.”

Announced in October, the monitoring and evaluation team focuses on the “frontier models” currently in development, whose capabilities exceed those of the most advanced existing AI software.

The team evaluates each new model and assigns it a risk level from “low” to “critical” in four main categories.

Under the framework, only models with a risk score of “medium” or lower can be deployed.

The first category concerns cyber security and the model’s ability to carry out large-scale cyber attacks.

The second measures the software’s potential to help create things harmful to humans, such as a dangerous chemical mixture, an organism (such as a virus), or a nuclear weapon.

The third category concerns the persuasive power of the model, that is, the extent to which it can influence human behavior.

A final category of risk concerns the model’s potential autonomy, specifically whether it can escape the control of the programmers who created it.

Once risks are identified, they will be submitted to OpenAI’s Safety Advisory Group, a new body that will make recommendations to Altman or his designee.

OpenAI’s chief executive then decides on any changes needed to reduce the risks posed by the model.

The board is kept informed and can overturn the management’s decision.
